The inherent ambiguity in defining visual concepts poses significant challenges for modern generative models, such as diffusion-based Text-to-Image (T2I) models, in accurately learning concepts from a single image. Existing methods lack a systematic way to reliably extract the underlying, interpretable intrinsic concepts.
To address this challenge, we present ICE, short for Intrinsic Concept Extraction, a novel framework that relies solely on a T2I model to automatically and systematically extract intrinsic concepts from a single image. ICE consists of two pivotal stages.
In the first stage, ICE employs an automatic concept-localization module to pinpoint relevant text-based concepts and their corresponding masks within the image. This stage streamlines concept initialization and provides precise guidance for the subsequent analysis.
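As a rough illustration only, the sketch below shows the kind of interface such a localization stage might expose; `propose_concept_names` and `segment_concept` are hypothetical placeholders standing in for the T2I-model-driven proposal and masking steps, not ICE's actual implementation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class LocalizedConcept:
    name: str          # text-based concept name, e.g. "mug"
    mask: np.ndarray   # binary mask of shape (H, W) over the input image

def propose_concept_names(image: np.ndarray) -> list[str]:
    # Placeholder: in ICE this would be derived from the T2I model itself;
    # here we return a fixed list purely for illustration.
    return ["object"]

def segment_concept(image: np.ndarray, name: str) -> np.ndarray:
    # Placeholder: produce a mask for the named concept (e.g. from the
    # model's cross-attention maps). Here, a trivial full-image mask.
    return np.ones(image.shape[:2], dtype=bool)

def localize_concepts(image: np.ndarray) -> list[LocalizedConcept]:
    """Stage 1: pinpoint text-based concepts and their masks in one image."""
    return [
        LocalizedConcept(name, segment_concept(image, name))
        for name in propose_concept_names(image)
    ]
```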
The second stage delves deeper into each identified mask, decomposing the concept into intrinsic components, which capture specific visual characteristics, and general components, which represent broader categories.
This decomposition yields a fine-grained understanding of each concept in terms of detailed intrinsic attributes, such as color and material.
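Continuing the sketch above, the decomposition stage can be pictured as learning one token per component; the attribute axes and the `learn_token` helper below are assumptions made for illustration (e.g. a textual-inversion-style optimization restricted to the masked region), not the paper's concrete method.

```python
from dataclasses import dataclass, field

@dataclass
class DecomposedConcept:
    general: str  # broad category token, e.g. "<mug>"
    # Intrinsic attribute tokens, e.g. {"color": "<color-7>", "material": "<material-2>"}
    intrinsic: dict[str, str] = field(default_factory=dict)

def learn_token(image: np.ndarray, mask: np.ndarray, axis: str) -> str:
    # Placeholder for token optimization over the masked region only;
    # returns a pseudo-token name for illustration.
    return f"<{axis}-token>"

def decompose_concept(image: np.ndarray, concept: LocalizedConcept) -> DecomposedConcept:
    """Stage 2: split a masked concept into general and intrinsic components."""
    result = DecomposedConcept(general=learn_token(image, concept.mask, axis="category"))
    for axis in ("color", "material"):
        # Each intrinsic attribute is assumed to be learned as its own token.
        result.intrinsic[axis] = learn_token(image, concept.mask, axis=axis)
    return result
```

Chaining the two stages, `[decompose_concept(image, c) for c in localize_concepts(image)]`, mirrors the overall pipeline.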
Our framework demonstrates superior performance on intrinsic concept extraction from a single image, operating in a fully unsupervised manner.