ICE: Intrinsic Concept Extraction
from a Single Image via Diffusion Models

Visual AI Lab, The University of Hong Kong
In CVPR 2025
Conference highlight

Intrinsic Concept Extraction aims to extract object-level concepts together with their underlying intrinsic attributes, such as semantic category, colour, and material. The resulting representation of visual elements is detailed and interpretable, enabling a structured and comprehensive understanding of an image's components and supporting versatile downstream generative applications.

Abstract

The inherent ambiguity in defining visual concepts poses significant challenges for modern generative models, such as diffusion-based Text-to-Image (T2I) models, in accurately learning concepts from a single image. Existing methods lack a systematic way to reliably extract such interpretable, underlying intrinsic concepts. To address this challenge, we present ICE, short for Intrinsic Concept Extraction, a novel framework that exclusively utilizes a T2I model to automatically and systematically extract intrinsic concepts from a single image. ICE consists of two pivotal stages. In the first stage, ICE devises an automatic concept localization module to pinpoint relevant text-based concepts and their corresponding masks within a single image. This critical stage streamlines concept initialization and provides precise guidance for subsequent analysis. The second stage delves deeper into each identified mask, decomposing concepts into intrinsic components that capture specific visual characteristics and general components that represent broader categories. This decomposition facilitates a more granular understanding by dissecting concepts into detailed intrinsic attributes such as colour and material. Our framework demonstrates superior performance in extracting intrinsic concepts from a single image in an unsupervised manner.
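To make the two-stage pipeline concrete, here is a minimal Python sketch of the flow described above. The function names, stub bodies, and data structures are illustrative assumptions for exposition, not the authors' actual implementation.

from dataclasses import dataclass, field

@dataclass
class LocalizedConcept:
    """Stage One output: a text-based concept paired with its mask."""
    text: str     # e.g. "teapot"
    mask: object  # H x W mask over the input image (e.g. a numpy array)

@dataclass
class IntrinsicConcept:
    """Stage Two output: an object-level token plus intrinsic-attribute tokens."""
    object_token: str                                      # general component, e.g. "<teapot>"
    intrinsic_tokens: dict = field(default_factory=dict)   # e.g. {"colour": "<c*>", "material": "<m*>"}

def stage_one_localize(image, t2i_model):
    """Automatic concept localization (stub): pinpoint relevant text-based
    concepts and their corresponding masks within the image."""
    raise NotImplementedError("placeholder; see the paper for the actual procedure")

def stage_two_decompose(image, concept, t2i_model):
    """Structured concept learning (stub): decompose one masked concept
    into intrinsic and general components."""
    raise NotImplementedError("placeholder; see the paper for the actual procedure")

def ice(image, t2i_model):
    """End-to-end flow: localize concepts, then decompose each of them."""
    return [stage_two_decompose(image, c, t2i_model)
            for c in stage_one_localize(image, t2i_model)]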

How we define visual concepts

Our proposed framework, ICE, offers a unified and structured approach to automatically and systematically discovering intrinsic concepts within an image using a single T2I model. Unlike previous methods, ICE not only identifies object-level concepts but also decomposes them into intrinsic attributes such as colour and material (see the figure on the right), providing a more comprehensive and interpretable representation of visual concepts.
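As an illustration of how such a decomposed representation could be reused downstream, the snippet below remixes hypothetical learned tokens (e.g. "<teapot>", "<colour_1>", "<material_2>") into a new text prompt. The token names and prompt template are assumptions for illustration only, not actual ICE outputs.

# Hypothetical learned tokens for two objects extracted from one image.
concepts = {
    "teapot": {"object": "<teapot>", "colour": "<colour_1>", "material": "<material_1>"},
    "vase":   {"object": "<vase>",   "colour": "<colour_2>", "material": "<material_2>"},
}

# Compose a prompt that keeps the teapot's category and colour
# but borrows the vase's material for a downstream T2I generation.
prompt = (f"a photo of a {concepts['teapot']['object']} "
          f"in {concepts['teapot']['colour']} colour, "
          f"made of {concepts['vase']['material']}")
print(prompt)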

ICE method illustration

ICE framework


Qualitative results


Qualitative results of the ICE framework demonstrating its systematic concept discovery process. Column 1: Input images. Column 2: Extracted text-based concepts and their corresponding masks obtained from Stage One: Automatic Concept Localization. Columns 3 & 4: Generated images of learned object-level and intrinsic concepts derived from Stage Two: Structured Concept Learning.

BibTeX


        @inproceedings{cendra2025ICE,
          author    = {Fernando Julio Cendra and Kai Han},
          title     = {ICE: Intrinsic Concept Extraction from a Single Image via Diffusion Models},
          booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          year      = {2025}
        }