PromptCCD: Learning Gaussian Mixture Prompt Pool for Continual Category Discovery

ECCV 2024

1Visual AI Lab, The University of Hong Kong
2The University of Edinburgh

In Continual Category Discovery (CCD), the model receives a labelled set at the initial stage and is tasked with discovering categories from unlabelled data in the subsequent stages. There are two major challenges in CCD: (1) catastrophic forgetting, a well-known issue in continual learning, and (2) discovering categories in the unlabelled data, which may contain instances of both known and novel categories.

Abstract

We tackle the problem of Continual Category Discovery (CCD), which aims to automatically discover novel categories in a continuous stream of unlabeled data while mitigating the challenge of catastrophic forgetting – an open problem that persists even in conventional, fully supervised continual learning.

To address this challenge, we propose PromptCCD, a simple yet effective framework that utilizes a Gaussian Mixture Model (GMM) as a prompting method for CCD. At the core of PromptCCD lies the Gaussian Mixture Prompting (GMP) module, which acts as a dynamic pool that updates over time to facilitate representation learning and prevent forgetting during category discovery. Moreover, GMP enables on-the-fly estimation of the number of categories, allowing PromptCCD to discover categories in unlabeled data without prior knowledge of how many there are. We extend the standard evaluation metric for Generalized Category Discovery (GCD) to CCD and benchmark state-of-the-art methods on diverse public datasets. PromptCCD significantly outperforms existing methods, demonstrating its effectiveness.
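One way to picture the on-the-fly estimation of the number of categories is model selection over GMMs: fit mixtures with different component counts and keep the best-scoring one. The sketch below uses the Bayesian Information Criterion (BIC) as the selection rule; this is an illustrative assumption for exposition, not necessarily the exact criterion used in the paper, and the function name `estimate_num_categories` is ours.

```python
# Hedged sketch: estimate the number of categories in unlabeled features
# by fitting GMMs with varying component counts and selecting the count
# that minimizes BIC. Illustrative only; the paper's procedure may differ.
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_num_categories(features, k_min=2, k_max=10, seed=0):
    """Return the component count whose GMM minimizes BIC on `features`."""
    best_k, best_bic = k_min, np.inf
    for k in range(k_min, k_max + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="diag",
                              random_state=seed).fit(features)
        bic = gmm.bic(features)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k

# Toy usage: three well-separated clusters standing in for feature groups.
rng = np.random.default_rng(0)
feats = np.concatenate(
    [rng.normal(c, 0.1, size=(50, 2)) for c in (0.0, 3.0, 6.0)])
print(estimate_num_categories(feats))  # -> 3
```

Because BIC penalizes extra components, the search settles on the smallest mixture that explains the data well, which is what makes it usable without a known category count.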

Framework


Overview of our proposed PromptCCD framework and Gaussian Mixture Prompting (GMP) module. PromptCCD continually discovers new categories while retaining previously discovered ones by learning a dynamic GMP pool that adapts the vision foundation model for CCD. Specifically, the GMP module scores the input's class token features under the fitted GMM via their log-likelihood and uses the means of the top-k GMM components as prompts to guide the foundation model. Finally, to retain previously learned prompts, we generate prototype samples from the GMM fitted at time step t − 1 and include them when fitting the GMM at time step t.
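The two GMP operations described above — selecting top-k component means as prompts, and replaying prototypes sampled from the previous step's GMM — can be sketched as follows. This is a minimal illustration using scikit-learn; the function names are ours, and in the actual framework the selected means would be prepended to the transformer's token sequence as prompts.

```python
# Hedged sketch of the GMP module's two roles: (1) pick prompts as the
# means of the GMM components most likely for an input's [CLS] feature,
# and (2) sample prototype features from the step t-1 GMM for replay.
import numpy as np
from sklearn.mixture import GaussianMixture

def top_k_prompts(gmm, cls_feature, k=2):
    """Return means of the k components with the highest responsibility
    (posterior probability) for `cls_feature` (shape: [dim])."""
    resp = gmm.predict_proba(cls_feature[None, :])[0]  # [n_components]
    top = np.argsort(resp)[-k:][::-1]                  # best k, descending
    return gmm.means_[top]                             # [k, dim] prompts

def replay_prototypes(prev_gmm, n_samples=100):
    """Sample prototype features from the GMM fitted at step t-1;
    these are mixed into the data used to fit the GMM at step t."""
    samples, _ = prev_gmm.sample(n_samples)
    return samples

# Toy usage: random vectors standing in for ViT [CLS] features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 8))
gmm = GaussianMixture(n_components=5, random_state=0).fit(feats)
prompts = top_k_prompts(gmm, feats[0], k=2)
protos = replay_prototypes(gmm, n_samples=50)
print(prompts.shape, protos.shape)  # -> (2, 8) (50, 8)
```

Replaying samples drawn from the old mixture, rather than stored images, is what lets the pool carry forward previously discovered structure without keeping raw data.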

Performance

We evaluate PromptCCD w/ GMP on various benchmark datasets: CIFAR100, ImageNet-100, TinyImageNet, and Caltech-101 as generic datasets, and Aircraft, Stanford Cars, and CUB as fine-grained datasets. We also compare our method with representative CCD methods and adapt state-of-the-art GCD and supervised continual learning methods to our task. The results are shown below: our method consistently outperforms all compared methods.


BibTeX


    @inproceedings{cendra2024promptccd,
      author    = {Fernando Julio Cendra and Bingchen Zhao and Kai Han},
      title     = {PromptCCD: Learning Gaussian Mixture Prompt Pool for Continual Category Discovery},
      booktitle = {European Conference on Computer Vision},
      year      = {2024}
    }