CLoG: Benchmarking Continual Learning of Image Generation Models
- URL: http://arxiv.org/abs/2406.04584v1
- Date: Fri, 7 Jun 2024 02:12:29 GMT
- Title: CLoG: Benchmarking Continual Learning of Image Generation Models
- Authors: Haotian Zhang, Junting Zhou, Haowei Lin, Hang Ye, Jianhua Zhu, Zihao Wang, Liangcai Gao, Yizhou Wang, Yitao Liang
- Abstract summary: This paper advocates for shifting the research focus from classification-based CL to CLoG.
We adapt three types of existing CL methodologies (replay-based, regularization-based, and parameter-isolation-based) to generative tasks.
Our benchmarks and results yield intriguing insights that can be valuable for developing future CLoG methods.
- Score: 29.337710309698515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual Learning (CL) poses a significant challenge in Artificial Intelligence, aiming to mirror the human ability to incrementally acquire knowledge and skills. While extensive research has focused on CL within the context of classification tasks, the advent of increasingly powerful generative models necessitates the exploration of Continual Learning of Generative models (CLoG). This paper advocates for shifting the research focus from classification-based CL to CLoG. We systematically identify the unique challenges presented by CLoG compared to traditional classification-based CL. We adapt three types of existing CL methodologies (replay-based, regularization-based, and parameter-isolation-based) to generative tasks and introduce comprehensive benchmarks for CLoG that feature great diversity and broad task coverage. Our benchmarks and results yield intriguing insights that can be valuable for developing future CLoG methods. Additionally, we publicly release a codebase designed to facilitate easy benchmarking and experimentation in CLoG at https://github.com/linhaowei1/CLoG. We believe that shifting the research focus to CLoG will benefit the continual learning community and illuminate the path for next-generation AI-generated content (AIGC) in a lifelong learning paradigm.
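To make the three adapted method families concrete, below is a minimal sketch of the replay-based variant for a generative model in PyTorch. It assumes unconditional generation where each task's dataset yields image tensors; the `gen_loss` callable (e.g. a VAE or diffusion loss) and all hyperparameters are illustrative placeholders, not the released CLoG API.

```python
import random

import torch
from torch.utils.data import DataLoader

def train_with_replay(model, tasks, gen_loss, buffer_size=1000, epochs=1):
    """Replay-based CL for a generator: mix a bounded buffer of samples
    from earlier tasks into each new task's batches, so the model is
    never updated on the current distribution alone."""
    buffer = []  # raw samples retained from completed tasks
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for dataset in tasks:  # tasks arrive sequentially
        for _ in range(epochs):
            for batch in DataLoader(dataset, batch_size=64, shuffle=True):
                if buffer:
                    replay = random.sample(buffer, min(len(buffer), 16))
                    batch = torch.cat([batch, torch.stack(replay)])
                loss = gen_loss(model, batch)  # e.g. VAE ELBO or diffusion loss
                opt.zero_grad()
                loss.backward()
                opt.step()
        # retain a fixed random subset of this task for future replay
        quota = max(1, buffer_size // len(tasks))
        keep = random.sample(range(len(dataset)), min(len(dataset), quota))
        buffer.extend(dataset[i] for i in keep)
```

Under the same assumptions, a regularization-based variant would instead add a penalty term (e.g. EWC) to `gen_loss`, and a parameter-isolation variant would allocate task-specific parameters rather than a replay buffer.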
Related papers
- CLDyB: Towards Dynamic Benchmarking for Continual Learning with Pre-trained Models [22.032582616029707]
We describe CL on dynamic benchmarks (CLDyB), a general computational framework for evaluating CL methods reliably.
We first conduct a joint evaluation of multiple state-of-the-art CL methods, leading to a set of commonly challenging and generalizable task sequences.
We then conduct separate evaluations of individual CL methods using CLDyB, discovering their respective strengths and weaknesses.
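As a rough illustration of what a dynamic benchmark search might look like, here is a toy greedy loop that grows a task sequence which is jointly hard for a set of CL methods. This is our own simplification, not CLDyB's actual selection procedure; `evaluate` is an assumed callable returning a method's average accuracy after learning a sequence.

```python
def build_hard_sequence(candidate_tasks, cl_methods, evaluate, length=5):
    """Greedily grow a task sequence that is jointly difficult for a
    set of CL methods (toy sketch, not CLDyB's actual procedure)."""
    sequence, remaining = [], list(candidate_tasks)
    for _ in range(length):
        scored = []
        for task in remaining:
            trial = sequence + [task]
            mean_acc = sum(evaluate(m, trial) for m in cl_methods) / len(cl_methods)
            scored.append((mean_acc, task))
        # the task that drags average accuracy down the most is "hardest"
        _, hardest = min(scored, key=lambda pair: pair[0])
        sequence.append(hardest)
        remaining.remove(hardest)
    return sequence
```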
arXiv Detail & Related papers (2025-03-06T17:49:13Z)
- Continual Learning Should Move Beyond Incremental Classification [51.23416308775444]
Continual learning (CL) is the sub-field of machine learning concerned with accumulating knowledge in dynamic environments.
Here, we argue that maintaining such a focus limits both theoretical development and practical applicability of CL methods.
We identify three fundamental challenges: (C1) the nature of continuity in learning problems, (C2) the choice of appropriate spaces and metrics for measuring similarity, and (C3) the role of learning objectives beyond classification.
arXiv Detail & Related papers (2025-02-17T15:40:13Z)
- Position: Continual Learning Benefits from An Evolving Population over An Unified Model [4.348086726793516]
This study introduces a novel Population-based Continual Learning (PCL) framework.
PCL extends Continual Learning to the architectural level by maintaining and evolving a population of neural network architectures.
PCL outperforms state-of-the-art rehearsal-free CL methods that employ a unified model.
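A heavily simplified sketch of the population idea appears below: train every member on each incoming task, keep the fittest, and refill the population with perturbed clones. The `train`, `evaluate`, and `mutate` hooks are hypothetical stand-ins; PCL's actual architectural evolution operators are more involved.

```python
import copy
import random

def population_cl(population, tasks, train, evaluate, survivors=4):
    """Toy population-based CL: train each member on the new task, keep
    the fittest, and refill with perturbed clones of the survivors."""
    for task in tasks:
        for model in population:
            train(model, task)
        population.sort(key=lambda m: evaluate(m, task), reverse=True)
        elite = population[:survivors]
        clones = [mutate(copy.deepcopy(random.choice(elite)))
                  for _ in range(len(population) - survivors)]
        population = elite + clones
    return population[0]  # best member after the final task

def mutate(model):
    # placeholder: a real operator would perturb the architecture
    # (width, depth, choice of blocks) rather than return the clone
    return model
```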
arXiv Detail & Related papers (2025-02-10T07:21:44Z)
- High-Order Fusion Graph Contrastive Learning for Recommendation [16.02820746003461]
Graph contrastive learning (GCL)-based methods typically implement contrastive learning (CL) by creating contrastive views through various data augmentation techniques.
Existing CL-based methods use traditional CL objectives to capture self-supervised signals.
We propose a High-order Fusion Graph Contrastive Learning (HFGCL) framework for recommendation.
arXiv Detail & Related papers (2024-07-29T04:30:38Z)
- CLEO: Continual Learning of Evolving Ontologies [12.18795037817058]
Continual learning (CL) aims to instill in intelligent systems the human capacity for lifelong learning.
General learning processes are not limited to acquiring new information; they also involve refining existing information.
CLEO is motivated by the need for intelligent systems to adapt to real-world changes over time.
arXiv Detail & Related papers (2024-07-11T11:32:33Z)
- GCC: Generative Calibration Clustering [55.44944397168619]
We propose a novel Generative Calibration Clustering (GCC) method to incorporate feature learning and augmentation into the clustering procedure.
First, we develop a discriminative feature alignment mechanism to discover the intrinsic relationship between real and generated samples.
Second, we design a self-supervised metric learning objective to generate more reliable cluster assignments.
arXiv Detail & Related papers (2024-04-14T01:51:11Z)
- Exploiting CLIP for Zero-shot HOI Detection Requires Knowledge Distillation at Multiple Levels [52.50670006414656]
We employ CLIP, a large-scale pre-trained vision-language model, for knowledge distillation on multiple levels.
To train our model, CLIP is utilized to generate HOI scores for both global images and local union regions.
The model achieves strong performance, even comparable to that of some fully-supervised and weakly-supervised methods.
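For intuition, here is how CLIP can produce such multi-level scores using the public `clip` package: a set of action prompts is scored against both the full image and a cropped human-object union region. The prompt template and box handling are our assumptions, not the paper's exact pipeline.

```python
import torch
import clip  # OpenAI's CLIP package: https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def hoi_scores(image_path, union_box, actions):
    """Score action prompts against the global image and the local
    human-object union crop (left, upper, right, lower)."""
    img = Image.open(image_path)
    views = [img, img.crop(union_box)]  # global view + union region
    pixels = torch.stack([preprocess(v) for v in views]).to(device)
    text = clip.tokenize([f"a photo of a person {a}" for a in actions]).to(device)
    with torch.no_grad():
        img_f = model.encode_image(pixels)
        txt_f = model.encode_text(text)
        img_f = img_f / img_f.norm(dim=-1, keepdim=True)
        txt_f = txt_f / txt_f.norm(dim=-1, keepdim=True)
    return img_f @ txt_f.T  # [2, num_actions]: row 0 global, row 1 local
```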
arXiv Detail & Related papers (2023-09-10T16:27:54Z)
- From MNIST to ImageNet and Back: Benchmarking Continual Curriculum Learning [9.104068727716294]
Continual learning (CL) is one of the most promising trends in machine learning research.
We introduce two novel CL benchmarks that involve multiple heterogeneous tasks from six image datasets.
We additionally structure our benchmarks so that tasks are presented in increasing and decreasing order of complexity.
arXiv Detail & Related papers (2023-03-16T18:11:19Z)
- Global Knowledge Calibration for Fast Open-Vocabulary Segmentation [124.74256749281625]
We introduce a text diversification strategy that generates a set of synonyms for each training category.
We also employ a text-guided knowledge distillation method to preserve the generalizable knowledge of CLIP.
Our proposed model achieves robust generalization performance across various datasets.
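One simple way to realize the synonym-diversification idea is to query WordNet through NLTK, as sketched below; the paper's actual synonym source may differ.

```python
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

def diversify(category: str) -> set[str]:
    """Return the category name plus its WordNet synonyms."""
    names = {category}
    for synset in wn.synsets(category):
        for lemma in synset.lemmas():
            names.add(lemma.name().replace("_", " "))
    return names

# e.g. diversify("sofa") -> {"sofa", "couch", "lounge"}
```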
arXiv Detail & Related papers (2023-03-16T09:51:41Z)
- Using Representation Expressiveness and Learnability to Evaluate Self-Supervised Learning Methods [61.49061000562676]
We introduce Cluster Learnability (CL) to assess learnability.
CL is measured as the performance of a KNN classifier trained to predict labels obtained by clustering the representations with K-means.
We find that CL better correlates with in-distribution model performance than other competing recent evaluation schemes.
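The metric is straightforward to implement with scikit-learn; the sketch below follows the description above, with the train/test split, cluster count, and k as illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def cluster_learnability(reps: np.ndarray, n_clusters=10, k=5) -> float:
    """KNN accuracy at predicting K-means pseudo-labels on held-out
    representations; higher means more learnable structure."""
    pseudo = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(reps)
    x_tr, x_te, y_tr, y_te = train_test_split(reps, pseudo, test_size=0.5)
    knn = KNeighborsClassifier(n_neighbors=k).fit(x_tr, y_tr)
    return knn.score(x_te, y_te)
```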
arXiv Detail & Related papers (2022-06-02T19:05:13Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updated on new class data, they suffer from catastrophic forgetting: the model can no longer clearly distinguish old-class data from new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
- An Empirical Study of Graph Contrastive Learning [17.246488437677616]
Graph Contrastive Learning establishes a new paradigm for learning graph representations without human annotations.
We identify several critical design considerations within a general GCL paradigm, including augmentation functions, contrasting modes, contrastive objectives, and negative mining techniques.
To foster future research and ease the implementation of GCL algorithms, we develop an easy-to-use library PyGCL, featuring modularized CL components, standardized evaluation, and experiment management.
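For reference, the InfoNCE loss, a common choice among the contrastive objectives surveyed here, can be written in a few lines of plain PyTorch (shown independently of PyGCL's own API):

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    """z1, z2: [N, d] embeddings of two augmented views of the same N
    nodes/graphs. Matching rows are positives; all other rows negatives."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau  # [N, N] cosine-similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)  # diagonal entries are positives
```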
arXiv Detail & Related papers (2021-09-02T17:43:45Z)
- A Survey on Curriculum Learning [48.36129047271622]
Curriculum learning (CL) is a training strategy that trains a machine learning model from easier data to harder data.
As an easy-to-use plug-in, the CL strategy has demonstrated its power in improving the generalization capacity and convergence rate of various models.
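The strategy reduces to a small training skeleton: score each example's difficulty, then train on progressively larger, harder subsets. The difficulty measure and stage schedule below are illustrative choices.

```python
def curriculum_train(model, dataset, difficulty, train_step, stages=3):
    """Train easy-to-hard: stage s uses the easiest s/stages fraction."""
    ordered = sorted(dataset, key=difficulty)  # ascending difficulty
    for s in range(1, stages + 1):
        subset = ordered[: len(ordered) * s // stages]
        for example in subset:
            train_step(model, example)
```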
arXiv Detail & Related papers (2020-10-25T17:15:04Z)
This list is automatically generated from the titles and abstracts of papers on this site.