Mitigating Catastrophic Forgetting and Mode Collapse in Text-to-Image Diffusion via Latent Replay
- URL: http://arxiv.org/abs/2509.10529v1
- Date: Thu, 04 Sep 2025 23:45:22 GMT
- Title: Mitigating Catastrophic Forgetting and Mode Collapse in Text-to-Image Diffusion via Latent Replay
- Authors: Aoi Otani
- Abstract summary: Continual learning is fundamental to natural intelligence. "Catastrophic forgetting" occurs when learning new tasks erases previously acquired knowledge. Latent Replay enables efficient continual learning for generative AI models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual learning -- the ability to acquire knowledge incrementally without forgetting previous skills -- is fundamental to natural intelligence. While the human brain excels at this, artificial neural networks struggle with "catastrophic forgetting," where learning new tasks erases previously acquired knowledge. This challenge is particularly severe for text-to-image diffusion models, which generate images from textual prompts. Additionally, these models face "mode collapse," where their outputs become increasingly repetitive over time. To address these challenges, we apply Latent Replay, a neuroscience-inspired approach, to diffusion models. Traditional replay methods mitigate forgetting by storing and revisiting past examples, typically requiring large collections of images. Latent Replay instead retains only compact, high-level feature representations extracted from the model's internal architecture. This mirrors the hippocampal process of storing neural activity patterns rather than raw sensory inputs, reducing memory usage while preserving critical information. Through experiments with five sequentially learned visual concepts, we demonstrate that Latent Replay significantly outperforms existing methods in maintaining model versatility. After learning all concepts, our approach retained 77.59% Image Alignment (IA) on the earliest concept, 14% higher than baseline methods, while maintaining diverse outputs. Surprisingly, random selection of stored latent examples outperforms similarity-based strategies. Our findings suggest that Latent Replay enables efficient continual learning for generative AI models, paving the way for personalized text-to-image models that evolve with user needs without excessive computational costs.
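The abstract describes the core mechanism: instead of replaying stored images, the method retains compact latent features from the model's internal layers and mixes randomly selected ones into training on new concepts. A minimal sketch of such a replay buffer is below; the class, its capacity parameter, and the reservoir-style replacement are illustrative assumptions, not the paper's actual implementation.

```python
import random

class LatentReplayBuffer:
    """Stores compact latent features (not raw images) from past concepts.

    Illustrative sketch of the Latent Replay idea from the abstract;
    names and buffer policy are hypothetical, not taken from the paper.
    """

    def __init__(self, capacity_per_concept=32):
        self.capacity = capacity_per_concept
        self.store = {}  # concept name -> list of (latent, prompt) pairs

    def add(self, concept, latent, prompt):
        bucket = self.store.setdefault(concept, [])
        if len(bucket) < self.capacity:
            bucket.append((latent, prompt))
        else:
            # Replace a random slot so the bucket stays a bounded,
            # roughly uniform sample of the concept's latents.
            bucket[random.randrange(self.capacity)] = (latent, prompt)

    def sample(self, k):
        # Random selection across all stored concepts; the abstract reports
        # that random selection outperforms similarity-based strategies.
        pool = [item for bucket in self.store.values() for item in bucket]
        return random.sample(pool, min(k, len(pool)))
```

During fine-tuning on a new concept, sampled latent/prompt pairs would be interleaved with the new data so the denoiser keeps seeing signals from earlier concepts, at the memory cost of latents rather than full images.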
Related papers
- Boosting Alignment for Post-Unlearning Text-to-Image Generative Models [55.82190434534429]
Large-scale generative models have shown impressive image-generation capabilities, propelled by massive data. This often inadvertently leads to the generation of harmful or inappropriate content and raises copyright concerns. We propose a framework that seeks an optimal model update at each unlearning iteration, ensuring monotonic improvement on both objectives.
arXiv Detail & Related papers (2024-12-09T21:36:10Z)
- Towards Lifelong Few-Shot Customization of Text-to-Image Diffusion [50.26583654615212]
Lifelong few-shot customization for text-to-image diffusion aims to continually generalize existing models for new tasks with minimal data.
In this study, we identify and categorize the catastrophic forgetting problems into two folds: relevant concepts forgetting and previous concepts forgetting.
Unlike existing methods that rely on additional real data or offline replay of original concept data, our approach enables on-the-fly knowledge distillation to retain the previous concepts while learning new ones.
arXiv Detail & Related papers (2024-11-08T12:58:48Z)
- Accurate Forgetting for All-in-One Image Restoration Model [3.367455972998532]
Machine Unlearning is a low-cost scheme for making a model forget private data it has memorized.
Inspired by this, we try to use this concept to bridge the gap between the fields of image restoration and security.
arXiv Detail & Related papers (2024-09-01T10:14:16Z)
- Memorized Images in Diffusion Models share a Subspace that can be Located and Deleted [15.162296378581853]
Large-scale text-to-image diffusion models excel in generating high-quality images from textual inputs.
Concerns arise as research indicates their tendency to memorize and replicate training data.
Efforts within the text-to-image community to address memorization explore causes such as data duplication, replicated captions, or trigger tokens.
arXiv Detail & Related papers (2024-06-01T15:47:13Z)
- Unveiling and Mitigating Memorization in Text-to-image Diffusion Models through Cross Attention [62.671435607043875]
Research indicates that text-to-image diffusion models replicate images from their training data, raising tremendous concerns about potential copyright infringement and privacy risks. We reveal that during memorization, the cross-attention tends to focus disproportionately on the embeddings of specific tokens. We introduce an innovative approach to detect and mitigate memorization in diffusion models.
arXiv Detail & Related papers (2024-03-17T01:27:00Z)
- Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models [12.188240438657512]
We derive a technique inspired by continual learning to selectively forget concepts in pretrained deep generative models.
Our method, dubbed Selective Amnesia, enables controllable forgetting where a user can specify how a concept should be forgotten.
arXiv Detail & Related papers (2023-05-17T10:53:58Z)
- ResMem: Learn what you can and memorize the rest [79.19649788662511]
We propose the residual-memorization (ResMem) algorithm to augment an existing prediction model.
By construction, ResMem can explicitly memorize the training labels.
We show that ResMem consistently improves the test set generalization of the original prediction model.
arXiv Detail & Related papers (2023-02-03T07:12:55Z)
- Associative Memories via Predictive Coding [37.59398215921529]
Associative memories in the brain receive and store patterns of activity registered by the sensory neurons.
We present a novel neural model for realizing associative memories based on a hierarchical generative network that receives external stimuli via sensory neurons.
arXiv Detail & Related papers (2021-09-16T15:46:26Z)
- Embracing New Techniques in Deep Learning for Estimating Image Memorability [0.0]
We propose and evaluate five alternative deep learning models to predict image memorability.
Our findings suggest that the key prior memorability network had overstated its generalizability and was overfit on its training set.
We make our new state-of-the-art model readily available to the research community, allowing memory researchers to make predictions about memorability on a wider range of images.
arXiv Detail & Related papers (2021-05-21T23:05:23Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
arXiv Detail & Related papers (2020-06-08T05:15:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the listed information and is not responsible for any consequences of its use.