DyME: Dynamic Multi-Concept Erasure in Diffusion Models with Bi-Level Orthogonal LoRA Adaptation
- URL: http://arxiv.org/abs/2509.21433v1
- Date: Thu, 25 Sep 2025 15:16:17 GMT
- Title: DyME: Dynamic Multi-Concept Erasure in Diffusion Models with Bi-Level Orthogonal LoRA Adaptation
- Authors: Jiaqi Liu, Lan Zhang, Xiaoyong Yuan
- Abstract summary: Text-to-image diffusion models inadvertently reproduce copyrighted styles and protected visual concepts, raising legal and ethical concerns. Concept erasure has emerged as a safeguard, aiming to selectively suppress such concepts through fine-tuning. We propose DyME, an on-demand erasure framework that trains lightweight, concept-specific LoRA adapters and dynamically composes only those needed at inference.
- Score: 11.480659591569308
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text-to-image diffusion models (DMs) inadvertently reproduce copyrighted styles and protected visual concepts, raising legal and ethical concerns. Concept erasure has emerged as a safeguard, aiming to selectively suppress such concepts through fine-tuning. However, existing methods do not scale to practical settings where providers must erase multiple and possibly conflicting concepts. The core bottleneck is their reliance on static erasure: a single checkpoint is fine-tuned to remove all target concepts, regardless of the actual erasure needs at inference. This rigid design mismatches real-world usage, where requests vary per generation, leading to degraded erasure success and reduced fidelity for non-target content. We propose DyME, an on-demand erasure framework that trains lightweight, concept-specific LoRA adapters and dynamically composes only those needed at inference. This modular design enables flexible multi-concept erasure, but naive composition causes interference among adapters, especially when many or semantically related concepts are suppressed. To overcome this, we introduce bi-level orthogonality constraints at both the feature and parameter levels, disentangling representation shifts and enforcing orthogonal adapter subspaces. We further develop ErasureBench-H, a new hierarchical benchmark with brand-series-character structure, enabling principled evaluation across semantic granularities and erasure set sizes. Experiments on ErasureBench-H and standard datasets (e.g., CIFAR-100, Imagenette) demonstrate that DyME consistently outperforms state-of-the-art baselines, achieving higher multi-concept erasure fidelity with minimal collateral degradation.
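The abstract describes the core mechanism: one lightweight LoRA adapter per concept, composed on demand at inference, with bi-level orthogonality constraints at the parameter and feature levels to limit interference. The sketch below (PyTorch; class names, rank, and penalty forms are illustrative assumptions, not the authors' implementation) shows one way such composable adapters and the two orthogonality penalties could look.

```python
# Illustrative sketch only: per-concept LoRA adapters composed on demand, plus
# simple parameter-level and feature-level orthogonality penalties in the spirit
# of DyME's bi-level constraints. Names, rank, and penalty forms are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptLoRA(nn.Module):
    """Low-rank update (B @ A) for one erased concept on one linear layer."""
    def __init__(self, d_in: int, d_out: int, rank: int = 4):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, rank))        # up-projection, zero-init

    def delta(self) -> torch.Tensor:
        return self.B @ self.A                                  # (d_out, d_in) weight update

class ErasableLinear(nn.Module):
    """Frozen base linear layer with a bank of per-concept adapters."""
    def __init__(self, base: nn.Linear, concepts, rank: int = 4):
        super().__init__()
        self.base = base.requires_grad_(False)
        self.adapters = nn.ModuleDict({
            c: ConceptLoRA(base.in_features, base.out_features, rank) for c in concepts
        })

    def forward(self, x: torch.Tensor, erase=()) -> torch.Tensor:
        w = self.base.weight
        for c in erase:                       # compose only the adapters requested now
            w = w + self.adapters[c].delta()
        return F.linear(x, w, self.base.bias)

def parameter_orthogonality(adapters: nn.ModuleDict) -> torch.Tensor:
    """Penalize overlap between adapter update subspaces (parameter level)."""
    deltas = [a.delta() for a in adapters.values()]
    loss = torch.zeros(())
    for i in range(len(deltas)):
        for j in range(i + 1, len(deltas)):
            loss = loss + (deltas[i] * deltas[j]).sum().pow(2)  # Frobenius inner product
    return loss

def feature_orthogonality(shift_a: torch.Tensor, shift_b: torch.Tensor) -> torch.Tensor:
    """Penalize aligned representation shifts induced by two adapters (feature level)."""
    cos = F.cosine_similarity(shift_a.flatten(1), shift_b.flatten(1), dim=1)
    return cos.pow(2).mean()

# Usage sketch: erase only the concepts requested for this generation.
layer = ErasableLinear(nn.Linear(768, 768), concepts=["brand_x", "character_y"])
h = layer(torch.randn(2, 768), erase=["character_y"])
```

Because the base weights stay frozen and adapters are attached per request, a prompt that triggers no erasure sees the unmodified model; this is what distinguishes the on-demand design from a single statically fine-tuned checkpoint.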
Related papers
- Differential Vector Erasure: Unified Training-Free Concept Erasure for Flow Matching Models [49.10620605347065]
We propose Differential Vector Erasure (DVE), a training-free concept erasure method specifically designed for flow matching models. Our key insight is that semantic concepts are implicitly encoded in the directional structure of the velocity field governing the generative flow. During inference, DVE selectively removes concept-specific components by projecting the velocity field onto the differential direction, enabling precise concept suppression without affecting irrelevant semantics. A generic sketch of such a projection appears after the paper list below.
arXiv Detail & Related papers (2026-02-01T08:05:45Z)
- GrOCE: Graph-Guided Online Concept Erasure for Text-to-Image Diffusion Models [24.278300091974085]
Concept erasure aims to remove harmful, inappropriate, or copyrighted content from text-to-image diffusion models. We propose Graph-Guided Online Concept Erasure (GrOCE), a training-free framework that performs precise and adaptive concept removal.
arXiv Detail & Related papers (2025-11-17T04:47:16Z)
- CGCE: Classifier-Guided Concept Erasure in Generative Models [53.7410000675294]
Concept erasure has been developed to remove undesirable concepts from pre-trained models. Existing methods remain vulnerable to adversarial attacks that can regenerate the erased content. We introduce an efficient plug-and-play framework that provides robust concept erasure for diverse generative models.
arXiv Detail & Related papers (2025-11-08T05:38:18Z)
- EAR: Erasing Concepts from Unified Autoregressive Models [3.55166983092355]
We propose the Erasure Autoregressive Model (EAR), a fine-tuning method for effective and utility-preserving concept erasure in AR models. Specifically, we introduce a Windowed Gradient Accumulation (WGA) strategy to align patch-level decoding with erasure objectives. We also propose a novel benchmark, Erase Concept Generator and Visual Filter (ECGVF), aimed at providing a more rigorous and comprehensive foundation for evaluating concept erasure in AR models.
arXiv Detail & Related papers (2025-06-25T06:15:07Z)
- Interpretable Few-Shot Image Classification via Prototypical Concept-Guided Mixture of LoRA Experts [79.18608192761512]
Self-Explainable Models (SEMs) rely on Prototypical Concept Learning (PCL) to make their visual recognition processes more interpretable. We propose a Few-Shot Prototypical Concept Classification framework that mitigates two key challenges under low-data regimes: parametric imbalance and representation misalignment. Our approach consistently outperforms existing SEMs by a notable margin, with 4.2%-8.7% relative gains in 5-way 5-shot classification.
arXiv Detail & Related papers (2025-06-05T06:39:43Z)
- Set You Straight: Auto-Steering Denoising Trajectories to Sidestep Unwanted Concepts [12.04985139116705]
We introduce a fine-tuning framework, dubbed ANT, that guides denoising trajectories to avoid unwanted concepts. ANT is built on a key insight: reversing the condition direction of classifier-free guidance during mid-to-late denoising stages. For single-concept erasure, we propose an augmentation-enhanced weight saliency map, enabling more thorough and efficient erasure. For multi-concept erasure, our objective function offers a versatile plug-and-play solution that significantly boosts performance. One plausible reading of the guidance reversal is sketched after the paper list below.
arXiv Detail & Related papers (2025-04-17T09:29:30Z)
- CRCE: Coreference-Retention Concept Erasure in Text-to-Image Diffusion Models [19.205261933636645]
We introduce CRCE, a novel concept erasure framework. By explicitly modelling coreferential and retained concepts semantically, CRCE enables more precise concept removal. Experiments demonstrate that CRCE outperforms existing methods on diverse erasure tasks.
arXiv Detail & Related papers (2025-03-18T13:09:01Z)
- Modular Customization of Diffusion Models via Blockwise-Parameterized Low-Rank Adaptation [73.16975077770765]
Modular customization is essential for applications like concept stylization and multi-concept customization. Instant merging methods often cause identity loss and interference among individually merged concepts. We propose BlockLoRA, an instant merging method designed to efficiently combine multiple concepts while accurately preserving individual concepts' identity.
arXiv Detail & Related papers (2025-03-11T16:10:36Z)
- SPEED: Scalable, Precise, and Efficient Concept Erasure for Diffusion Models [56.83154571623655]
We introduce SPEED, an efficient concept erasure approach that directly edits model parameters. SPEED searches for a null space, a model editing space where parameter updates do not affect non-target concepts. We successfully erase 100 concepts within only 5 seconds. A minimal sketch of this null-space idea appears after the paper list below.
arXiv Detail & Related papers (2025-03-10T14:40:01Z)
- DuMo: Dual Encoder Modulation Network for Precise Concept Erasure [75.05165577219425]
We propose our Dual encoder Modulation network (DuMo), which achieves precise erasure of inappropriate target concepts with minimum impairment to non-target concepts. Our method achieves state-of-the-art performance on Explicit Content Erasure, Cartoon Concept Removal and Artistic Style Erasure, clearly outperforming alternative methods.
arXiv Detail & Related papers (2025-01-02T07:47:34Z)
- Reliable and Efficient Concept Erasure of Text-to-Image Diffusion Models [76.39651111467832]
We introduce Reliable and Efficient Concept Erasure (RECE), a novel approach that modifies the model in 3 seconds without necessitating additional fine-tuning.
To mitigate inappropriate content potentially represented by derived embeddings, RECE aligns them with harmless concepts in cross-attention layers.
The derivation and erasure of new representation embeddings are conducted iteratively to achieve a thorough erasure of inappropriate concepts.
arXiv Detail & Related papers (2024-07-17T08:04:28Z)
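The DVE entry above removes concept-specific components by projecting the velocity field onto a differential direction. The generic sketch below assumes that direction is obtained as the difference between concept-conditioned and concept-free velocity predictions; that construction, the function name, and the batching are illustrative assumptions, not details from the paper.

```python
# Generic sketch of projecting a concept-specific component out of a flow-matching
# velocity field (DVE entry above). The differential direction is assumed here to be
# the difference between concept-conditioned and concept-free velocity predictions.
import torch

def erase_concept_component(v: torch.Tensor,
                            v_with_concept: torch.Tensor,
                            v_without_concept: torch.Tensor) -> torch.Tensor:
    """Remove the component of v along the per-sample differential direction.

    All inputs are (batch, dim) velocity predictions at the current flow step.
    """
    d = v_with_concept - v_without_concept                     # differential direction
    d_hat = d / d.norm(dim=-1, keepdim=True).clamp_min(1e-8)   # unit direction per sample
    coeff = (v * d_hat).sum(dim=-1, keepdim=True)              # projection coefficient
    return v - coeff * d_hat                                    # velocity with the component removed
```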
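The ANT entry above hinges on reversing the condition direction of classifier-free guidance during mid-to-late denoising stages. The sketch below is one plausible reading of that statement; the step threshold and guidance scale are assumptions, not values or logic taken from the paper.

```python
# One plausible reading of "reversing the condition direction of classifier-free
# guidance during mid-to-late denoising stages" (ANT entry above). The threshold
# and guidance scale are illustrative assumptions.
import torch

def steered_noise_prediction(eps_uncond: torch.Tensor,
                             eps_concept: torch.Tensor,
                             step: int, num_steps: int,
                             guidance_scale: float = 7.5,
                             reverse_frac: float = 0.5) -> torch.Tensor:
    """Guided noise prediction at a given denoising step (step 0 = start of sampling).

    Early steps use standard classifier-free guidance; from `reverse_frac` of the
    schedule onward, the concept-conditioned direction is negated, steering the
    trajectory away from the unwanted concept.
    """
    direction = eps_concept - eps_uncond
    if step < reverse_frac * num_steps:                  # early denoising: usual CFG
        return eps_uncond + guidance_scale * direction
    return eps_uncond - guidance_scale * direction       # mid-to-late: reversed condition
```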
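The SPEED entry above edits parameters inside a null space so that non-target concepts are unaffected. The sketch below illustrates the general null-space-constrained editing idea; how the raw edit direction is computed here is an assumption and not SPEED's actual objective.

```python
# Generic sketch of null-space-constrained parameter editing (SPEED entry above):
# restrict a weight update so outputs for preserved, non-target features are unchanged.
# The way the raw edit is built below is an illustrative assumption.
import torch

def null_space_projector(K_preserve: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Projector onto the null space of the preserved-feature matrix.

    K_preserve: (n_preserve, d_in) features that must remain unaffected.
    Returns P of shape (d_in, d_in) with K_preserve @ P ~= 0.
    """
    _, S, Vh = torch.linalg.svd(K_preserve, full_matrices=True)
    rank = int((S > eps * S.max()).sum())     # numerical rank of the preserved set
    V_null = Vh[rank:].T                      # null-space basis, (d_in, d_in - rank)
    return V_null @ V_null.T

# Usage: cancel a target concept's response while leaving preserved features untouched.
d_in, d_out = 64, 32
W = torch.randn(d_out, d_in)                  # one weight matrix to edit
K_preserve = torch.randn(10, d_in)            # non-target concept features
k_target = torch.randn(d_in)                  # feature of the concept to erase
raw_delta = -torch.outer(W @ k_target, k_target) / k_target.dot(k_target)
P = null_space_projector(K_preserve)
W_edited = W + raw_delta @ P                  # update confined to the preserved null space
assert torch.allclose(W_edited @ K_preserve.T, W @ K_preserve.T, atol=1e-3)
```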