Unified Concept Editing in Diffusion Models
- URL: http://arxiv.org/abs/2308.14761v1
- Date: Fri, 25 Aug 2023 17:59:59 GMT
- Title: Unified Concept Editing in Diffusion Models
- Authors: Rohit Gandikota, Hadas Orgad, Yonatan Belinkov, Joanna Materzyńska, David Bau
- Abstract summary: We present a single method that tackles the issues of bias, copyright, and offensive content together.
Our method, Unified Concept Editing (UCE), edits the model without training using a closed-form solution.
We demonstrate scalable simultaneous debiasing, style erasure, and content moderation by editing text-to-image projections.
- Score: 58.69519244278023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text-to-image models suffer from various safety issues that may limit their
suitability for deployment. Previous methods have separately addressed
individual issues of bias, copyright, and offensive content in text-to-image
models. However, in the real world, all of these issues appear simultaneously
in the same model. We present a method that tackles all issues with a single
approach. Our method, Unified Concept Editing (UCE), edits the model without
training using a closed-form solution, and scales seamlessly to concurrent
edits on text-conditional diffusion models. We demonstrate scalable
simultaneous debiasing, style erasure, and content moderation by editing
text-to-image projections, and we present extensive experiments demonstrating
improved efficacy and scalability over prior work. Our code is available at
https://unified.baulab.info
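The closed-form edit of the text-to-image (cross-attention) projections can be illustrated as a regularized least-squares update. The sketch below is a reconstruction from the abstract alone, not the released code: the exact objective (map edited concept embeddings to new target outputs while keeping preserved concepts at their old outputs), the variable names, and the ridge term are assumptions.

```python
# Hedged sketch of a UCE-style closed-form projection edit (not the authors' code).
import torch

def uce_closed_form(W_old, edit_inputs, edit_targets, preserve_inputs):
    """W_old: (d_out, d_in) projection matrix; columns of the other
    arguments are embeddings: edit_inputs (d_in, n_e),
    edit_targets (d_out, n_e), preserve_inputs (d_in, n_p)."""
    C_e, V, C_p = edit_inputs, edit_targets, preserve_inputs
    # Desired outputs times inputs^T, for both edit and preserve terms.
    lhs = V @ C_e.T + (W_old @ C_p) @ C_p.T
    # Input covariance to invert (small ridge added for numerical stability).
    rhs = C_e @ C_e.T + C_p @ C_p.T
    rhs = rhs + 1e-4 * torch.eye(rhs.shape[0])
    return lhs @ torch.linalg.inv(rhs)
```

For erasure one might set the targets to the old projection of a neutral concept, and for debiasing to a reweighted mixture; both uses are assumptions here, not the paper's stated recipes.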
Related papers
- CODE: Confident Ordinary Differential Editing [62.83365660727034]
Confident Ordinary Differential Editing (CODE) is a novel approach for image synthesis that effectively handles Out-of-Distribution (OoD) guidance images.
CODE enhances images through score-based updates along the probability-flow Ordinary Differential Equation (ODE) trajectory.
Our method operates in a fully blind manner, relying solely on a pre-trained generative model.
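A score-based update along the probability-flow ODE, as this summary describes, can be sketched generically. The VP-SDE drift, the Euler integrator, and all names below are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of one Euler step along the probability-flow ODE,
# dx/dt = f(x, t) - 0.5 * g(t)^2 * score(x, t), under a VP-SDE.
import torch

def pf_ode_step(x, t, dt, score_model, beta):
    b = beta(t)                 # noise schedule beta(t) of the VP-SDE
    drift = -0.5 * b * x        # VP-SDE drift f(x, t)
    score = score_model(x, t)   # approximates grad_x log p_t(x)
    return x + (drift - 0.5 * b * score) * dt
```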
arXiv Detail & Related papers (2024-08-22T14:12:20Z)
- Editing Massive Concepts in Text-to-Image Diffusion Models [58.620118104364174]
We propose a two-stage method, Editing Massive Concepts In Diffusion Models (EMCID).
The first stage performs memory optimization for each individual concept with dual self-distillation from text alignment loss and diffusion noise prediction loss.
The second stage conducts massive concept editing with multi-layer, closed-form model editing.
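The stage-1 objective described above could look roughly like the sketch below (the stage-2 closed-form update resembles the UCE sketch earlier). It assumes a diffusers-style UNet2DConditionModel; the loss terms and their weighting are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a dual self-distillation loss for one concept.
import torch.nn.functional as F

def stage1_loss(student_emb, teacher_emb, unet, noisy_latent, t, target_noise):
    # Text-alignment term: keep the optimized embedding near an alignment target.
    l_align = F.mse_loss(student_emb, teacher_emb)
    # Noise-prediction term: the embedding must still drive correct denoising.
    pred = unet(noisy_latent, t, encoder_hidden_states=student_emb).sample
    l_noise = F.mse_loss(pred, target_noise)
    return l_align + l_noise
```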
arXiv Detail & Related papers (2024-03-20T17:59:57Z)
- Localizing and Editing Knowledge in Text-to-Image Generative Models [62.02776252311559]
Knowledge about different attributes is not localized in isolated components, but is instead distributed among a set of components in the conditional UNet.
We introduce Diff-QuickFix, a fast, data-free model editing method that can effectively edit concepts in text-to-image models.
arXiv Detail & Related papers (2023-10-20T17:31:12Z)
- InFusion: Inject and Attention Fusion for Multi Concept Zero-Shot Text-based Video Editing [27.661609140918916]
InFusion is a framework for zero-shot text-based video editing.
It supports editing of multiple concepts with pixel-level control over diverse concepts mentioned in the editing prompt.
Our framework is a low-cost alternative to one-shot tuned models for editing since it does not require training.
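One common way to realize this kind of zero-shot, pixel-level control is masked attention fusion between a source pass and an edit pass. The sketch below is a generic illustration of that pattern, not InFusion's actual architecture; all names are hypothetical.

```python
# Hedged sketch of masked attention fusion between source and edit passes.
import torch

def fused_attention(q_edit, k_src, v_src, k_edit, v_edit, mask):
    """Keep source K/V where mask == 0 (unedited regions) and use edit
    K/V where mask == 1. mask: (tokens, 1); K/V: (tokens, dim)."""
    k = mask * k_edit + (1 - mask) * k_src
    v = mask * v_edit + (1 - mask) * v_src
    attn = torch.softmax(q_edit @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
    return attn @ v
```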
arXiv Detail & Related papers (2023-07-22T17:05:47Z)
- DiffUTE: Universal Text Editing Diffusion Model [32.384236053455]
We propose DiffUTE, a universal self-supervised text editing diffusion model.
It aims to replace or modify words in a source image while maintaining a realistic appearance.
Our method achieves impressive performance and enables controllable, high-fidelity editing of in-the-wild images.
arXiv Detail & Related papers (2023-05-18T09:06:01Z)
- Uncovering the Disentanglement Capability in Text-to-Image Diffusion Models [60.63556257324894]
A key desired property of image generative models is the ability to disentangle different attributes.
We propose a simple, lightweight image editing algorithm in which the mixing weights of two text embeddings are optimized for style matching and content preservation.
Experiments show that the proposed method can modify a wide range of attributes, outperforming diffusion-model-based image-editing algorithms.
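The mixing-weight optimization described above can be sketched as learning per-token interpolation weights between two text embeddings. The sigmoid parameterization and the loss names are assumptions, not the paper's exact objectives.

```python
# Hedged sketch: optimize per-token mixing weights of two text embeddings.
import torch

def mixed_embedding(c_content, c_style, lam):
    """c_*: (tokens, dim) text embeddings; lam: (tokens, 1) logits."""
    w = torch.sigmoid(lam)                  # per-token weights in (0, 1)
    return (1 - w) * c_content + w * c_style

lam = torch.zeros(77, 1, requires_grad=True)    # one weight per token
opt = torch.optim.Adam([lam], lr=1e-2)
# Per step: render an image from mixed_embedding(c_content, c_style, lam),
# then minimize style_loss(image) + alpha * content_loss(image, source).
```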
arXiv Detail & Related papers (2022-12-16T19:58:52Z)
- SINE: SINgle Image Editing with Text-to-Image Diffusion Models [10.67527134198167]
This work aims to address the problem of single-image editing.
We propose a novel model-based guidance scheme built upon classifier-free guidance.
We show promising editing capabilities, including changing style, content addition, and object manipulation.
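Model-based guidance layered on classifier-free guidance could combine the noise predictions of a single-image fine-tuned model and the frozen pretrained model roughly as below. The blend weight v and this exact combination are illustrative assumptions, not SINE's published formula.

```python
# Hedged sketch: blend fine-tuned and pretrained predictions, then apply
# standard classifier-free guidance against the unconditional prediction.
def guided_eps(eps_pre_uncond, eps_pre_cond, eps_ft_cond, w=7.5, v=0.7):
    cond = v * eps_ft_cond + (1 - v) * eps_pre_cond
    return eps_pre_uncond + w * (cond - eps_pre_uncond)
```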
arXiv Detail & Related papers (2022-12-08T18:57:13Z)
- Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors [53.819805242367345]
We propose GRACE, a lifelong model editing method, which implements spot-fixes on streaming errors of a deployed model.
GRACE writes new mappings into a pre-trained model's latent space, creating a discrete, local codebook of edits without altering model weights.
Our experiments on T5, BERT, and GPT models show GRACE's state-of-the-art performance in making and retaining edits, while generalizing to unseen inputs.
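A GRACE-style discrete key-value adaptor can be sketched as a codebook wrapped around one layer: cached keys define local regions where a stored value overrides the layer output, leaving model weights untouched. The names and the single-radius policy below are simplifications, not the paper's code.

```python
# Hedged sketch of a discrete key-value codebook around one layer.
import torch

class GraceAdaptor:
    def __init__(self, eps=1.0):
        self.keys, self.values, self.eps = [], [], eps

    def add_edit(self, key, value):
        # Cache a spot-fix: for hidden states near `key`, emit `value`.
        self.keys.append(key)
        self.values.append(value)

    def __call__(self, h, layer_out):
        # Inside a cached key's epsilon-ball, return the stored value;
        # otherwise leave the wrapped layer's output untouched.
        for k, v in zip(self.keys, self.values):
            if torch.dist(h, k) < self.eps:
                return v
        return layer_out
```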
arXiv Detail & Related papers (2022-11-20T17:18:22Z)