Blended Diffusion for Text-driven Editing of Natural Images
- URL: http://arxiv.org/abs/2111.14818v1
- Date: Mon, 29 Nov 2021 18:58:49 GMT
- Title: Blended Diffusion for Text-driven Editing of Natural Images
- Authors: Omri Avrahami, Dani Lischinski, Ohad Fried
- Abstract summary: We introduce the first solution for performing local (region-based) edits in generic natural images.
We achieve our goal by leveraging and combining a pretrained language-image model (CLIP) with a denoising diffusion probabilistic model (DDPM).
To seamlessly fuse the edited region with the unchanged parts of the image, we spatially blend noised versions of the input image with the local text-guided diffusion latent.
- Score: 18.664733153082146
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Natural language offers a highly intuitive interface for image editing. In
this paper, we introduce the first solution for performing local (region-based)
edits in generic natural images, based on a natural language description along
with an ROI mask. We achieve our goal by leveraging and combining a pretrained
language-image model (CLIP), to steer the edit towards a user-provided text
prompt, with a denoising diffusion probabilistic model (DDPM) to generate
natural-looking results. To seamlessly fuse the edited region with the
unchanged parts of the image, we spatially blend noised versions of the input
image with the local text-guided diffusion latent at a progression of noise
levels. In addition, we show that adding augmentations to the diffusion process
mitigates adversarial results. We compare against several baselines and related
methods, both qualitatively and quantitatively, and show that our method
outperforms these solutions in terms of overall realism, ability to preserve
the background, and ability to match the text. Finally, we show several text-driven
editing applications, including adding a new object to an image,
removing/replacing/altering existing objects, background replacement, and image
extrapolation.
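The core mechanism lends itself to a short sketch. The following is a minimal illustration of a CLIP-guided, mask-blended DDPM sampling loop under stated assumptions, not the authors' released code: `diffusion` is an assumed wrapper exposing `q_sample()` and `p_mean_variance()`, and `random_augment` is a hypothetical helper for the augmentation step.

```python
import torch

def blended_diffusion_edit(x0, mask, text_emb, diffusion, clip_model, T=1000):
    """Sketch of one CLIP-guided, mask-blended DDPM sampling loop.

    x0:        input image, shape (1, 3, H, W), values in [-1, 1]
    mask:      ROI mask, shape (1, 1, H, W), 1 inside the region to edit
    text_emb:  CLIP embedding of the target text prompt
    diffusion: assumed wrapper exposing q_sample() and p_mean_variance()
    """
    x_t = torch.randn_like(x0)  # the edited region starts from pure noise
    for t in reversed(range(T)):
        x_t = x_t.detach().requires_grad_(True)
        out = diffusion.p_mean_variance(x_t, t)  # assumed API: step mean/var + x0 estimate
        pred_x0 = out["pred_xstart"]

        # CLIP guidance on the *predicted clean image*. Averaging the loss
        # over several augmented crops (the augmentations the abstract
        # mentions) discourages adversarial images that score well under
        # CLIP without looking right.
        loss = x_t.new_zeros(())
        for _ in range(8):
            crop = random_augment(pred_x0)  # assumed helper: random crop/flip
            img_emb = clip_model.encode_image(crop)
            loss = loss + (1.0 - torch.cosine_similarity(img_emb, text_emb).mean())
        grad = torch.autograd.grad(loss, x_t)[0]
        mean = out["mean"] - out["variance"] * grad  # shift the reverse-step mean

        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        latent = mean + out["variance"].sqrt() * noise

        # Spatial blending: outside the mask, overwrite the latent with the
        # input image noised to the *same* level, so every step keeps the
        # background consistent with the original photo.
        x_bg = diffusion.q_sample(x0, t)  # forward-noised input at level t
        x_t = mask * latent + (1.0 - mask) * x_bg
    return x_t.detach()
```

Because the background branch is re-noised from x0 at every level, the final blend at t = 0 leaves the region outside the mask (nearly) identical to the input.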
Related papers
- EditScout: Locating Forged Regions from Diffusion-based Edited Images with Multimodal LLM [50.054404519821745]
We present a novel framework that integrates a multimodal Large Language Model for enhanced reasoning about forged regions in diffusion-edited images.
Our framework achieves promising results on MagicBrush, AutoSplice, and PerfBrush datasets.
Notably, our method excels on the PerfBrush dataset, a self-constructed test set featuring previously unseen types of edits.
arXiv Detail & Related papers (2024-12-05T02:05:33Z)
- TurboEdit: Text-Based Image Editing Using Few-Step Diffusion Models [53.757752110493215]
We focus on a popular line of text-based editing frameworks - the "edit-friendly" DDPM-noise inversion approach.
We analyze its application to fast sampling methods and categorize its failures into two classes: the appearance of visual artifacts, and insufficient editing strength.
We propose a pseudo-guidance approach that efficiently increases the magnitude of edits without introducing new artifacts.
arXiv Detail & Related papers (2024-08-01T17:27:28Z)
- DiffUHaul: A Training-Free Method for Object Dragging in Images [78.93531472479202]
We propose a training-free method, dubbed DiffUHaul, for the object dragging task.
We first apply attention masking in each denoising step to make the generation more disentangled across different objects.
In the early denoising steps, we interpolate the attention features between source and target images to smoothly fuse new layouts with the original appearance.
arXiv Detail & Related papers (2024-06-03T17:59:53Z)
- Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing [23.00202969969574]
We propose Dynamic Prompt Learning (DPL) to force cross-attention maps to focus on correct noun words in the text prompt.
We show improved prompt editing results for Word-Swap, Prompt Refinement, and Attention Re-weighting, especially for complex multi-object scenes.
arXiv Detail & Related papers (2023-09-27T13:55:57Z)
- Textual and Visual Prompt Fusion for Image Editing via Step-Wise Alignment [10.82748329166797]
We propose a framework that integrates a fusion of generated visual references and text guidance into the semantic latent space of a frozen pre-trained diffusion model.
Using only a tiny neural network, our framework provides control over diverse content and attributes, driven intuitively by the text prompt.
arXiv Detail & Related papers (2023-08-30T08:40:15Z)
- iEdit: Localised Text-guided Image Editing with Weak Supervision [53.082196061014734]
We propose a novel learning method for text-guided image editing.
It generates images conditioned on a source image and a textual edit prompt.
It shows favourable results against its counterparts in terms of image fidelity, CLIP alignment score and qualitatively for editing both generated and real images.
arXiv Detail & Related papers (2023-05-10T07:39:14Z)
- SpaText: Spatio-Textual Representation for Controllable Image Generation [61.89548017729586]
SpaText is a new method for text-to-image generation using open-vocabulary scene control.
In addition to a global text prompt that describes the entire scene, the user provides a segmentation map.
We show its effectiveness on two state-of-the-art diffusion models: pixel-based and latent-based.
arXiv Detail & Related papers (2022-11-25T18:59:10Z)
- DiffEdit: Diffusion-based semantic image editing with mask guidance [64.555930158319]
DiffEdit is a method to take advantage of text-conditioned diffusion models for the task of semantic image editing.
Our main contribution is the ability to automatically generate a mask highlighting the regions of the input image that need to be edited (a sketch of this mask-estimation idea appears below).
arXiv Detail & Related papers (2022-10-20T17:16:37Z)
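As a companion to the last entry, DiffEdit's automatic mask can be sketched compactly: noise the input several times, predict the noise under the source and target prompts, and threshold where the two predictions consistently disagree. The `unet` and `scheduler` interfaces below are illustrative assumptions, not the paper's released code.

```python
import torch

def diffedit_mask(x0, src_emb, tgt_emb, unet, scheduler, n=10, thresh=0.5):
    """Sketch of DiffEdit-style mask estimation.

    Regions where the noise predictions under the source and target text
    conditionings differ are the regions the edit must change.
    """
    diffs = []
    for _ in range(n):
        t = torch.randint(0, scheduler.num_timesteps, (1,))
        noise = torch.randn_like(x0)
        x_t = scheduler.add_noise(x0, noise, t)  # assumed API: forward-noise x0
        eps_src = unet(x_t, t, src_emb)          # noise prediction, source text
        eps_tgt = unet(x_t, t, tgt_emb)          # noise prediction, target text
        diffs.append((eps_src - eps_tgt).abs().mean(dim=1, keepdim=True))
    d = torch.stack(diffs).mean(dim=0)           # average over noise samples
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # normalize to [0, 1]
    return (d > thresh).float()                  # binary edit mask
```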