DiffEditor: Boosting Accuracy and Flexibility on Diffusion-based Image
Editing
- URL: http://arxiv.org/abs/2402.02583v1
- Date: Sun, 4 Feb 2024 18:50:29 GMT
- Title: DiffEditor: Boosting Accuracy and Flexibility on Diffusion-based Image
Editing
- Authors: Chong Mou, Xintao Wang, Jiechong Song, Ying Shan, Jian Zhang
- Abstract summary: Large-scale Text-to-Image (T2I) diffusion models have revolutionized image generation over the last few years.
We propose DiffEditor to rectify two weaknesses in existing diffusion-based image editing.
Our method can efficiently achieve state-of-the-art performance on various fine-grained image editing tasks.
- Score: 66.43179841884098
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large-scale Text-to-Image (T2I) diffusion models have revolutionized image
generation over the last few years. Although these models possess diverse and
high-quality generation capabilities, translating those abilities to fine-grained
image editing remains challenging. In this paper, we propose DiffEditor to rectify
two weaknesses in existing diffusion-based image editing: (1) in complex
scenarios, editing results often lack editing accuracy and exhibit unexpected
artifacts; (2) existing methods lack the flexibility to harmonize editing
operations, e.g., imagining new content. In our solution, we introduce image prompts in
fine-grained image editing, cooperating with the text prompt to better describe
the editing content. To increase flexibility while maintaining content
consistency, we locally blend stochastic differential equation (SDE) sampling
into the ordinary differential equation (ODE) sampling. In addition, we incorporate
regional score-based gradient guidance and a time travel strategy into the
diffusion sampling, further improving the editing quality. Extensive
experiments demonstrate that our method can efficiently achieve
state-of-the-art performance on various fine-grained image editing tasks,
including editing within a single image (e.g., object moving, resizing, and
content dragging) and across images (e.g., appearance replacing and object
pasting). Our source code is released at
https://github.com/MC-E/DragonDiffusion.
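A minimal, hypothetical sketch of the locally mixed SDE/ODE sampling described in the abstract: stochastic (SDE-style) updates are injected only inside an editing mask, while the rest of the image follows a deterministic (ODE-style) trajectory, keeping unedited content consistent. The toy denoiser, step sizes, and shapes below are illustrative stand-ins, not DiffEditor's implementation.

```python
# Toy illustration of locally mixing SDE-style (stochastic) and ODE-style
# (deterministic) reverse-diffusion updates inside an editing mask.
# NOT the DiffEditor code: the denoiser and schedules are made-up stand-ins.
import numpy as np

def toy_denoiser(x_t, t):
    # Placeholder epsilon prediction; a real method would use a trained network.
    return x_t / np.sqrt(1.0 + t)

def blended_step(x_t, t, mask, alpha=0.9, noise_scale=0.1, rng=None):
    """One reverse step: deterministic everywhere, extra noise only where mask == 1."""
    if rng is None:
        rng = np.random.default_rng(0)
    eps = toy_denoiser(x_t, t)
    # ODE-like update: move deterministically toward the denoised estimate.
    x_ode = alpha * x_t + (1.0 - alpha) * (x_t - eps)
    # SDE-like update: same mean, plus fresh Gaussian noise for flexibility.
    x_sde = x_ode + noise_scale * rng.standard_normal(x_t.shape)
    # Blend per pixel: stochastic inside the editable region, deterministic outside.
    return np.where(mask == 1, x_sde, x_ode)

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 8))   # toy "latent image"
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1                # region being edited
for t in range(10, 0, -1):
    x = blended_step(x, t, mask, rng=rng)
```

In this sketch the stochastic region plays the role of the local editing area, which is where the extra flexibility to imagine new content is needed, while the deterministic updates elsewhere keep the remaining content unchanged.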
Related papers
- Achieving Complex Image Edits via Function Aggregation with Diffusion Models [15.509233098264513]
Diffusion models have demonstrated strong performance in generative tasks, making them ideal candidates for image editing.
We introduce FunEditor, an efficient diffusion model designed to learn atomic editing functions and perform complex edits by aggregating simpler functions.
FunEditor achieves 5 to 24 times faster inference than existing methods on complex tasks such as object movement.
arXiv Detail & Related papers (2024-08-16T02:33:55Z)
- TurboEdit: Text-Based Image Editing Using Few-Step Diffusion Models [53.757752110493215]
We focus on a popular line of text-based editing frameworks - the "edit-friendly" DDPM-noise inversion approach.
We analyze its application to fast sampling methods and categorize its failures into two classes: the appearance of visual artifacts, and insufficient editing strength.
We propose a pseudo-guidance approach that efficiently increases the magnitude of edits without introducing new artifacts.
arXiv Detail & Related papers (2024-08-01T17:27:28Z)
- I2VEdit: First-Frame-Guided Video Editing via Image-to-Video Diffusion Models [18.36472998650704]
We introduce a novel and generic solution that extends the applicability of image editing tools to videos by propagating edits from a single frame to the entire video using a pre-trained image-to-video model.
Our method, dubbed I2VEdit, adaptively preserves the visual and motion integrity of the source video depending on the extent of the edits.
arXiv Detail & Related papers (2024-05-26T11:47:40Z)
- Optimisation-Based Multi-Modal Semantic Image Editing [58.496064583110694]
We propose an inference-time editing optimisation to accommodate multiple editing instruction types.
By allowing the influence of each loss function to be adjusted, we build a flexible editing solution that can be tailored to user preferences.
We evaluate our method using text, pose and scribble edit conditions, and highlight our ability to achieve complex edits.
arXiv Detail & Related papers (2023-11-28T15:31:11Z)
- InFusion: Inject and Attention Fusion for Multi Concept Zero-Shot Text-based Video Editing [27.661609140918916]
InFusion is a framework for zero-shot text-based video editing.
It supports editing of multiple concepts with pixel-level control over diverse concepts mentioned in the editing prompt.
Our framework is a low-cost alternative to one-shot tuned models for editing since it does not require training.
arXiv Detail & Related papers (2023-07-22T17:05:47Z)
- LayerDiffusion: Layered Controlled Image Editing with Diffusion Models [5.58892860792971]
LayerDiffusion is a semantic-based layered controlled image editing method.
We leverage a large-scale text-to-image model and employ a layered controlled optimization strategy.
Experimental results demonstrate the effectiveness of our method in generating highly coherent images.
arXiv Detail & Related papers (2023-05-30T01:26:41Z)
- Zero-shot Image-to-Image Translation [57.46189236379433]
We propose pix2pix-zero, an image-to-image translation method that can preserve the original image without manual prompting.
We propose cross-attention guidance, which aims to retain the cross-attention maps of the input image throughout the diffusion process.
Our method needs no additional training for these edits and can directly use an existing text-to-image diffusion model.
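The cross-attention guidance idea can be illustrated with a rough, hypothetical sketch: record the attention maps produced while reconstructing the source image, then take gradient steps that keep the edited sample's maps close to them. The linear attention_map below is a stand-in for a real UNet's cross-attention (chosen so the guidance gradient has a closed form); it is not the pix2pix-zero implementation.

```python
# Toy sketch of cross-attention guidance: keep the current sample's attention
# maps close to reference maps recorded from the source image.
# NOT the pix2pix-zero code: the linear "attention" is a made-up stand-in.
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((16, 64))   # toy projection playing the role of attention

def attention_map(latent):
    return W @ latent                      # stand-in for cross-attention maps

def guidance_step(latent, reference_maps, step_size=0.1):
    """Gradient step on ||A(latent) - A_ref||^2, pulling attention toward the source."""
    residual = attention_map(latent) - reference_maps
    grad = 2.0 * W.T @ residual            # closed-form gradient of the L2 loss
    return latent - step_size * grad

source_latent = rng.standard_normal(64)
reference_maps = attention_map(source_latent)                   # recorded from the source
edited_latent = source_latent + 0.5 * rng.standard_normal(64)   # an edit that drifts
for _ in range(20):
    edited_latent = guidance_step(edited_latent, reference_maps)
```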
arXiv Detail & Related papers (2023-02-06T18:59:51Z)
- DiffEdit: Diffusion-based semantic image editing with mask guidance [64.555930158319]
DiffEdit is a method to take advantage of text-conditioned diffusion models for the task of semantic image editing.
Our main contribution is the ability to automatically generate a mask highlighting the regions of the input image that need to be edited.
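As a rough, hypothetical illustration of such automatic mask estimation (in the spirit of DiffEdit, which contrasts the denoiser's noise predictions under a query prompt and a reference prompt): noise the input several times, compare the two conditional predictions, and threshold the averaged difference. The toy text-conditioned denoiser below is a stand-in, not the paper's model.

```python
# Toy sketch of DiffEdit-style mask estimation: contrast noise predictions under
# two prompts and threshold the averaged difference.
# NOT the DiffEdit code: the text-conditioned denoiser is a made-up stand-in.
import numpy as np

def toy_text_conditioned_denoiser(x_t, prompt):
    eps = 0.5 * x_t
    if prompt == "query":
        eps[8:24, 8:24] += 1.0             # region the query prompt "wants" to change
    return eps

def estimate_edit_mask(x0, n_samples=8, noise_level=0.5, threshold=0.5, seed=0):
    rng = np.random.default_rng(seed)
    diffs = []
    for _ in range(n_samples):
        x_t = x0 + noise_level * rng.standard_normal(x0.shape)
        d = np.abs(toy_text_conditioned_denoiser(x_t, "query")
                   - toy_text_conditioned_denoiser(x_t, "reference"))
        diffs.append(d)
    diff = np.mean(diffs, axis=0)
    diff = diff / (diff.max() + 1e-8)      # normalize to [0, 1]
    return (diff > threshold).astype(np.float32)

mask = estimate_edit_mask(np.zeros((32, 32)))  # 1s mark the region to edit
```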
arXiv Detail & Related papers (2022-10-20T17:16:37Z)
- EditGAN: High-Precision Semantic Image Editing [120.49401527771067]
EditGAN is a novel method for high quality, high precision semantic image editing.
We show that EditGAN can manipulate images with an unprecedented level of detail and freedom.
We can also easily combine multiple edits and perform plausible edits beyond EditGAN training data.
arXiv Detail & Related papers (2021-11-04T22:36:33Z)