DreamSteerer: Enhancing Source Image Conditioned Editability using Personalized Diffusion Models
- URL: http://arxiv.org/abs/2410.11208v2
- Date: Wed, 30 Oct 2024 01:16:45 GMT
- Title: DreamSteerer: Enhancing Source Image Conditioned Editability using Personalized Diffusion Models
- Authors: Zhengyang Yu, Zhaoyuan Yang, Jing Zhang,
- Abstract summary: Recent text-to-image personalization methods have shown great promise in teaching a diffusion model user-specified concepts.
A promising extension is personalized editing, namely to edit an image using personalized concepts.
We propose DreamSteerer, a plug-in method for augmenting existing T2I personalization methods.
- Score: 7.418186319496487
- License:
- Abstract: Recent text-to-image personalization methods have shown great promise in teaching a diffusion model user-specified concepts from a few images, so that the acquired concepts can be reused in a novel context. With massive effort being dedicated to personalized generation, a promising extension is personalized editing, namely editing an image using personalized concepts, which can provide a more precise guidance signal than traditional textual guidance. A straightforward solution is to combine a personalized diffusion model with a text-driven editing framework. However, such a solution often shows unsatisfactory editability on the source image. To address this, we propose DreamSteerer, a plug-in method for augmenting existing T2I personalization methods. Specifically, we enhance the source image conditioned editability of a personalized diffusion model via a novel Editability Driven Score Distillation (EDSD) objective. Moreover, we identify a mode trapping issue with EDSD and propose a mode shifting regularization with spatial feature guided sampling to avoid it. We further introduce two key modifications to the Delta Denoising Score framework that enable high-fidelity local editing with personalized concepts. Extensive experiments validate that DreamSteerer can significantly improve the editability of several T2I personalization baselines while remaining computationally efficient.
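For context, the Delta Denoising Score (DDS) framework that DreamSteerer modifies drives an edit by contrasting the denoiser's noise prediction on the edited image (conditioned on the target prompt) against its prediction on the source image (conditioned on the source prompt), so that the bias shared by the two predictions cancels out. The snippet below is a minimal sketch of that baseline update; the names (unet, scheduler, the prompt embeddings) follow the diffusers convention but are illustrative assumptions, and the code does not reproduce DreamSteerer's EDSD objective or its DDS modifications.

```python
# Minimal sketch of a Delta Denoising Score (DDS)-style editing update.
# All objects (unet, scheduler, prompt embeddings) are illustrative assumptions
# in the style of the diffusers library; this is the DDS baseline idea only,
# not DreamSteerer's EDSD objective or its modified DDS framework.
import torch


@torch.no_grad()
def dds_step(unet, scheduler, z_edit, z_src, src_emb, tgt_emb, step_size=0.1):
    """One update of the edited latent z_edit toward the target prompt."""
    # Share the timestep and noise between the two branches so that their
    # noise predictions are directly comparable.
    t = torch.randint(50, 950, (1,), device=z_edit.device)
    noise = torch.randn_like(z_edit)
    a = scheduler.alphas_cumprod[t].view(-1, 1, 1, 1)

    noisy_edit = a.sqrt() * z_edit + (1 - a).sqrt() * noise
    noisy_src = a.sqrt() * z_src + (1 - a).sqrt() * noise

    eps_edit = unet(noisy_edit, t, encoder_hidden_states=tgt_emb).sample
    eps_src = unet(noisy_src, t, encoder_hidden_states=src_emb).sample

    # The difference of the two predictions cancels their shared bias and
    # leaves a direction that moves z_edit toward the target concept while
    # keeping unrelated regions of the source image intact.
    return z_edit - step_size * (eps_edit - eps_src)
```

In DreamSteerer, updates of this kind would be driven by a personalized denoiser, with EDSD serving as the objective that enhances that model's source-conditioned editability and with two further modifications to the DDS framework; the sketch only conveys the underlying score-difference intuition.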
Related papers
- PartEdit: Fine-Grained Image Editing using Pre-Trained Diffusion Models [80.98455219375862]
We present the first text-based image editing approach for object parts based on pre-trained diffusion models.
Our approach is preferred by users 77-90% of the time in user studies.
arXiv Detail & Related papers (2025-02-06T13:08:43Z) - PIXELS: Progressive Image Xemplar-based Editing with Latent Surgery [10.594261300488546]
We introduce a novel framework for progressive exemplar-driven editing with off-the-shelf diffusion models, dubbed PIXELS.
PIXELS provides granular control over edits, allowing adjustments at the pixel or region level.
We demonstrate that PIXELS delivers high-quality edits efficiently, leading to a notable improvement in quantitative metrics as well as human evaluation.
arXiv Detail & Related papers (2025-01-16T20:26:30Z) - Dense-Face: Personalized Face Generation Model via Dense Annotation Prediction [12.938413724185388]
We propose a new T2I personalization diffusion model, Dense-Face, which can generate face images whose identity is consistent with that of a given reference subject.
Our method achieves state-of-the-art or competitive generation performance in image-text alignment, identity preservation, and pose control.
arXiv Detail & Related papers (2024-12-24T04:05:21Z) - JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized Text-to-Image Generation [49.997839600988875]
Existing personalization methods rely on finetuning a text-to-image foundation model on a user's custom dataset.
We propose Joint-Image Diffusion (JeDi), an effective technique for learning a finetuning-free personalization model.
Our model achieves state-of-the-art generation quality, both quantitatively and qualitatively, significantly outperforming both the prior finetuning-based and finetuning-free personalization baselines.
arXiv Detail & Related papers (2024-07-08T17:59:02Z) - A Survey of Multimodal-Guided Image Editing with Text-to-Image Diffusion Models [117.77807994397784]
Image editing aims to modify a given synthetic or real image to meet users' specific requirements.
Recent significant advances in this field are based on the development of text-to-image (T2I) diffusion models.
T2I-based image editing methods significantly enhance editing performance and offer a user-friendly interface for modifying content guided by multimodal inputs.
arXiv Detail & Related papers (2024-06-20T17:58:52Z) - Preserving Identity with Variational Score for General-purpose 3D Editing [48.314327790451856]
Piva is a novel optimization-based method, built on diffusion models, for editing images and 3D models.
We pinpoint the limitations of 2D and 3D editing that cause detail loss and oversaturation.
We propose an additional score distillation term that enforces identity preservation.
arXiv Detail & Related papers (2024-06-13T09:32:40Z) - Editing Massive Concepts in Text-to-Image Diffusion Models [58.620118104364174]
We propose a two-stage method, Editing Massive Concepts In Diffusion Models (EMCID).
The first stage performs memory optimization for each individual concept with dual self-distillation from a text alignment loss and a diffusion noise prediction loss.
The second stage conducts massive concept editing with multi-layer, closed-form model editing.
arXiv Detail & Related papers (2024-03-20T17:59:57Z) - Custom-Edit: Text-Guided Image Editing with Customized Diffusion Models [26.92450293675906]
Text-to-image diffusion models can generate diverse, high-fidelity images based on user-provided text prompts.
We propose Custom-Edit, in which we (i) customize a diffusion model with a few reference images and then (ii) perform text-guided editing.
arXiv Detail & Related papers (2023-05-25T06:46:28Z) - ReGeneration Learning of Diffusion Models with Rich Prompts for Zero-Shot Image Translation [8.803251014279502]
Large-scale text-to-image models have demonstrated an amazing ability to synthesize diverse and high-fidelity images.
Current models can impose significant changes on the original image content during the editing process.
We propose ReGeneration learning in an image-to-image diffusion model (ReDiffuser).
arXiv Detail & Related papers (2023-05-08T12:08:12Z) - Zero-shot Image-to-Image Translation [57.46189236379433]
We propose pix2pix-zero, an image-to-image translation method that can preserve the content of the original image without manual prompting.
We propose cross-attention guidance, which aims to retain the cross-attention maps of the input image throughout the diffusion process (a minimal sketch of this idea appears after this list).
Our method does not need additional training for these edits and can directly use an existing text-to-image diffusion model.
arXiv Detail & Related papers (2023-02-06T18:59:51Z)
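As noted in the pix2pix-zero entry above, cross-attention guidance steers each denoising step so that the cross-attention maps produced for the edited trajectory stay close to the maps recorded while reconstructing the input image. The sketch below illustrates that idea only in broad strokes; collect_maps and the other names are hypothetical stand-ins, not the paper's implementation.

```python
# Rough sketch of cross-attention guidance for editing (pix2pix-zero-style idea).
# collect_maps is a hypothetical hook returning the cross-attention maps of the
# current forward pass, still attached to the autograd graph; ref_maps are the
# maps recorded from the input image. Names and scales are assumptions.
import torch
import torch.nn.functional as F


def cross_attention_guided_update(unet, latents, t, edit_emb, ref_maps,
                                  collect_maps, step_size=0.05):
    """Nudge the current latent so its attention maps match the source image's."""
    latents = latents.detach().requires_grad_(True)

    # Forward pass with the editing prompt; only the attention maps are needed.
    _ = unet(latents, t, encoder_hidden_states=edit_emb).sample
    cur_maps = collect_maps(unet)

    # Consistency loss: deviation from the attention maps of the input image.
    loss = sum(F.mse_loss(c, r) for c, r in zip(cur_maps, ref_maps))
    (grad,) = torch.autograd.grad(loss, latents)

    # Step against the gradient before the scheduler update, so the spatial
    # layout of the original image is retained while the prompt changes it.
    return (latents - step_size * grad).detach()
```

In the actual method this guidance is combined with an edit direction computed in the text-embedding space; the sketch covers only the attention-consistency component.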
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.