ControlCom: Controllable Image Composition using Diffusion Model
- URL: http://arxiv.org/abs/2308.10040v1
- Date: Sat, 19 Aug 2023 14:56:44 GMT
- Title: ControlCom: Controllable Image Composition using Diffusion Model
- Authors: Bo Zhang, Yuxuan Duan, Jun Lan, Yan Hong, Huijia Zhu, Weiqiang Wang,
Li Niu
- Abstract summary: We propose a controllable image composition method that unifies four tasks in one diffusion model.
We also propose a local enhancement module that enriches the foreground details in the diffusion model.
The proposed method is evaluated on both a public benchmark and real-world data.
- Score: 45.48263800282992
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image composition aims to synthesize a realistic composite image from a
pair of foreground and background images. Recently, generative composition
methods have been built on large pretrained diffusion models to generate composite
images, given their great potential in image generation. However, they
suffer from a lack of controllability over foreground attributes and poor
preservation of foreground identity. To address these challenges, we propose a
controllable image composition method that unifies four tasks in one diffusion
model: image blending, image harmonization, view synthesis, and generative
composition. Meanwhile, we design a self-supervised training framework coupled
with a tailored pipeline for training data preparation. Moreover, we propose a
local enhancement module that enriches foreground details in the diffusion
model, improving the foreground fidelity of composite images. The proposed
method is evaluated on both a public benchmark and real-world data, and the
results demonstrate that our method generates more faithful and controllable
composite images than existing approaches. The code and model will be available
at https://github.com/bcmi/ControlCom-Image-Composition.
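
As a rough illustration of how the unified, task-controllable interface described
in the abstract might be driven, here is a minimal sketch. It assumes the four
tasks are selected through a two-bit indicator over which foreground attributes
(illumination and pose/view) the model may alter; the function name, indicator
convention, and array shapes are illustrative assumptions, not the released API.

```python
# Minimal sketch of a unified composition interface; names and the indicator
# convention are assumptions consistent with the abstract, not the official code.
import numpy as np

# Hypothetical mapping: each task toggles which foreground attributes the
# diffusion model is allowed to change (adjust_illumination, adjust_pose).
TASK_INDICATOR = {
    "blending":       (0, 0),  # keep illumination and pose; only blend boundaries
    "harmonization":  (1, 0),  # adjust illumination to match the background
    "view_synthesis": (0, 1),  # adjust pose/view, keep illumination
    "generative":     (1, 1),  # adjust both, as in generative composition
}

def prepare_condition(background, foreground, bbox, task):
    """Pack the conditioning inputs for one denoising call.

    background: HxWx3 float array in [0, 1]
    foreground: hxwx3 float array in [0, 1]
    bbox:       (x1, y1, x2, y2) placement box in background coordinates
    task:       one of TASK_INDICATOR's keys
    """
    indicator = np.asarray(TASK_INDICATOR[task], dtype=np.float32)
    x1, y1, x2, y2 = bbox
    # Mask marking where the foreground should appear in the composite.
    mask = np.zeros(background.shape[:2], dtype=np.float32)
    mask[y1:y2, x1:x2] = 1.0
    return {
        "background": background,
        "foreground": foreground,
        "mask": mask,
        "indicator": indicator,  # steers which attributes the model may alter
    }

if __name__ == "__main__":
    bg = np.random.rand(512, 512, 3).astype(np.float32)
    fg = np.random.rand(256, 256, 3).astype(np.float32)
    cond = prepare_condition(bg, fg, (128, 128, 384, 384), "harmonization")
    print(cond["indicator"], cond["mask"].sum())
```

Under this reading, a single conditioned model covers all four tasks, with the
indicator deciding how much the foreground is allowed to change.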
Related papers
- IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation [70.8833857249951]
IterComp is a novel framework that aggregates composition-aware model preferences from multiple models.
We propose an iterative feedback learning method to enhance compositionality in a closed-loop manner.
IterComp opens new research avenues in reward feedback learning for diffusion models and compositional generation.
arXiv Detail & Related papers (2024-10-09T17:59:13Z)
- FreeCompose: Generic Zero-Shot Image Composition with Diffusion Prior [50.0535198082903]
We offer a novel approach to image composition, which integrates multiple input images into a single, coherent image.
We showcase the potential of utilizing the powerful generative prior inherent in large-scale pre-trained diffusion models to accomplish generic image composition.
arXiv Detail & Related papers (2024-07-06T03:35:43Z)
- DiffPop: Plausibility-Guided Object Placement Diffusion for Image Composition [13.341996441742374]
DiffPop is a framework that learns the scale and spatial relations among multiple objects and between the objects and the corresponding scene image.
We develop a human-in-the-loop pipeline which exploits human labeling on the diffusion-generated composite images.
Our dataset and code will be released.
arXiv Detail & Related papers (2024-06-12T03:40:17Z)
- DiffHarmony: Latent Diffusion Model Meets Image Harmonization [11.500358677234939]
Diffusion models have promoted the rapid development of image-to-image translation tasks.
Training diffusion models from scratch is computationally intensive, which motivates adapting pre-trained latent diffusion models instead.
In this paper, we adapt a pre-trained latent diffusion model to the image harmonization task to generate harmonious but potentially blurry initial images.
arXiv Detail & Related papers (2024-04-09T09:05:23Z)
- TF-ICON: Diffusion-Based Training-Free Cross-Domain Image Composition [13.087647740473205]
TF-ICON is a framework that harnesses the power of text-driven diffusion models for cross-domain image-guided composition.
TF-ICON can leverage off-the-shelf diffusion models to perform cross-domain image-guided composition without requiring additional training, finetuning, or optimization.
Our experiments show that equipping Stable Diffusion with the exceptional prompt outperforms state-of-the-art inversion methods on various datasets.
arXiv Detail & Related papers (2023-07-24T02:50:44Z)
- Cross-domain Compositing with Pretrained Diffusion Models [34.98199766006208]
We employ a localized, iterative refinement scheme which infuses the injected objects with contextual information derived from the background scene.
Our method produces higher quality and realistic results without requiring any annotations or training.
arXiv Detail & Related papers (2023-02-20T18:54:04Z)
- Person Image Synthesis via Denoising Diffusion Model [116.34633988927429]
We show how denoising diffusion models can be applied to high-fidelity person image synthesis.
Our results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios.
arXiv Detail & Related papers (2022-11-22T18:59:50Z) - Compositional Visual Generation with Composable Diffusion Models [80.75258849913574]
We propose an alternative structured approach for compositional generation using diffusion models.
An image is generated by composing a set of diffusion models, with each of them modeling a certain component of the image (a short sketch of this composition idea follows after the list).
The proposed method can generate scenes at test time that are substantially more complex than those seen in training.
arXiv Detail & Related papers (2022-06-03T17:47:04Z) - DiVAE: Photorealistic Images Synthesis with Denoising Diffusion Decoder [73.1010640692609]
We propose a VQ-VAE architecture model with a diffusion decoder (DiVAE) to work as the reconstructing component in image synthesis.
Our model achieves state-of-the-art results and generates more photorealistic images.
arXiv Detail & Related papers (2022-06-01T10:39:12Z)
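
The composable-diffusion entry above describes generating an image by composing
several diffusion models, each responsible for one component of the scene. Below
is a minimal sketch of that idea using a classifier-free-guidance style
combination of noise estimates; the noise_pred stub and the guidance weight are
placeholder assumptions standing in for real trained denoisers, not the paper's
released code.

```python
# Sketch: compose several conditional score estimates around one unconditional
# estimate, i.e. eps = eps_uncond + sum_i w_i * (eps(cond_i) - eps_uncond).
import numpy as np

def noise_pred(x, t, cond=None):
    """Stand-in for a trained epsilon-predictor; a real denoiser would go here."""
    seed = abs(hash((t, cond))) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(x.shape).astype(np.float32) * 0.1

def composed_noise(x, t, conditions, guidance=3.0):
    """Combine one unconditional and several conditional noise estimates."""
    eps_uncond = noise_pred(x, t, cond=None)
    eps = eps_uncond.copy()
    for c in conditions:
        # Each condition adds its guidance offset relative to the unconditional estimate.
        eps += guidance * (noise_pred(x, t, cond=c) - eps_uncond)
    return eps

if __name__ == "__main__":
    x = np.random.standard_normal((64, 64, 3)).astype(np.float32)
    eps = composed_noise(x, t=50, conditions=("a red cube", "a blue sphere"))
    print(eps.shape)  # (64, 64, 3)
```

In a full sampler, this composed estimate would replace the single model's noise
prediction at every denoising step, so each condition shapes its own part of the
final image.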