Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models
- URL: http://arxiv.org/abs/2305.16322v3
- Date: Sun, 29 Oct 2023 15:59:24 GMT
- Title: Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models
- Authors: Shihao Zhao and Dongdong Chen and Yen-Chun Chen and Jianmin Bao and
Shaozhe Hao and Lu Yuan and Kwan-Yee K. Wong
- Abstract summary: We introduce Uni-ControlNet, a unified framework that allows for the simultaneous utilization of different local controls and global controls.
Unlike existing methods, Uni-ControlNet only requires the fine-tuning of two additional adapters upon frozen pre-trained text-to-image diffusion models.
Uni-ControlNet demonstrates its superiority over existing methods in terms of controllability, generation quality and composability.
- Score: 82.19740045010435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text-to-Image diffusion models have made tremendous progress over the past
two years, enabling the generation of highly realistic images based on
open-domain text descriptions. However, despite their success, text
descriptions often struggle to adequately convey detailed controls, even when
composed of long and complex texts. Moreover, recent studies have also shown
that these models face challenges in understanding such complex texts and
generating the corresponding images. Therefore, there is a growing need to
enable more control modes beyond text description. In this paper, we introduce
Uni-ControlNet, a unified framework that allows for the simultaneous
utilization of different local controls (e.g., edge maps, depth maps,
segmentation masks) and global controls (e.g., CLIP image embeddings) in a
flexible and composable manner within one single model. Unlike existing
methods, Uni-ControlNet only requires the fine-tuning of two additional
adapters upon frozen pre-trained text-to-image diffusion models, eliminating
the huge cost of training from scratch. Moreover, thanks to some dedicated
adapter designs, Uni-ControlNet only necessitates a constant number (i.e., 2)
of adapters, regardless of the number of local or global controls used. This
not only reduces the fine-tuning costs and model size, making it more suitable
for real-world deployment, but also facilitates the composability of different
conditions. Through both quantitative and qualitative comparisons,
Uni-ControlNet demonstrates its superiority over existing methods in terms of
controllability, generation quality and composability. Code is available at
\url{https://github.com/ShihaoZhaoZSH/Uni-ControlNet}.
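A minimal sketch of the two-adapter idea described in the abstract, assuming a PyTorch-style implementation: one adapter encodes stacked local conditions (edge, depth, and segmentation maps) into multi-scale features for a frozen diffusion backbone, and a second projects a CLIP image embedding into a few extra conditioning tokens. Module names, feature widths, and injection points below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a frozen diffusion backbone plus one local and one
# global adapter, mirroring the structure described in the abstract. All sizes
# and module names are assumptions for demonstration.
import torch
import torch.nn as nn


class LocalControlAdapter(nn.Module):
    """Encodes concatenated local conditions (e.g. edge / depth / segmentation
    maps stacked along the channel axis) into multi-scale feature maps that a
    frozen diffusion UNet could consume."""

    def __init__(self, cond_channels: int, widths=(64, 128, 256)):
        super().__init__()
        self.stages = nn.ModuleList()
        in_ch = cond_channels
        for w in widths:
            self.stages.append(
                nn.Sequential(
                    nn.Conv2d(in_ch, w, kernel_size=3, stride=2, padding=1),
                    nn.SiLU(),
                )
            )
            in_ch = w

    def forward(self, local_conds: torch.Tensor):
        feats, x = [], local_conds
        for stage in self.stages:
            x = stage(x)
            feats.append(x)  # one control feature map per resolution
        return feats


class GlobalControlAdapter(nn.Module):
    """Projects a CLIP image embedding into a few extra 'condition tokens'
    that can be appended to the text tokens used for cross-attention."""

    def __init__(self, clip_dim: int = 768, token_dim: int = 768, num_tokens: int = 4):
        super().__init__()
        self.num_tokens = num_tokens
        self.proj = nn.Sequential(
            nn.Linear(clip_dim, token_dim * num_tokens),
            nn.SiLU(),
            nn.Linear(token_dim * num_tokens, token_dim * num_tokens),
        )

    def forward(self, clip_embed: torch.Tensor):
        b = clip_embed.shape[0]
        return self.proj(clip_embed).view(b, self.num_tokens, -1)


if __name__ == "__main__":
    # Three single-channel local maps (edge, depth, segmentation) stacked together.
    local = torch.randn(1, 3, 512, 512)
    clip_embed = torch.randn(1, 768)

    local_feats = LocalControlAdapter(cond_channels=3)(local)
    global_tokens = GlobalControlAdapter()(clip_embed)

    print([f.shape for f in local_feats])  # multi-scale control features
    print(global_tokens.shape)             # (1, 4, 768) extra context tokens
```

Because both adapters sit outside the frozen backbone, only their parameters need fine-tuning, which is what keeps the adapter count fixed at two regardless of how many local or global conditions are stacked.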
Related papers
- DC-ControlNet: Decoupling Inter- and Intra-Element Conditions in Image Generation with Diffusion Models [55.42794740244581]
We introduce DC (Decouple)-ControlNet, a framework for multi-condition image generation.
The core idea behind DC-ControlNet is to decouple control conditions, transforming global control into a hierarchical system.
For interactions between elements, we introduce the Inter-Element Controller, which accurately handles multi-element interactions.
arXiv Detail & Related papers (2025-02-20T18:01:02Z)
- UNIC-Adapter: Unified Image-instruction Adapter with Multi-modal Transformer for Image Generation [64.8341372591993]
We propose a new approach to unify controllable generation within a single framework.
Specifically, we propose the unified image-instruction adapter (UNIC-Adapter) built on the Multi-Modal-Diffusion Transformer architecture.
Our UNIC-Adapter effectively extracts multi-modal instruction information by incorporating both conditional images and task instructions.
arXiv Detail & Related papers (2024-12-25T15:19:02Z)
- AnyControl: Create Your Artwork with Versatile Control on Text-to-Image Generation [24.07613591217345]
Linguistic control enables effective content creation but struggles to provide fine-grained control over image generation.
AnyControl develops a novel Multi-Control framework that extracts a unified multi-modal embedding to guide the generation process.
This approach enables a holistic understanding of user inputs, and produces high-quality, faithful results under versatile control signals.
arXiv Detail & Related papers (2024-06-27T07:40:59Z)
- OmniControlNet: Dual-stage Integration for Conditional Image Generation [61.1432268643639]
We provide a two-way integration for the widely adopted ControlNet by integrating external condition generation algorithms into a single dense prediction method.
Our proposed OmniControlNet consolidates 1) the condition generation by a single multi-tasking dense prediction algorithm under the task embedding guidance and 2) the image generation process for different conditioning types under the textual embedding guidance.
arXiv Detail & Related papers (2024-06-09T18:03:47Z)
- FlexEControl: Flexible and Efficient Multimodal Control for Text-to-Image Generation [99.4649330193233]
Controllable text-to-image (T2I) diffusion models generate images conditioned on both text prompts and semantic inputs of other modalities like edge maps.
We propose a novel Flexible and Efficient method, FlexEControl, for controllable T2I generation.
arXiv Detail & Related papers (2024-05-08T06:09:11Z)
- UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild [166.25327094261038]
We introduce UniControl, a new generative foundation model for controllable condition-to-image (C2I) tasks.
UniControl consolidates a wide array of C2I tasks within a singular framework, while still allowing for arbitrary language prompts.
Trained on nine unique C2I tasks, UniControl demonstrates impressive zero-shot generation abilities.
arXiv Detail & Related papers (2023-05-18T17:41:34Z)
- Adding Conditional Control to Text-to-Image Diffusion Models [37.98427255384245]
We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models.
ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls.
arXiv Detail & Related papers (2023-02-10T23:12:37Z)
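The ControlNet entry above hinges on one structural trick: the pretrained blocks are locked, a trainable copy processes the spatial condition, and zero-initialized convolutions add the copy's output back so that training starts from the unmodified backbone. The sketch below illustrates that pattern on a single block; block shapes and module names are illustrative, not the paper's code.

```python
# Compact sketch of the ControlNet pattern: frozen backbone block + trainable
# copy + zero-initialized convolutions. Shapes are illustrative assumptions.
import copy
import torch
import torch.nn as nn


def zero_conv(channels: int) -> nn.Conv2d:
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv


class ControlledBlock(nn.Module):
    def __init__(self, pretrained_block: nn.Module, channels: int):
        super().__init__()
        self.trainable = copy.deepcopy(pretrained_block)  # trainable copy
        self.locked = pretrained_block                    # frozen backbone block
        for p in self.locked.parameters():
            p.requires_grad_(False)
        self.zero_in = zero_conv(channels)                # injects the condition
        self.zero_out = zero_conv(channels)               # gates the residual

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # Zero convs make the residual vanish at initialization, so the model
        # initially behaves exactly like the frozen backbone.
        residual = self.zero_out(self.trainable(x + self.zero_in(cond)))
        return self.locked(x) + residual


if __name__ == "__main__":
    block = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.SiLU())
    layer = ControlledBlock(block, channels=64)
    x, cond = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
    print(layer(x, cond).shape)  # torch.Size([1, 64, 32, 32])
```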