CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion
- URL: http://arxiv.org/abs/2403.05121v1
- Date: Fri, 8 Mar 2024 07:32:50 GMT
- Title: CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion
- Authors: Wendi Zheng, Jiayan Teng, Zhuoyi Yang, Weihan Wang, Jidong Chen,
Xiaotao Gu, Yuxiao Dong, Ming Ding, Jie Tang
- Abstract summary: We propose CogView3, an innovative cascaded framework that enhances the performance of text-to-image diffusion.
CogView3 is the first model implementing relay diffusion in the realm of text-to-image generation.
Our experimental results demonstrate that CogView3 outperforms SDXL, the current state-of-the-art open-source text-to-image diffusion model, by 77.0% in human evaluations.
- Score: 36.95767748269613
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in text-to-image generative systems have been largely
driven by diffusion models. However, single-stage text-to-image diffusion
models still face challenges in computational efficiency and the refinement of
image details. To tackle these issues, we propose CogView3, an
innovative cascaded framework that enhances the performance of text-to-image
diffusion. CogView3 is the first model implementing relay diffusion in the
realm of text-to-image generation, executing the task by first creating
low-resolution images and subsequently applying relay-based super-resolution.
This methodology not only results in competitive text-to-image outputs but also
greatly reduces both training and inference costs. Our experimental results
demonstrate that CogView3 outperforms SDXL, the current state-of-the-art
open-source text-to-image diffusion model, by 77.0% in human evaluations,
while requiring only about half of the inference time. The distilled variant of
CogView3 achieves comparable performance while using only 1/10 of SDXL's
inference time.
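
The two-stage cascade described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration of a relay-diffusion pipeline, not CogView3's actual implementation: the model interfaces (`sample`, `add_noise`, `sample_from`), resolutions, step counts, and the relay timestep are assumptions for illustration only.

```python
import torch

# Hypothetical stand-ins for the two diffusion stages; CogView3's actual
# models, noise schedules, and resolutions may differ.
def generate_relay_cascade(prompt, base_model, sr_model, t_relay=0.6,
                           base_res=512, target_res=1024,
                           base_steps=50, relay_steps=10):
    """Sketch of a relay-diffusion cascade: a base model produces a
    low-resolution image, then a super-resolution model continues the
    diffusion from an intermediate noise level instead of pure noise."""
    # Stage 1: run the full denoising trajectory at low resolution.
    x_low = base_model.sample(prompt, resolution=base_res, steps=base_steps)

    # Upsample the base output to the target resolution.
    x_up = torch.nn.functional.interpolate(
        x_low, size=(target_res, target_res), mode="bilinear")

    # Relay start: noise the upsampled image to the intermediate timestep
    # t_relay rather than starting from x_T ~ N(0, I).
    noise = torch.randn_like(x_up)
    x_t = sr_model.add_noise(x_up, noise, t=t_relay)

    # Stage 2: denoise only the remaining segment [t_relay -> 0], which is
    # why the relay stage needs far fewer steps than a from-scratch pass
    # at high resolution.
    return sr_model.sample_from(x_t, prompt, t_start=t_relay,
                                steps=relay_steps)
```

Because the expensive high-resolution stage covers only the tail of the diffusion trajectory, both training and sampling cost at the target resolution shrink relative to a single-stage model.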
Related papers
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large, generative image models.
We propose an algorithm for fast constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- OrientDream: Streamlining Text-to-3D Generation with Explicit Orientation Control [66.03885917320189]
OrientDream is a camera-orientation-conditioned framework for efficient and multi-view-consistent 3D generation from textual prompts.
Our strategy incorporates an explicit camera-orientation-conditioned feature into the pre-training of a 2D text-to-image diffusion module.
Our experiments reveal that our method not only produces high-quality NeRF models with consistent multi-view properties but also optimizes significantly faster than existing methods.
arXiv Detail & Related papers (2024-06-14T13:16:18Z)
- Text-Image Conditioned Diffusion for Consistent Text-to-3D Generation [28.079441901818296]
We propose a text-to-3D method for Neural Radiance Fields (NeRFs) that explicitly enforces fine-grained view consistency.
Our method achieves state-of-the-art performance over existing text-to-3D methods.
arXiv Detail & Related papers (2023-12-19T01:09:49Z)
- SwiftBrush: One-Step Text-to-Image Diffusion Model with Variational Score Distillation [1.5892730797514436]
Text-to-image diffusion models often suffer from slow iterative sampling processes.
We present a novel image-free distillation scheme named SwiftBrush.
SwiftBrush achieves an FID score of 16.67 and a CLIP score of 0.29 on the COCO-30K benchmark.
arXiv Detail & Related papers (2023-12-08T18:44:09Z)
- SnapFusion: Text-to-Image Diffusion Model on Mobile Devices within Two Seconds [88.06788636008051]
Text-to-image diffusion models can create stunning images from natural language descriptions that rival the work of professional artists and photographers.
These models are large, with complex network architectures and tens of denoising iterations, making them computationally expensive and slow to run.
We present a generic approach that unlocks running text-to-image diffusion models on mobile devices in less than 2 seconds.
arXiv Detail & Related papers (2023-06-01T17:59:25Z)
- Wuerstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models [6.821399706256863]
W"urstchen is a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness.
A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely compact semantic image representation.
arXiv Detail & Related papers (2023-06-01T13:00:53Z)
- Shifted Diffusion for Text-to-image Generation [65.53758187995744]
Corgi is based on our proposed shifted diffusion model, which achieves better image embedding generation from input text.
Corgi also achieves new state-of-the-art results across different datasets on downstream language-free text-to-image generation tasks.
arXiv Detail & Related papers (2022-11-24T03:25:04Z)
- eDiffi: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers [87.52504764677226]
Large-scale diffusion-based generative models have led to breakthroughs in text-conditioned high-resolution image synthesis.
We train an ensemble of text-to-image diffusion models, each specialized for a different stage of synthesis.
Our ensemble of diffusion models, called eDiffi, results in improved text alignment while maintaining the same inference cost (see the sketch after this list).
arXiv Detail & Related papers (2022-11-02T17:43:04Z)
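
To make eDiffi's stage-specialization idea concrete, here is a minimal, hypothetical sketch of routing each denoising step to the expert for its noise level; the expert count, band boundaries, and call signature are assumptions, not eDiffi's actual implementation.

```python
# Hypothetical routing by noise level: early (high-noise) steps shape global
# layout, late (low-noise) steps refine detail, so each band gets its own
# specialized denoiser.
def route_expert(experts, t, t_max=1000):
    """Pick the denoiser specialized for the synthesis stage containing
    timestep t."""
    band = int(len(experts) * t / t_max)
    return experts[min(band, len(experts) - 1)]  # clamp at the final band

def ensemble_denoise_step(experts, x_t, t, text_emb):
    # Exactly one expert runs per step, so the per-step inference cost
    # matches a single-model diffusion sampler.
    expert = route_expert(experts, t)
    return expert(x_t, t, text_emb)
```

Because routing is a constant-time lookup, the ensemble adds model capacity per stage without changing sampling cost, which is the trade-off the eDiffi summary highlights.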
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.