ControlDreamer: Stylized 3D Generation with Multi-View ControlNet
- URL: http://arxiv.org/abs/2312.01129v2
- Date: Fri, 5 Jan 2024 05:03:27 GMT
- Title: ControlDreamer: Stylized 3D Generation with Multi-View ControlNet
- Authors: Yeongtak Oh, Jooyoung Choi, Yongsung Kim, Minjun Park, Chaehun Shin,
and Sungroh Yoon
- Abstract summary: We introduce multi-view ControlNet, a novel depth-aware multi-view diffusion model trained on datasets from a carefully curated text corpus.
Our multi-view ControlNet is then integrated into our two-stage pipeline, ControlDreamer, enabling text-guided generation of stylized 3D models.
- Score: 34.92628800597151
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in text-to-3D generation have significantly contributed
to the automation and democratization of 3D content creation. Building upon
these developments, we aim to address the limitations of current methods in
generating 3D models with creative geometry and styles. We introduce multi-view
ControlNet, a novel depth-aware multi-view diffusion model trained on generated
datasets from a carefully curated text corpus. Our multi-view ControlNet is
then integrated into our two-stage pipeline, ControlDreamer, enabling
text-guided generation of stylized 3D models. Additionally, we present a
comprehensive benchmark for 3D style editing, encompassing a broad range of
subjects, including objects, animals, and characters, to further facilitate
research on diverse 3D generation. Our comparative analysis reveals that this
new pipeline outperforms existing text-to-3D methods as evidenced by human
evaluations and CLIP score metrics.
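The central component is a ControlNet that conditions generation on depth maps rendered from a source 3D model, so the geometry is preserved while the text prompt drives the style. As a rough illustration of this kind of depth conditioning, the sketch below uses the publicly available single-view depth ControlNet from the Hugging Face diffusers library as a stand-in; the paper's multi-view ControlNet, its checkpoints, and its training data are not modeled here, and the file path and prompt are illustrative assumptions.

```python
# Minimal sketch of depth-conditioned image generation with a ControlNet.
# A standard single-view depth ControlNet stands in for the paper's
# multi-view ControlNet, which is not modeled here.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Publicly available checkpoints; the paper's own multi-view weights differ.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A depth map rendered from the source 3D model (path is illustrative).
depth_map = load_image("renders/view_000_depth.png")

# The depth map constrains geometry while the prompt drives the style.
styled_view = pipe(
    "a knight in ornate gothic armor, concept art",
    image=depth_map,
    num_inference_steps=30,
).images[0]
styled_view.save("styled_view_000.png")
```

In the two-stage ControlDreamer pipeline, depth rendered from the first-stage 3D model would condition the stylization stage across many views; the single-view call above only gestures at that conditioning mechanism.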
Related papers
- A Survey On Text-to-3D Contents Generation In The Wild [5.875257756382124]
3D content creation plays a vital role in various applications, such as gaming, robotics simulation, and virtual reality.
To address the difficulty of manual 3D asset creation, text-to-3D generation technologies have emerged as a promising solution for automating 3D creation.
arXiv Detail & Related papers (2024-05-15T15:23:22Z)
- Coin3D: Controllable and Interactive 3D Assets Generation with Proxy-Guided Conditioning [52.81032340916171]
Coin3D allows users to control the 3D generation using a coarse geometry proxy assembled from basic shapes.
Our method achieves superior controllability and flexibility in the 3D asset generation task.
arXiv Detail & Related papers (2024-05-13T17:56:13Z)
- 3D-SceneDreamer: Text-Driven 3D-Consistent Scene Generation [51.64796781728106]
We propose a generative refinement network that synthesizes new content of higher quality by exploiting the natural image prior of the 2D diffusion model together with the global 3D information of the current scene.
Our approach supports a wide variety of scene generation and arbitrary camera trajectories with improved visual quality and 3D consistency.
arXiv Detail & Related papers (2024-03-14T14:31:22Z)
- VolumeDiffusion: Flexible Text-to-3D Generation with Efficient Volumetric Encoder [56.59814904526965]
This paper introduces a pioneering 3D encoder designed for text-to-3D generation.
A lightweight network is developed to efficiently acquire feature volumes from multi-view images.
A diffusion model with a 3D U-Net backbone is then trained on these feature volumes for text-to-3D generation.
arXiv Detail & Related papers (2023-12-18T18:59:05Z)
- LucidDreaming: Controllable Object-Centric 3D Generation [11.965998779054079]
We present LucidDreaming as an effective pipeline capable of fine-grained control over 3D generation.
It requires only minimal input of 3D bounding boxes, which can be deduced from a simple text prompt.
We show that our method exhibits remarkable adaptability across a spectrum of mainstream Score Distillation Sampling (SDS)-based 3D generation frameworks (a brief sketch of SDS follows this list).
arXiv Detail & Related papers (2023-11-30T18:55:23Z)
- Control3D: Towards Controllable Text-to-3D Generation [107.81136630589263]
We present Control3D, a text-to-3D generation framework conditioned on an additional hand-drawn sketch.
A 2D conditioned diffusion model (ControlNet) is remoulded to guide the learning of a 3D scene parameterized as a NeRF.
We exploit a pre-trained differentiable photo-to-sketch model to directly estimate the sketch of images rendered from the synthetic 3D scene.
arXiv Detail & Related papers (2023-11-09T15:50:32Z)
- T$^3$Bench: Benchmarking Current Progress in Text-to-3D Generation [52.029698642883226]
Methods in text-to-3D leverage powerful pretrained diffusion models to optimize a NeRF.
Most studies evaluate their results with subjective case studies and user experiments.
We introduce T$^3$Bench, the first comprehensive text-to-3D benchmark.
arXiv Detail & Related papers (2023-10-04T17:12:18Z)
- Guide3D: Create 3D Avatars from Text and Image Guidance [55.71306021041785]
Guide3D is a text-and-image-guided generative model for 3D avatar generation based on diffusion models.
Our framework produces topologically and structurally correct geometry and high-resolution textures.
arXiv Detail & Related papers (2023-08-18T17:55:47Z)
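Several of the entries above (e.g., LucidDreaming and T$^3$Bench) build on Score Distillation Sampling (SDS), which optimizes 3D scene parameters by pushing rendered views toward the text-conditioned prior of a frozen 2D diffusion model. Below is a minimal, non-authoritative sketch of one SDS update; `render_views`, `diffusion`, and its methods (`add_noise`, `predict_noise`, `sds_weight`) are placeholder names standing in for a differentiable renderer and a frozen diffusion model, not any specific library's API.

```python
# Minimal sketch of a single Score Distillation Sampling (SDS) step.
import torch

def sds_step(scene_params, render_views, diffusion, text_emb, optimizer,
             t_min=20, t_max=980):
    # Rendered RGB views, shape (B, 3, H, W), differentiable w.r.t. scene_params.
    x = render_views(scene_params)
    t = torch.randint(t_min, t_max, (x.shape[0],), device=x.device)
    noise = torch.randn_like(x)
    x_t = diffusion.add_noise(x, noise, t)          # forward diffusion q(x_t | x)

    with torch.no_grad():                           # the 2D diffusion model stays frozen
        eps_hat = diffusion.predict_noise(x_t, t, text_emb)

    w = diffusion.sds_weight(t).view(-1, 1, 1, 1)   # timestep weighting w(t)
    grad = w * (eps_hat - noise)                    # SDS "gradient" on the rendering
    # Reparameterized loss whose gradient w.r.t. scene_params equals
    # w(t) * (eps_hat - eps) * dx/dtheta, as in DreamFusion-style SDS.
    loss = (grad.detach() * x).sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeating this step over many sampled camera poses and timesteps distills the 2D prior into the 3D representation; methods like those listed above differ mainly in the 3D representation, the conditioning signals, and the guidance applied to the diffusion model.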