ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models
- URL: http://arxiv.org/abs/2403.01807v1
- Date: Mon, 4 Mar 2024 07:57:05 GMT
- Title: ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models
- Authors: Lukas Höllein, Aljaž Božič, Norman Müller, David Novotny, Hung-Yu Tseng, Christian Richardt, Michael Zollhöfer, Matthias Nießner
- Abstract summary: We present a method that learns to generate multi-view images in a single denoising process from real-world data.
We design an autoregressive generation scheme that renders more 3D-consistent images at any viewpoint.
- Score: 13.551691697814908
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D asset generation is getting massive amounts of attention, inspired by the
recent success of text-guided 2D content creation. Existing text-to-3D methods
use pretrained text-to-image diffusion models in an optimization problem or
fine-tune them on synthetic data, which often results in non-photorealistic 3D
objects without backgrounds. In this paper, we present a method that leverages
pretrained text-to-image models as a prior, and learns to generate multi-view
images in a single denoising process from real-world data. Concretely, we
propose to integrate 3D volume-rendering and cross-frame-attention layers into
each block of the existing U-Net network of the text-to-image model. Moreover,
we design an autoregressive generation scheme that renders more 3D-consistent images
at any viewpoint. We train our model on real-world datasets of objects and
showcase its capabilities to generate instances with a variety of high-quality
shapes and textures in authentic surroundings. Compared to the existing
methods, the results generated by our method are consistent, and have favorable
visual quality (-30% FID, -37% KID).
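The key architectural idea in the abstract is cross-frame attention: tokens from one view attend over the tokens of all views in the batch, which couples the per-view denoising processes. A minimal NumPy sketch of that coupling, with illustrative names and identity Q/K/V projections (the paper's layers are learned and sit inside a pretrained U-Net):

```python
import numpy as np

def cross_frame_attention(features):
    """Toy cross-frame attention over a batch of views.

    features: (n_views, n_tokens, dim) array. Every view's tokens
    attend over the concatenated tokens of ALL views, which is how
    multi-view consistency is encouraged. Q/K/V projections are
    identity here; in practice they are learned linear layers.
    """
    n_views, n_tokens, dim = features.shape
    q = features.reshape(n_views * n_tokens, dim)   # queries from every view
    kv = q                                          # keys/values pooled across views
    scores = q @ kv.T / np.sqrt(dim)                # (N, N) attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over all views' tokens
    out = weights @ kv                              # convex combination of all tokens
    return out.reshape(n_views, n_tokens, dim)
```

Because each output token is a convex combination of tokens from every view, information flows between viewpoints within a single denoising step.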
Related papers
- Bootstrap3D: Improving 3D Content Creation with Synthetic Data [80.92268916571712]
A critical bottleneck is the scarcity of high-quality 3D assets with detailed captions.
We propose Bootstrap3D, a novel framework that automatically generates an arbitrary quantity of multi-view images.
We have generated 1 million high-quality synthetic multi-view images with dense descriptive captions.
arXiv Detail & Related papers (2024-05-31T17:59:56Z)
- 3D-SceneDreamer: Text-Driven 3D-Consistent Scene Generation [51.64796781728106]
We propose a generative refinement network to synthesize new content with higher quality by exploiting the natural image prior of the 2D diffusion model together with the global 3D information of the current scene.
Our approach supports a wide variety of scene generation and arbitrary camera trajectories with improved visual quality and 3D consistency.
arXiv Detail & Related papers (2024-03-14T14:31:22Z)
- Geometry aware 3D generation from in-the-wild images in ImageNet [18.157263188192434]
We propose a method for reconstructing 3D geometry from the diverse and unstructured ImageNet dataset without camera pose information.
We use an efficient triplane representation to learn 3D models from 2D images and modify the architecture of the generator backbone based on StyleGAN2.
The trained generator can produce class-conditional 3D models as well as renderings from arbitrary viewpoints.
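The triplane representation mentioned above stores 3D content as three axis-aligned feature planes; a 3D point is projected onto each plane and the sampled features are combined. A toy lookup, assuming nearest-neighbour sampling and hypothetical names (real systems use bilinear sampling and a learned decoder on top):

```python
import numpy as np

def query_triplane(planes, xyz):
    """Toy triplane feature lookup.

    planes: dict with "xy", "xz", "yz" arrays of shape (R, R, C).
    xyz: (N, 3) points with coordinates in [0, 1).
    Each point is projected onto the three planes and the three
    sampled feature vectors are summed into an (N, C) result.
    """
    R = planes["xy"].shape[0]
    idx = np.clip((xyz * R).astype(int), 0, R - 1)  # nearest plane pixel
    x, y, z = idx[:, 0], idx[:, 1], idx[:, 2]
    return planes["xy"][x, y] + planes["xz"][x, z] + planes["yz"][y, z]
```

The appeal of triplanes over dense voxel grids is memory: three R x R planes scale quadratically with resolution rather than cubically.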
arXiv Detail & Related papers (2024-01-31T23:06:39Z)
- IT3D: Improved Text-to-3D Generation with Explicit View Synthesis [71.68595192524843]
This study presents a novel strategy that leverages explicitly synthesized multi-view images to address these issues.
Our approach uses image-to-image pipelines, powered by LDMs, to generate posed high-quality images.
For the incorporated discriminator, the synthesized multi-view images are considered real data, while the renderings of the optimized 3D models function as fake data.
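The real/fake labelling described above is a standard discriminator loss with the roles assigned as in the summary: synthesized multi-view images are labelled real, renderings of the 3D model being optimized are labelled fake. A sketch under assumed helper names, not IT3D's actual code:

```python
import numpy as np

def bce_with_logits(logits, labels):
    """Numerically stable binary cross-entropy on raw logits."""
    return np.mean(np.maximum(logits, 0) - logits * labels
                   + np.log1p(np.exp(-np.abs(logits))))

def discriminator_loss(synth_logits, render_logits):
    """Discriminator scores for diffusion-synthesized multi-view
    images (labelled 1, "real") and for renderings of the 3D model
    under optimization (labelled 0, "fake")."""
    real_loss = bce_with_logits(synth_logits, np.ones_like(synth_logits))
    fake_loss = bce_with_logits(render_logits, np.zeros_like(render_logits))
    return real_loss + fake_loss
```

As the 3D model improves, its renderings become harder to separate from the synthesized views, and the gradient from this loss pushes the renderings toward the LDM-generated image distribution.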
arXiv Detail & Related papers (2023-08-22T14:39:17Z)
- Guide3D: Create 3D Avatars from Text and Image Guidance [55.71306021041785]
Guide3D is a text-and-image-guided generative model for 3D avatar generation based on diffusion models.
Our framework produces topologically and structurally correct geometry and high-resolution textures.
arXiv Detail & Related papers (2023-08-18T17:55:47Z)
- ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections [71.46546520120162]
Estimating 3D articulated shapes like animal bodies from monocular images is inherently challenging.
We propose ARTIC3D, a self-supervised framework to reconstruct per-instance 3D shapes from a sparse image collection in-the-wild.
We produce realistic animations by fine-tuning the rendered shape and texture under rigid part transformations.
arXiv Detail & Related papers (2023-06-07T17:47:50Z)
- Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation [45.69270771487455]
We propose a new method of Fantasia3D for high-quality text-to-3D content creation.
Key to Fantasia3D is the disentangled modeling and learning of geometry and appearance.
Our framework is more compatible with popular graphics engines, supporting relighting, editing, and physical simulation of the generated 3D assets.
arXiv Detail & Related papers (2023-03-24T09:30:09Z)
- 3D-TOGO: Towards Text-Guided Cross-Category 3D Object Generation [107.46972849241168]
The 3D-TOGO model generates 3D objects in the form of neural radiance fields with good texture.
Experiments on the largest 3D object dataset (i.e., ABO) are conducted to verify that 3D-TOGO can better generate high-quality 3D objects.
arXiv Detail & Related papers (2022-12-02T11:31:49Z)
- Leveraging 2D Data to Learn Textured 3D Mesh Generation [33.32377849866736]
We present the first generative model of textured 3D meshes.
We train our model to explain a distribution of images by modelling each image as a 3D foreground object.
It learns to generate meshes that when rendered, produce images similar to those in its training set.
arXiv Detail & Related papers (2020-04-08T18:00:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.