Text-To-4D Dynamic Scene Generation
- URL: http://arxiv.org/abs/2301.11280v1
- Date: Thu, 26 Jan 2023 18:14:32 GMT
- Title: Text-To-4D Dynamic Scene Generation
- Authors: Uriel Singer, Shelly Sheynin, Adam Polyak, Oron Ashual, Iurii Makarov,
Filippos Kokkinos, Naman Goyal, Andrea Vedaldi, Devi Parikh, Justin Johnson,
Yaniv Taigman
- Abstract summary: We present MAV3D (Make-A-Video3D), a method for generating three-dimensional dynamic scenes from text descriptions.
Our approach uses a 4D dynamic Neural Radiance Field (NeRF), which is optimized for scene appearance, density, and motion consistency.
The dynamic video output generated from the provided text can be viewed from any camera location and angle, and can be composited into any 3D environment.
- Score: 111.89517759596345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present MAV3D (Make-A-Video3D), a method for generating three-dimensional
dynamic scenes from text descriptions. Our approach uses a 4D dynamic Neural
Radiance Field (NeRF), which is optimized for scene appearance, density, and
motion consistency by querying a Text-to-Video (T2V) diffusion-based model. The
dynamic video output generated from the provided text can be viewed from any
camera location and angle, and can be composited into any 3D environment. MAV3D
does not require any 3D or 4D data and the T2V model is trained only on
Text-Image pairs and unlabeled videos. We demonstrate the effectiveness of our
approach using comprehensive quantitative and qualitative experiments and show
an improvement over previously established internal baselines. To the best of
our knowledge, our method is the first to generate 3D dynamic scenes given a
text description.
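
At a high level, the optimization the abstract describes can be pictured as rendering a video from a differentiable 4D field and nudging the field with gradients supplied by a frozen Text-to-Video diffusion model. The sketch below is a minimal, hypothetical illustration of that loop in PyTorch: `Dynamic4DField` is a toy MLP rather than the paper's 4D NeRF, the renderer is a naive orthographic volume integrator, and `TextToVideoDiffusion.score_gradient` is a stub standing in for a real score-distillation (SDS-style) critic; none of this is MAV3D's actual implementation.

```python
# Minimal sketch of a MAV3D-style optimization loop. All components here are
# illustrative stand-ins, not the paper's method.
import torch
import torch.nn as nn

class Dynamic4DField(nn.Module):
    """Toy 4D field: maps (x, y, z, t) to a density and an RGB color."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density channel + 3 color channels
        )

    def forward(self, xyzt):
        out = self.net(xyzt)
        density = torch.relu(out[..., :1])   # densities are non-negative
        color = torch.sigmoid(out[..., 1:])  # RGB constrained to [0, 1]
        return density, color

def render_video(field, n_frames=8, res=32, n_samples=16):
    """Naive orthographic volume rendering along z, one image per time step."""
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, res),
                            torch.linspace(-1, 1, res), indexing="ij")
    zs = torch.linspace(-1, 1, n_samples)
    frames = []
    for i in range(n_frames):
        t = torch.full((res, res, n_samples, 1), i / max(n_frames - 1, 1))
        pts = torch.stack([xs[..., None].expand(res, res, n_samples),
                           ys[..., None].expand(res, res, n_samples),
                           zs.expand(res, res, n_samples)], dim=-1)
        density, color = field(torch.cat([pts, t], dim=-1))
        alpha = 1 - torch.exp(-density * (2.0 / n_samples))  # per-sample opacity
        trans = torch.cumprod(                                # transmittance
            torch.cat([torch.ones_like(alpha[:, :, :1]),
                       1 - alpha + 1e-6], dim=2), dim=2)[:, :, :-1]
        frames.append((alpha * trans * color).sum(dim=2))     # composite RGB
    return torch.stack(frames)  # (n_frames, res, res, 3)

class TextToVideoDiffusion:
    """Hypothetical frozen T2V critic. A real one would noise the rendered
    video, denoise it conditioned on the prompt, and return the residual."""
    def score_gradient(self, video, prompt):
        return torch.randn_like(video) * 0.01  # placeholder gradient

field = Dynamic4DField()
optim = torch.optim.Adam(field.parameters(), lr=1e-3)
t2v = TextToVideoDiffusion()

for step in range(100):
    video = render_video(field)  # differentiable render of the 4D scene
    grad = t2v.score_gradient(video, "a corgi playing with a ball")
    optim.zero_grad()
    video.backward(gradient=grad)  # inject the SDS-style gradient
    optim.step()
```

The key design point the abstract emphasizes is visible in the loop: the 4D field never sees 3D or 4D ground truth; the only supervision is the gradient the frozen video model returns for the rendered clip.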
Related papers
- Tex4D: Zero-shot 4D Scene Texturing with Video Diffusion Models [54.35214051961381]
3D meshes are widely used in computer vision and graphics for their efficiency in animation and minimal memory use in movies, games, AR, and VR.
However, creating temporally consistent and realistic textures for meshes remains labor-intensive for professional artists.
We present Tex4D, which integrates the inherent geometry of mesh sequences with video diffusion models to produce consistent textures.
arXiv Detail & Related papers (2024-10-14T17:59:59Z)
- VividDream: Generating 3D Scene with Ambient Dynamics [13.189732244489225]
We introduce VividDream, a method for generating explorable 4D scenes with ambient dynamics from a single input image or text prompt.
VividDream can provide human viewers with compelling 4D experiences generated based on diverse real images and text prompts.
arXiv Detail & Related papers (2024-05-30T17:59:24Z)
- 4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency [118.15258850780417]
This work introduces 4DGen, a novel framework for grounded 4D content creation.
We identify static 3D assets and monocular video sequences as key components in constructing the 4D content.
Our pipeline facilitates conditional 4D generation, enabling users to specify geometry (3D assets) and motion (monocular videos).
arXiv Detail & Related papers (2023-12-28T18:53:39Z)
- 4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling [91.99172731031206]
Current text-to-4D methods face a three-way tradeoff among the quality of scene appearance, 3D structure, and motion.
We introduce hybrid score distillation sampling, an alternating optimization procedure that blends supervision signals from multiple pre-trained diffusion models (a schematic of such an alternating schedule is sketched after this list).
arXiv Detail & Related papers (2023-11-29T18:58:05Z)
- A Unified Approach for Text- and Image-guided 4D Scene Generation [58.658768832653834]
We propose Dream-in-4D, which features a novel two-stage approach for text-to-4D synthesis.
We show that our approach significantly advances image and motion quality, 3D consistency and text fidelity for text-to-4D generation.
Our method offers, for the first time, a unified approach for text-to-4D, image-to-4D and personalized 4D generation tasks.
arXiv Detail & Related papers (2023-11-28T15:03:53Z)
- Animate124: Animating One Image to 4D Dynamic Scene [108.17635645216214]
Animate124 is the first work to animate a single in-the-wild image into a 3D video through textual motion descriptions.
Our method demonstrates significant advancements over existing baselines.
arXiv Detail & Related papers (2023-11-24T16:47:05Z)
- Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields [29.907615852310204]
We present Text2NeRF, which can generate a wide range of 3D scenes purely from a text prompt.
Our method requires no additional training data but only a natural language description of the scene as the input.
arXiv Detail & Related papers (2023-05-19T10:58:04Z)
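
For contrast with the single-critic loop in the MAV3D sketch above, here is a minimal, hypothetical sketch of the alternating schedule that a hybrid-SDS-style method such as 4D-fy describes: several frozen diffusion critics (text-to-image, 3D-aware, text-to-video) take turns supervising the same differentiable 4D render. The `StubCritic` class, the critic names, and the weights are illustrative assumptions, not 4D-fy's implementation.

```python
# Hypothetical sketch of an alternating, multi-critic score-distillation
# schedule in the spirit of hybrid SDS; names and weights are assumptions.
import torch

class StubCritic:
    """Placeholder for a frozen pre-trained diffusion model."""
    def score_gradient(self, video, prompt):
        return torch.randn_like(video) * 0.01  # stand-in SDS gradient

def hybrid_sds_step(render_fn, critics, weights, prompt, optim, step):
    """One optimization step; the supervising critic rotates round-robin."""
    video = render_fn()                          # differentiable 4D render
    name, critic = critics[step % len(critics)]  # alternate the critic
    grad = weights[name] * critic.score_gradient(video, prompt)
    optim.zero_grad()
    video.backward(gradient=grad)                # inject the weighted gradient
    optim.step()

# Example usage with the toy field and renderer from the MAV3D sketch above:
# critics = [("t2i", StubCritic()), ("3d", StubCritic()), ("t2v", StubCritic())]
# weights = {"t2i": 1.0, "3d": 0.5, "t2v": 1.0}
# for step in range(300):
#     hybrid_sds_step(lambda: render_video(field), critics, weights,
#                     "a corgi playing with a ball", optim, step)
```

Alternating critics rather than summing all losses at every step is the "hybrid" idea the 4D-fy summary points at: each critic is strong on one axis of the appearance/structure/motion tradeoff, so cycling them lets each axis receive direct supervision.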