Animate124: Animating One Image to 4D Dynamic Scene
- URL: http://arxiv.org/abs/2311.14603v2
- Date: Mon, 19 Feb 2024 02:30:14 GMT
- Title: Animate124: Animating One Image to 4D Dynamic Scene
- Authors: Yuyang Zhao, Zhiwen Yan, Enze Xie, Lanqing Hong, Zhenguo Li, Gim Hee Lee
- Abstract summary: Animate124 is the first work to animate a single in-the-wild image into 3D video through textual motion descriptions.
Our method demonstrates significant advancements over existing baselines.
- Score: 108.17635645216214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Animate124 (Animate-one-image-to-4D), the first work to animate
a single in-the-wild image into 3D video through textual motion descriptions,
an underexplored problem with significant applications. Our 4D generation
leverages an advanced 4D grid dynamic Neural Radiance Field (NeRF) model,
optimized in three distinct stages using multiple diffusion priors. Initially,
a static model is optimized using the reference image, guided by 2D and 3D
diffusion priors, which serves as the initialization for the dynamic NeRF.
Subsequently, a video diffusion model is employed to learn the motion specific
to the subject. However, the object in the 3D videos tends to drift away from
the reference image over time. This drift is mainly due to the misalignment
between the text prompt and the reference image in the video diffusion model.
In the final stage, a personalized diffusion prior is therefore utilized to
address the semantic drift. As the pioneering image-text-to-4D generation
framework, our method demonstrates significant advancements over existing
baselines, evidenced by comprehensive quantitative and qualitative assessments.
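To make the three-stage schedule described in the abstract concrete, here is a minimal PyTorch-style sketch, assuming the renderer and each diffusion prior are exposed as callables that return differentiable losses. All names and signatures below are illustrative assumptions, not the authors' released code.

```python
import torch

def optimize_three_stages(nerf_params, render_static, render_video, recon_loss,
                          prior_2d, prior_3d, prior_video, prior_personalized,
                          steps=(1000, 1000, 1000), lr=1e-3):
    """Hypothetical sketch of Animate124's three-stage optimization.

    Assumptions: each prior_* callable maps a rendering to a scalar,
    differentiable SDS-style loss; render_static/render_video sample random
    cameras (and timesteps) and return differentiable renderings.
    """
    opt = torch.optim.Adam(nerf_params, lr=lr)

    # Stage 1: optimize a static model from the reference image, guided by
    # 2D and 3D diffusion priors; this initializes the dynamic NeRF.
    for _ in range(steps[0]):
        views = render_static()
        loss = recon_loss(views) + prior_2d(views) + prior_3d(views)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2: a video diffusion prior supplies subject-specific motion.
    for _ in range(steps[1]):
        clip = render_video()
        loss = prior_video(clip)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 3: add a personalized diffusion prior to correct the semantic
    # drift away from the reference image that stage 2 can introduce.
    for _ in range(steps[2]):
        clip = render_video()
        loss = prior_video(clip) + prior_personalized(clip)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The structure mirrors the abstract: stages 2 and 3 share the video prior, and stage 3 adds the personalized prior specifically to keep the animated subject aligned with the reference image.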
Related papers
- 4Dynamic: Text-to-4D Generation with Hybrid Priors [56.918589589853184]
We propose a novel method for text-to-4D generation that ensures dynamic amplitude and authenticity through direct supervision from a video prior.
Our method not only supports text-to-4D generation but also enables 4D generation from monocular videos.
arXiv Detail & Related papers (2024-07-17T16:02:55Z)
- Animate3D: Animating Any 3D Model with Multi-view Video Diffusion [47.05131487114018]
Animate3D is a novel framework for animating any static 3D model.
We introduce a framework combining reconstruction and 4D Score Distillation Sampling (4D-SDS) to leverage the multi-view video diffusion priors for animating 3D objects.
arXiv Detail & Related papers (2024-07-16T05:35:57Z)
- MotionDreamer: Exploring Semantic Video Diffusion features for Zero-Shot 3D Mesh Animation [10.263762787854862]
We propose a technique for automatic re-animation of various 3D shapes based on a motion prior extracted from a video diffusion model.
We leverage an explicit mesh-based representation compatible with existing computer-graphics pipelines.
Our time-efficient zero-shot method achieves superior performance in re-animating a diverse set of 3D shapes.
arXiv Detail & Related papers (2024-05-30T15:30:38Z)
- Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models [116.31344506738816]
We present a novel framework, Diffusion4D, for efficient and scalable 4D content generation.
We develop a 4D-aware video diffusion model capable of synthesizing orbital views of dynamic 3D assets.
Our method surpasses prior state-of-the-art techniques in terms of generation efficiency and 4D geometry consistency.
arXiv Detail & Related papers (2024-05-26T17:47:34Z)
- Align Your Gaussians: Text-to-4D with Dynamic 3D Gaussians and Composed Diffusion Models [94.07744207257653]
We focus on the underexplored text-to-4D setting and synthesize dynamic, animated 3D objects.
We combine text-to-image, text-to-video, and 3D-aware multiview diffusion models to provide feedback during 4D object optimization.
arXiv Detail & Related papers (2023-12-21T11:41:02Z)
- A Unified Approach for Text- and Image-guided 4D Scene Generation [58.658768832653834]
We propose Dream-in-4D, which features a novel two-stage approach for text-to-4D synthesis.
We show that our approach significantly advances image and motion quality, 3D consistency and text fidelity for text-to-4D generation.
Our method offers, for the first time, a unified approach for text-to-4D, image-to-4D and personalized 4D generation tasks.
arXiv Detail & Related papers (2023-11-28T15:03:53Z)
- Text-To-4D Dynamic Scene Generation [111.89517759596345]
We present MAV3D (Make-A-Video3D), a method for generating three-dimensional dynamic scenes from text descriptions.
Our approach uses a 4D dynamic Neural Radiance Field (NeRF), which is optimized for scene appearance, density, and motion consistency.
The dynamic video output generated from the provided text can be viewed from any camera location and angle, and can be composited into any 3D environment.
arXiv Detail & Related papers (2023-01-26T18:14:32Z)
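The "diffusion priors" that drive the optimization in Animate124, and the score-distillation objectives several of the papers above build on (e.g. 4D-SDS in Animate3D), are commonly implemented as a DreamFusion-style Score Distillation Sampling (SDS) gradient. The following is a generic, self-contained sketch of that gradient under the usual epsilon-prediction parameterization; the exact weighting and timestep schedule used by any particular paper are assumptions here, not their published settings.

```python
import torch

def sds_gradient(eps_model, x, cond, alphas_cumprod, w=1.0):
    """Generic SDS gradient for a batch of rendered images x in [0, 1].

    Assumption: eps_model(noisy_x, t, cond) predicts the injected noise, as in
    standard epsilon-parameterized diffusion models; the prior stays frozen.
    """
    b = x.shape[0]
    num_steps = alphas_cumprod.shape[0]
    # Sample one noise level per image, avoiding the extreme ends of the schedule.
    t = torch.randint(int(0.02 * num_steps), int(0.98 * num_steps), (b,), device=x.device)
    a_t = alphas_cumprod.to(x.device)[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x)
    noisy_x = a_t.sqrt() * x + (1.0 - a_t).sqrt() * noise   # forward diffusion q(x_t | x)
    with torch.no_grad():                                    # no gradients through the prior
        eps_pred = eps_model(noisy_x, t, cond)
    # SDS gradient: weighted residual between predicted and injected noise.
    return w * (eps_pred - noise)

# Usage sketch: inject the gradient through a surrogate dot-product loss so that
# autograd carries it back through the renderer into the NeRF parameters.
#   grad = sds_gradient(eps_model, rendered, text_embedding, alphas_cumprod)
#   (rendered * grad).sum().backward()
```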
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.