Align Your Gaussians: Text-to-4D with Dynamic 3D Gaussians and Composed
Diffusion Models
- URL: http://arxiv.org/abs/2312.13763v2
- Date: Wed, 3 Jan 2024 09:40:56 GMT
- Title: Align Your Gaussians: Text-to-4D with Dynamic 3D Gaussians and Composed
Diffusion Models
- Authors: Huan Ling, Seung Wook Kim, Antonio Torralba, Sanja Fidler, Karsten
Kreis
- Abstract summary: We focus on the underexplored text-to-4D setting and synthesize dynamic, animated 3D objects.
We combine text-to-image, text-to-video, and 3D-aware multiview diffusion models to provide feedback during 4D object optimization.
- Score: 94.07744207257653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text-guided diffusion models have revolutionized image and video generation
and have also been successfully used for optimization-based 3D object
synthesis. Here, we instead focus on the underexplored text-to-4D setting and
synthesize dynamic, animated 3D objects using score distillation methods with
an additional temporal dimension. Compared to previous work, we pursue a novel
compositional generation-based approach, and combine text-to-image,
text-to-video, and 3D-aware multiview diffusion models to provide feedback
during 4D object optimization, thereby simultaneously enforcing temporal
consistency, high-quality visual appearance and realistic geometry. Our method,
called Align Your Gaussians (AYG), leverages dynamic 3D Gaussian Splatting with
deformation fields as 4D representation. Crucial to AYG is a novel method to
regularize the distribution of the moving 3D Gaussians and thereby stabilize
the optimization and induce motion. We also propose a motion amplification
mechanism as well as a new autoregressive synthesis scheme to generate and
combine multiple 4D sequences for longer generation. These techniques allow us
to synthesize vivid dynamic scenes, outperform previous work qualitatively and
quantitatively and achieve state-of-the-art text-to-4D performance. Due to the
Gaussian 4D representation, different 4D animations can be seamlessly combined,
as we demonstrate. AYG opens up promising avenues for animation, simulation and
digital content creation as well as synthetic data generation.
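To make the described representation concrete, the sketch below pairs static 3D Gaussian centers with a small deformation MLP that displaces them over time, plus a simple regularizer that keeps the distribution of the moving centers near the static one. This is an illustrative reading of the abstract in PyTorch, not the AYG implementation; the network size, loss weight, and the omitted renderer and score-distillation terms are assumptions.

```python
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Tiny MLP mapping (Gaussian center, time) -> displacement, standing in
    for the deformation field of dynamic 3D Gaussian Splatting."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xyz: torch.Tensor, t: float) -> torch.Tensor:
        t_col = torch.full_like(xyz[:, :1], t)           # broadcast time to every Gaussian
        return self.net(torch.cat([xyz, t_col], dim=-1))

# Static 3D Gaussians reduced to their centers; scales, rotations, opacities
# and colors are omitted to keep the sketch short.
centers = nn.Parameter(torch.randn(1024, 3) * 0.1)
deform = DeformationField()
optimizer = torch.optim.Adam([centers, *deform.parameters()], lr=1e-3)

for step in range(100):
    optimizer.zero_grad()
    # Displace the static centers across 8 uniformly spaced time steps.
    moved = torch.stack([centers + deform(centers, t)
                         for t in torch.linspace(0.0, 1.0, 8).tolist()])  # (T, N, 3)

    # Placeholder for the composed score-distillation feedback from the
    # text-to-image, text-to-video and multiview models; a real loop would
    # backpropagate their gradients through a differentiable Gaussian renderer.
    sds_loss = moved.new_zeros(())

    # Illustrative distribution regularizer: keep the per-frame mean of the
    # moving centers near the static mean so motion is induced without the
    # whole object drifting (the 0.1 weight is an assumption).
    reg = (moved.mean(dim=1) - centers.mean(dim=0)).pow(2).sum()

    (sds_loss + 0.1 * reg).backward()
    optimizer.step()
```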
Related papers
- PLA4D: Pixel-Level Alignments for Text-to-4D Gaussian Splatting [26.382349137191547]
We propose text-to-video frames as explicit pixel alignment targets to generate static 3D objects and inject motion into them.
We develop Motion Alignment using a deformation network to drive changes in Gaussians and implement Reference Refinement for smooth 4D object surfaces.
Compared to previous methods, PLA4D produces synthesized outputs with better texture details in less time and effectively mitigates the Janus-faced problem.
arXiv Detail & Related papers (2024-05-30T11:23:01Z)
- Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models [116.31344506738816]
We present a novel framework, Diffusion4D, for efficient and scalable 4D content generation.
We develop a 4D-aware video diffusion model capable of synthesizing orbital views of dynamic 3D assets.
Our method surpasses prior state-of-the-art techniques in terms of generation efficiency and 4D geometry consistency.
arXiv Detail & Related papers (2024-05-26T17:47:34Z)
- SC4D: Sparse-Controlled Video-to-4D Generation and Motion Transfer [57.506654943449796]
We propose an efficient, sparse-controlled video-to-4D framework named SC4D that decouples motion and appearance.
Our method surpasses existing methods in both quality and efficiency.
We devise a novel application that seamlessly transfers motion onto a diverse array of 4D entities.
arXiv Detail & Related papers (2024-04-04T18:05:18Z)
- STAG4D: Spatial-Temporal Anchored Generative 4D Gaussians [36.83603109001298]
STAG4D is a novel framework that combines pre-trained diffusion models with dynamic 3D Gaussian splatting for high-fidelity 4D generation.
We show that our method outperforms prior 4D generation works in rendering quality, spatial-temporal consistency, and generation robustness.
arXiv Detail & Related papers (2024-03-22T04:16:33Z)
- 4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency [118.15258850780417]
This work introduces 4DGen, a novel framework for grounded 4D content creation.
We identify static 3D assets and monocular video sequences as key components in constructing the 4D content.
Our pipeline facilitates conditional 4D generation, enabling users to specify geometry (3D assets) and motion (monocular videos).
arXiv Detail & Related papers (2023-12-28T18:53:39Z)
- DreamGaussian4D: Generative 4D Gaussian Splatting [56.49043443452339]
We introduce DreamGaussian4D (DG4D), an efficient 4D generation framework that builds on Gaussian Splatting (GS).
Our key insight is that combining explicit modeling of spatial transformations with static GS makes an efficient and powerful representation for 4D generation.
Video generation methods have the potential to offer valuable spatial-temporal priors for high-quality 4D generation.
arXiv Detail & Related papers (2023-12-28T17:16:44Z)
- 4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling [91.99172731031206]
Current text-to-4D methods face a three-way tradeoff between quality of scene appearance, 3D structure, and motion.
We introduce hybrid score distillation sampling, an alternating optimization procedure that blends supervision signals from multiple pre-trained diffusion models.
arXiv Detail & Related papers (2023-11-29T18:58:05Z)
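The hybrid score distillation sampling summarized above for 4D-fy, like the composed feedback in the AYG abstract, amounts to drawing supervision from several pre-trained diffusion models within one optimization loop. Below is a minimal sketch of such an alternating schedule, assuming PyTorch; the gradient functions, weighting, and the stand-in "render" are placeholders, not any paper's released code.

```python
import torch

# Zero-valued placeholders standing in for the score-distillation gradients of
# pre-trained text-to-image, 3D-aware multiview, and text-to-video diffusion
# models; the names and round-robin schedule are illustrative assumptions.
def grad_image(render):     return torch.zeros_like(render)
def grad_multiview(render): return torch.zeros_like(render)
def grad_video(render):     return torch.zeros_like(render)

scene_params = torch.nn.Parameter(torch.randn(256, 3) * 0.1)
optimizer = torch.optim.Adam([scene_params], lr=1e-3)
supervisors = [grad_image, grad_multiview, grad_video]

for step in range(300):
    optimizer.zero_grad()
    render = scene_params[:, :2]                       # stand-in for a differentiable render
    g = supervisors[step % len(supervisors)](render)   # alternate which model supervises this step
    loss = (render * g.detach()).sum()                 # SDS-style surrogate: d(loss)/d(render) = g
    loss.backward()
    optimizer.step()
```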