Text2Control3D: Controllable 3D Avatar Generation in Neural Radiance Fields using Geometry-Guided Text-to-Image Diffusion Model
- URL: http://arxiv.org/abs/2309.03550v1
- Date: Thu, 7 Sep 2023 08:14:46 GMT
- Title: Text2Control3D: Controllable 3D Avatar Generation in Neural Radiance Fields using Geometry-Guided Text-to-Image Diffusion Model
- Authors: Sungwon Hwang, Junha Hyung, Jaegul Choo
- Abstract summary: We propose Text2Control3D, a controllable text-to-3D avatar generation method whose facial expression is controllable given a casually captured monocular video.
Our main strategy is to construct the 3D avatar in Neural Radiance Fields (NeRF) optimized with a set of controlled viewpoint-aware images.
We demonstrate the empirical results and discuss the effectiveness of our method.
- Score: 39.64952340472541
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent advances in diffusion models such as ControlNet have enabled
geometrically controllable, high-fidelity text-to-image generation. However,
none of them addresses the question of adding such controllability to
text-to-3D generation. In response, we propose Text2Control3D, a controllable
text-to-3D avatar generation method whose facial expression is controllable
given a monocular video casually captured with a hand-held camera. Our main
strategy is to construct the 3D avatar in Neural Radiance Fields (NeRF)
optimized with a set of controlled viewpoint-aware images that we generate from
ControlNet, whose condition input is the depth map extracted from the input
video. When generating the viewpoint-aware images, we utilize cross-reference
attention to inject well-controlled, referential facial expression and
appearance via cross attention. We also conduct low-pass filtering of the
Gaussian latent of the diffusion model to ameliorate the viewpoint-agnostic
texture problem observed in our empirical analysis, where the viewpoint-aware
images contain identical textures at identical pixel positions that are
incomprehensible in 3D. Finally, to train NeRF with images that are
viewpoint-aware yet not strictly consistent in geometry, our approach treats
per-image geometric variation as a deformation from a shared 3D canonical
space. Consequently, we construct the 3D avatar in the canonical space of a
deformable NeRF by learning a set of per-image deformations via a deformation
field table. We demonstrate the empirical results and discuss the
effectiveness of our method.
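
As a concrete reference for the viewpoint-aware image generation step, the sketch below shows how a depth-conditioned ControlNet can be driven from a per-view depth map using the Hugging Face diffusers library. The library choice, model identifiers, file names, and prompt are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch: generating one viewpoint-aware image with a depth-conditioned
# ControlNet, in the spirit of the abstract. Model IDs, file names, and the
# prompt are assumptions for illustration only.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

depth_map = Image.open("frame_000_depth.png")  # depth map extracted from one input-video frame
image = pipe(
    "a photorealistic avatar of a person",
    image=depth_map,                # ControlNet condition: per-view depth map
    num_inference_steps=30,
).images[0]
image.save("view_000.png")
```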
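The abstract also mentions low-pass filtering of the Gaussian latent to suppress viewpoint-agnostic textures. A minimal sketch of such a filter is shown below, assuming an FFT-based circular low-pass mask and a hand-picked cutoff; both the cutoff and how the filtered latent is reused across viewpoints are assumptions, since the abstract only states that low-pass filtering is applied.

```python
# Hedged sketch: frequency-domain low-pass filtering of a diffusion latent.
import torch

def low_pass_filter(latent: torch.Tensor, cutoff: float = 0.25) -> torch.Tensor:
    """Keep only the low spatial frequencies of a (C, H, W) latent."""
    c, h, w = latent.shape
    freq = torch.fft.fftshift(torch.fft.fft2(latent), dim=(-2, -1))
    # Centered circular low-pass mask; radius is a fraction of the spectrum size.
    yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    mask = ((xx**2 + yy**2).sqrt() <= cutoff).to(latent.dtype)
    filtered = torch.fft.ifft2(torch.fft.ifftshift(freq * mask, dim=(-2, -1)))
    return filtered.real

shared_latent = torch.randn(4, 64, 64)     # Gaussian latent of the diffusion model
low_freq = low_pass_filter(shared_latent)  # low-frequency component of the latent
```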
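Finally, the deformable-NeRF formulation looks up a per-image deformation from a deformation field table and queries a shared canonical space. The sketch below illustrates that idea; the embedding-based table, the network sizes, and the offset parameterization are assumptions made for illustration, not the paper's exact architecture.

```python
# Hedged sketch: canonical NeRF queried through a per-image deformation that is
# looked up from a "deformation field table" (one learnable code per image).
import torch
import torch.nn as nn

class DeformableNeRF(nn.Module):
    def __init__(self, num_images: int, code_dim: int = 32):
        super().__init__()
        # Deformation field table: one learnable code per training image.
        self.deform_codes = nn.Embedding(num_images, code_dim)
        self.deform_mlp = nn.Sequential(
            nn.Linear(3 + code_dim, 128), nn.ReLU(),
            nn.Linear(128, 3),            # offset into the canonical space
        )
        self.canonical_mlp = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 4),            # RGB + density in canonical space
        )

    def forward(self, xyz: torch.Tensor, image_idx: torch.Tensor) -> torch.Tensor:
        code = self.deform_codes(image_idx).expand(xyz.shape[0], -1)
        offset = self.deform_mlp(torch.cat([xyz, code], dim=-1))
        return self.canonical_mlp(xyz + offset)   # query the shared canonical space

model = DeformableNeRF(num_images=100)
pts = torch.rand(1024, 3)                         # sampled ray points
out = model(pts, torch.tensor([7]))               # deformation of training image 7
```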
Related papers
- Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting [9.383423119196408]
We introduce Multi-view ControlNet (MVControl), a novel neural network architecture designed to enhance existing multi-view diffusion models.
MVControl is able to offer 3D diffusion guidance for optimization-based 3D generation.
In pursuit of efficiency, we adopt 3D Gaussians as our representation instead of the commonly used implicit representations.
arXiv Detail & Related papers (2024-03-15T02:57:20Z)
- Points-to-3D: Bridging the Gap between Sparse Points and Shape-Controllable Text-to-3D Generation [16.232803881159022]
We propose a flexible framework of Points-to-3D to bridge the gap between sparse yet freely available 3D points and realistic shape-controllable 3D generation.
The core idea of Points-to-3D is to introduce controllable sparse 3D points to guide the text-to-3D generation.
arXiv Detail & Related papers (2023-07-26T02:16:55Z)
- Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors [104.79392615848109]
We present Magic123, a two-stage coarse-to-fine approach for high-quality, textured 3D meshes from a single unposed image.
In the first stage, we optimize a neural radiance field to produce a coarse geometry.
In the second stage, we adopt a memory-efficient differentiable mesh representation to yield a high-resolution mesh with a visually appealing texture.
arXiv Detail & Related papers (2023-06-30T17:59:08Z)
- GazeNeRF: 3D-Aware Gaze Redirection with Neural Radiance Fields [100.53114092627577]
Existing gaze redirection methods operate on 2D images and struggle to generate 3D consistent results.
We build on the intuition that the face region and eyeballs are separate 3D structures that move in a coordinated yet independent fashion.
arXiv Detail & Related papers (2022-12-08T13:19:11Z)
- CGOF++: Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields [52.14985242487535]
We propose a new conditional 3D face synthesis framework, which enables 3D controllability over generated face images.
At its core is a conditional Generative Occupancy Field (cGOF++) that effectively enforces the shape of the generated face to conform to a given 3D Morphable Model (3DMM) mesh.
Experiments validate the effectiveness of the proposed method and show more precise 3D controllability than state-of-the-art 2D-based controllable face synthesis methods.
arXiv Detail & Related papers (2022-11-23T19:02:50Z)
- Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars [36.4402388864691]
3D-aware generative adversarial networks (GANs) synthesize high-fidelity and multi-view-consistent facial images using only collections of single-view 2D imagery.
Recent efforts incorporate 3D Morphable Face Model (3DMM) to describe deformation in generative radiance fields either explicitly or implicitly.
We propose a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images.
arXiv Detail & Related papers (2022-11-21T06:40:46Z)
- Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields [40.2714783162419]
We propose a new conditional 3D face synthesis framework, which enables 3D controllability over generated face images.
At its core is a conditional Generative Occupancy Field (cGOF) that effectively enforces the shape of the generated face to commit to a given 3D Morphable Model (3DMM) mesh.
Experiments validate the effectiveness of the proposed method, which is able to generate high-fidelity face images.
arXiv Detail & Related papers (2022-06-16T17:58:42Z)
- Disentangled3D: Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images [94.49117671450531]
State-of-the-art 3D generative models are GANs which use neural 3D volumetric representations for synthesis.
In this paper, we design a 3D GAN which can learn a disentangled model of objects, just from monocular observations.
arXiv Detail & Related papers (2022-03-29T22:03:18Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- 3D-GIF: 3D-Controllable Object Generation via Implicit Factorized Representations [31.095503715696722]
We propose factorized representations that are view-independent and light-disentangled, together with training schemes using randomly sampled light conditions.
We demonstrate the superiority of our method by visualizing factorized representations, re-lighted images, and albedo-textured meshes.
This is the first work that extracts albedo-textured meshes with unposed 2D images without any additional labels or assumptions.
arXiv Detail & Related papers (2022-03-12T15:23:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.