LC-NeRF: Local Controllable Face Generation in Neural Radiance Field
- URL: http://arxiv.org/abs/2302.09486v1
- Date: Sun, 19 Feb 2023 05:50:08 GMT
- Title: LC-NeRF: Local Controllable Face Generation in Neural Radiance Field
- Authors: Wenyang Zhou, Lu Yuan, Shuyu Chen, Lin Gao, Shimin Hu
- Abstract summary: LC-NeRF is composed of a Local Region Generators Module and a Spatial-Aware Fusion Module.
Our method provides better local editing than state-of-the-art face editing methods.
Our method also performs well in downstream tasks, such as text-driven facial image editing.
- Score: 55.54131820411912
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D face generation has achieved high visual quality and 3D consistency thanks
to the development of neural radiance fields (NeRF). Recently, several methods have been
proposed to generate and edit 3D faces with NeRF representations, achieving good results
in decoupling geometry and texture. However, the latent codes of these generative models
affect the whole face, so modifications to these codes change the entire face. Users
usually edit a local region when editing faces and do not want other regions to be
affected. Since changes to the latent code alter the global generation result, these
methods do not allow fine-grained control of local facial regions. To improve local
controllability in NeRF-based face editing, we propose LC-NeRF, which is composed of a
Local Region Generators Module and a Spatial-Aware Fusion Module, enabling control of
both the geometry and the texture of local facial regions. Qualitative and quantitative
evaluations show that our method provides better local editing than state-of-the-art
face editing methods. Our method also performs well in downstream tasks, such as
text-driven facial image editing.
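The abstract describes an architecture with per-region generators whose outputs are blended by a spatial-aware fusion step, so that editing one region's latent code leaves other regions largely untouched. The following is a minimal sketch of that general idea in NumPy; all names (`local_generator`, `spatial_fusion`, the latent layout, the softmax weighting) are illustrative assumptions, not the authors' actual API or architecture.

```python
import numpy as np

# Hypothetical sketch of the local-generators + spatial fusion idea
# (names and shapes are assumptions, not the LC-NeRF implementation):
# K local region generators each predict density and color for sampled
# 3D points; a fusion step blends them with per-point region weights.

rng = np.random.default_rng(0)

def local_generator(points, latent):
    # Stand-in for a per-region MLP: maps (points, latent) -> (density, rgb).
    h = np.tanh(points @ latent[:3].reshape(3, 1) + latent[3])
    density = np.maximum(h, 0.0)                   # (N, 1), non-negative
    rgb = 0.5 * (np.tanh(h + latent[4:7]) + 1.0)   # (N, 3), in [0, 1]
    return density, rgb

def spatial_fusion(points, latents, region_logits):
    # region_logits: (N, K) scores per point, e.g. from a mask branch;
    # softmax turns them into blending weights over the K regions.
    weights = np.exp(region_logits - region_logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    density = np.zeros((len(points), 1))
    rgb = np.zeros((len(points), 3))
    for k, z in enumerate(latents):
        d_k, c_k = local_generator(points, z)
        w = weights[:, k:k + 1]
        density += w * d_k
        rgb += w * c_k
    return density, rgb

points = rng.normal(size=(8, 3))                   # sampled 3D points
latents = [rng.normal(size=7) for _ in range(4)]   # one latent per region
logits = rng.normal(size=(8, 4))
density, rgb = spatial_fusion(points, latents, logits)
```

In this toy setup, because each point's output is dominated by the regions with high fusion weight, changing the latent of one region only meaningfully affects points weighted toward that region, which is the locality property the abstract claims.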
Related papers
- Revealing Directions for Text-guided 3D Face Editing [52.85632020601518]
3D face editing is a significant task in multimedia, aimed at the manipulation of 3D face models across various control signals.
We present Face Clan, a text-general approach for generating and manipulating 3D faces based on arbitrary attribute descriptions.
Our method offers a precisely controllable manipulation method, allowing users to intuitively customize regions of interest with the text description.
arXiv Detail & Related papers (2024-10-07T12:04:39Z)
- ShapeFusion: A 3D diffusion model for localized shape editing [37.82690898932135]
We propose an effective diffusion masking training strategy that, by design, facilitates localized manipulation of any shape region.
Compared to the current state of the art, our method yields more interpretable shape manipulations than methods that operate on latent codes.
arXiv Detail & Related papers (2024-03-28T18:50:19Z)
- MaTe3D: Mask-guided Text-based 3D-aware Portrait Editing [61.014328598895524]
We propose MaTe3D: mask-guided text-based 3D-aware portrait editing.
A new SDF-based 3D generator learns local and global representations with the proposed SDF and density consistency losses.
Conditional Distillation on Geometry and Texture (CDGT) mitigates visual ambiguity and avoids mismatch between texture and geometry.
arXiv Detail & Related papers (2023-12-12T03:04:08Z)
- Text-Guided 3D Face Synthesis -- From Generation to Editing [53.86765812392627]
We propose a unified text-guided framework from face generation to editing.
We employ a fine-tuned texture diffusion model to enhance texture quality in both RGB and YUV space.
We propose a self-guided consistency weight strategy to improve editing efficacy while preserving consistency.
arXiv Detail & Related papers (2023-12-01T06:36:23Z)
- Fine-Grained Face Swapping via Regional GAN Inversion [18.537407253864508]
We present a novel paradigm for high-fidelity face swapping that faithfully preserves the desired subtle geometry and texture details.
We propose a framework that is based on the explicit disentanglement of the shape and texture of facial components.
At the core of our system lies a novel Regional GAN Inversion (RGI) method, which allows the explicit disentanglement of shape and texture.
arXiv Detail & Related papers (2022-11-25T12:40:45Z)
- NeRFFaceEditing: Disentangled Face Editing in Neural Radiance Fields [40.543998582101146]
We introduce NeRFFaceEditing, which enables editing and decoupling geometry and appearance in neural radiance fields.
Our method allows users to edit via semantic masks with decoupled control of geometry and appearance.
Both qualitative and quantitative evaluations show the superior geometry and appearance control abilities of our method compared to existing and alternative solutions.
arXiv Detail & Related papers (2022-11-15T08:11:39Z)
- Generative Neural Articulated Radiance Fields [104.9224190002448]
We develop a 3D GAN framework that learns to generate radiance fields of human bodies in a canonical pose and warp them using an explicit deformation field into a desired body pose or facial expression.
We show that our deformation-aware training procedure significantly improves the quality of generated bodies or faces when editing their poses or facial expressions.
arXiv Detail & Related papers (2022-06-28T22:49:42Z)
- FEAT: Face Editing with Attention [70.89233432407305]
We build on the StyleGAN generator and present a method that explicitly encourages face manipulation to focus on the intended regions.
During the generation of the edited image, the attention map serves as a mask that guides a blending between the original features and the modified ones.
arXiv Detail & Related papers (2022-02-06T06:07:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.