3D Dense Geometry-Guided Facial Expression Synthesis by Adversarial
Learning
- URL: http://arxiv.org/abs/2009.14798v1
- Date: Wed, 30 Sep 2020 17:12:35 GMT
- Title: 3D Dense Geometry-Guided Facial Expression Synthesis by Adversarial
Learning
- Authors: Rumeysa Bodur, Binod Bhattarai, Tae-Kyun Kim
- Abstract summary: We propose a novel framework to exploit 3D dense (depth and surface normals) information for expression manipulation.
We use an off-the-shelf state-of-the-art 3D reconstruction model to estimate the depth and create a large-scale RGB-Depth dataset.
Our experiments demonstrate that the proposed method outperforms competitive baselines and existing methods by a large margin.
- Score: 54.24887282693925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Manipulating facial expressions is a challenging task due to fine-grained
shape changes produced by facial muscles and the lack of input-output pairs for
supervised learning. Unlike previous methods using Generative Adversarial
Networks (GAN), which rely on cycle-consistency loss or sparse geometry
(landmarks) loss for expression synthesis, we propose a novel GAN framework to
exploit 3D dense (depth and surface normals) information for expression
manipulation. However, a large-scale dataset containing RGB images with
expression annotations and their corresponding depth maps is not available. To
this end, we propose to use an off-the-shelf state-of-the-art 3D reconstruction
model to estimate the depth and create a large-scale RGB-Depth dataset after a
manual data clean-up process. We utilise this dataset to minimise a novel
depth consistency loss via adversarial learning (note that ground-truth depth
maps are not available for generated face images) and a depth categorical loss
on synthetic data at the discriminator. In addition, to improve generalisation
and reduce the bias of the depth parameters, we propose a novel confidence
regulariser on the discriminator side of the framework. We performed extensive
quantitative and qualitative evaluations on two publicly available,
challenging facial expression benchmarks: AffectNet and RaFD. Our experiments
demonstrate that the proposed method outperforms competitive baselines and
existing methods by a large margin.
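A minimal PyTorch sketch of the discriminator-side objectives described in the
abstract, assuming a pair discriminator with an adversarial head and an
expression-classification head. The architecture, the entropy form of the
confidence regulariser, and all names (PairDiscriminator, lambda_conf) are
illustrative assumptions, not the authors' implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PairDiscriminator(nn.Module):
    """Judges (RGB, depth) pairs; hypothetical architecture."""
    def __init__(self, num_classes=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1),   # RGB (3) + depth (1)
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.adv_head = nn.Linear(128, 1)            # real/fake logit
        self.cls_head = nn.Linear(128, num_classes)  # expression logits

    def forward(self, rgb, depth):
        h = self.backbone(torch.cat([rgb, depth], dim=1))
        return self.adv_head(h), self.cls_head(h)

def discriminator_loss(disc, real_rgb, real_depth, fake_rgb, fake_depth,
                       expr_labels, lambda_conf=0.1):
    # Depth consistency as adversarial learning: real images paired with
    # their estimated depth vs. generated images paired with depth (no
    # ground-truth depth exists for generated faces).
    adv_real, _ = disc(real_rgb, real_depth)
    adv_fake, cls_fake = disc(fake_rgb, fake_depth)
    loss_adv = (
        F.binary_cross_entropy_with_logits(adv_real, torch.ones_like(adv_real))
        + F.binary_cross_entropy_with_logits(adv_fake, torch.zeros_like(adv_fake))
    )

    # Depth categorical loss: expression classification on synthetic pairs.
    loss_cls = F.cross_entropy(cls_fake, expr_labels)

    # Confidence regulariser (assumed form): an entropy bonus that penalises
    # over-confident class posteriors on the discriminator.
    probs = F.softmax(cls_fake, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    loss_conf = -lambda_conf * entropy

    return loss_adv + loss_cls + loss_conf

The generator would minimise the opposing adversarial term on its (generated
RGB, depth) pairs; the equal weighting of the three terms above is likewise an
assumption.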
Related papers
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- PaintHuman: Towards High-fidelity Text-to-3D Human Texturing via Denoised Score Distillation [89.09455618184239]
Recent advances in text-to-3D human generation have been groundbreaking.
We propose a model called PaintHuman to address the challenges from two aspects.
We use the depth map as guidance to ensure realistic, semantically aligned textures.
arXiv Detail & Related papers (2023-10-14T00:37:16Z)
- Improving Neural Indoor Surface Reconstruction with Mask-Guided Adaptive Consistency Constraints [0.6749750044497732]
We propose a two-stage training process, decouple view-dependent and view-independent colors, and leverage two novel consistency constraints to enhance detail reconstruction performance without requiring extra priors.
Experiments on synthetic and real-world datasets show the capability of reducing the interference from prior estimation errors.
arXiv Detail & Related papers (2023-09-18T13:05:23Z)
- Robust Geometry-Preserving Depth Estimation Using Differentiable Rendering [93.94371335579321]
We propose a learning framework that trains models to predict geometry-preserving depth without requiring extra data or annotations.
Comprehensive experiments underscore our framework's superior generalization capabilities.
Our innovative loss functions empower the model to autonomously recover domain-specific scale-and-shift coefficients.
arXiv Detail & Related papers (2023-09-18T12:36:39Z)
- Self-supervised Human Mesh Recovery with Cross-Representation Alignment [20.69546341109787]
Self-supervised human mesh recovery methods have poor generalizability due to limited availability and diversity of 3D-annotated benchmark datasets.
We propose cross-representation alignment utilizing the complementary information from the robust but sparse representation (2D keypoints).
This adaptive cross-representation alignment explicitly learns from the deviations and captures complementary information: richness from the dense representation and robustness from the sparse representation.
arXiv Detail & Related papers (2022-09-10T04:47:20Z)
- Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation [70.32536356351706]
We introduce MRP-Net, which comprises a common deep network backbone with two output heads corresponding to two diverse configurations.
We derive suitable measures to quantify prediction uncertainty at both pose and joint level.
We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-03-29T07:14:58Z)
- Sparse Depth Completion with Semantic Mesh Deformation Optimization [4.03103540543081]
We propose a neural network with post-optimization, which takes an RGB image and sparse depth samples as input and predicts the complete depth map.
Our method consistently outperforms existing work on both indoor and outdoor datasets.
arXiv Detail & Related papers (2021-12-10T13:01:06Z)
- Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z)
- Virtual Normal: Enforcing Geometric Constraints for Accurate and Robust Depth Prediction [87.08227378010874]
We show the importance of the high-order 3D geometric constraints for depth prediction.
By designing a loss term that enforces a simple geometric constraint, we significantly improve the accuracy and robustness of monocular depth estimation.
We show state-of-the-art results for learning metric depth on NYU Depth-V2 and KITTI.
arXiv Detail & Related papers (2021-03-07T00:08:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.