Eye Motion Matters for 3D Face Reconstruction
- URL: http://arxiv.org/abs/2401.09677v1
- Date: Thu, 18 Jan 2024 01:47:55 GMT
- Title: Eye Motion Matters for 3D Face Reconstruction
- Authors: Xuan Wang, Mengyuan Liu
- Abstract summary: We introduce an Eye Landmark Adjustment Module, complemented by a Local Dynamic Loss, to capture the dynamic features of the eye region.
Our module allows for flexible adjustment of landmarks, resulting in accurate recreation of various eye states.
- Score: 13.633246294557765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in single-image 3D face reconstruction have shown remarkable
progress in various applications. Nevertheless, prevailing techniques tend to
prioritize the global facial contour and expression, often neglecting the
nuanced dynamics of the eye region. In response, we introduce an Eye Landmark
Adjustment Module, complemented by a Local Dynamic Loss, designed to capture
the dynamic features of the eye region. Our module allows for flexible
adjustment of landmarks, resulting in accurate recreation of various eye
states. In this paper, we present a comprehensive evaluation of our approach,
conducting extensive experiments on two datasets. The results underscore the
superior performance of our approach, highlighting its significant
contributions in addressing this particular challenge.
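The abstract does not include code, but to make the loss design concrete, here is a minimal sketch of what a landmark loss with an added eye-state term could look like. The 68-point landmark layout, the openness measure, and the weight lambda_eye are illustrative assumptions, not the authors' implementation.

```python
import torch

# Eye landmarks in the common 68-point iBUG layout (an assumption:
# the paper's landmark set may differ).
EYE_IDX = list(range(36, 48))

def eye_openness(lm: torch.Tensor) -> torch.Tensor:
    """Crude per-eye openness: vertical lid gaps. lm: (B, 68, 2)."""
    left = (lm[:, 41, 1] - lm[:, 37, 1]).abs() + (lm[:, 40, 1] - lm[:, 38, 1]).abs()
    right = (lm[:, 47, 1] - lm[:, 43, 1]).abs() + (lm[:, 46, 1] - lm[:, 44, 1]).abs()
    return torch.stack([left, right], dim=1)  # (B, 2)

def local_dynamic_loss(pred, target, lambda_eye=10.0):
    """Global landmark term plus an upweighted eye term and an eye-state
    term penalizing mismatched openness (e.g. closed reconstructed as open)."""
    global_term = (pred - target).square().mean()
    eye_term = (pred[:, EYE_IDX] - target[:, EYE_IDX]).square().mean()
    state_term = (eye_openness(pred) - eye_openness(target)).abs().mean()
    return global_term + lambda_eye * (eye_term + state_term)
```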
Related papers
- Advances in Radiance Field for Dynamic Scene: From Neural Field to Gaussian Field [85.12359852781216]
This survey presents a systematic analysis of over 200 papers focused on dynamic scene representation using radiance fields. We organize diverse methodological approaches under a unified representational framework, concluding with a critical examination of persistent challenges and promising research directions.
arXiv Detail & Related papers (2025-05-15T07:51:08Z) - Dynamic Scene Reconstruction: Recent Advance in Real-time Rendering and Streaming [7.250878248686215]
Representing and rendering dynamic scenes from 2D images is a fundamental yet challenging problem in computer vision and graphics.
This survey provides a comprehensive review of the evolution and advancements in dynamic scene representation and rendering.
We systematically summarize existing approaches, categorize them according to their core principles, compile relevant datasets, compare the performance of various methods on these benchmarks, and explore the challenges and future research directions in this rapidly evolving field.
arXiv Detail & Related papers (2025-03-11T08:29:41Z) - MonST3R: A Simple Approach for Estimating Geometry in the Presence of Motion [118.74385965694694]
We present Motion DUSt3R (MonST3R), a novel geometry-first approach that directly estimates per-timestep geometry from dynamic scenes.
By simply estimating a pointmap for each timestep, we can effectively adapt DUSt3R's representation, previously only used for static scenes, to dynamic scenes.
We show that by posing the problem as a fine-tuning task, identifying several suitable datasets, and strategically training the model on this limited data, we can surprisingly enable the model to handle dynamics.
arXiv Detail & Related papers (2024-10-04T18:00:07Z) - Shape of Motion: 4D Reconstruction from a Single Video [51.04575075620677]
We introduce a method capable of reconstructing generic dynamic scenes, featuring explicit, full-sequence-long 3D motion.
We exploit the low-dimensional structure of 3D motion by representing scene motion with a compact set of SE3 motion bases.
Our method achieves state-of-the-art performance for both long-range 3D/2D motion estimation and novel view synthesis on dynamic scenes.
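As a rough illustration of the motion-basis idea (not the authors' code; the blending here is linear, akin to linear blend skinning, and the paper's exact formulation may differ):

```python
import torch

def blend_se3_bases(points, weights, R_bases, t_bases):
    """
    points:  (P, 3)        canonical 3D points
    weights: (P, K)        per-point basis weights (rows sum to 1)
    R_bases: (T, K, 3, 3)  rotation of each SE(3) basis per timestep
    t_bases: (T, K, 3)     translation of each SE(3) basis per timestep
    returns: (T, P, 3)     point positions over time
    """
    # Rigidly transform every point by every basis: (T, K, P, 3)
    moved = torch.einsum('tkij,pj->tkpi', R_bases, points) + t_bases[:, :, None, :]
    # Blend the K candidate positions with per-point weights: (T, P, 3)
    return torch.einsum('pk,tkpi->tpi', weights, moved)
```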
arXiv Detail & Related papers (2024-07-18T17:59:08Z) - Modeling Ambient Scene Dynamics for Free-view Synthesis [31.233859111566613]
We introduce a novel method for dynamic free-view synthesis of ambient scenes from a monocular capture.
Our method builds upon the recent advancements in 3D Gaussian Splatting (3DGS) that can faithfully reconstruct complex static scenes.
arXiv Detail & Related papers (2024-06-13T17:59:11Z) - Motion-aware 3D Gaussian Splatting for Efficient Dynamic Scene Reconstruction [89.53963284958037]
We propose a novel motion-aware enhancement framework for dynamic scene reconstruction.
Specifically, we first establish a correspondence between 3D Gaussian movements and pixel-level flow.
For the prevalent deformation-based paradigm, which presents a harder optimization problem, a transient-aware deformation auxiliary module is proposed.
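A minimal sketch of what such a Gaussian-to-flow correspondence could look like (assumed names and a simple pinhole model; flow_at_mu would come from an off-the-shelf 2D flow estimator, sampled at the projected centers):

```python
import torch

def project(points, K, w2c):
    """Pinhole projection. points: (N, 3) world; K: (3, 3); w2c: (4, 4)."""
    cam = (w2c[:3, :3] @ points.T + w2c[:3, 3:4]).T       # camera space, (N, 3)
    uv = (K @ cam.T).T                                    # (N, 3)
    return uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)         # pixel coords, (N, 2)

def flow_consistency_loss(mu_t, mu_t1, K, w2c_t, w2c_t1, flow_at_mu):
    """Gaussian-center motion, projected to the image plane, should match
    optical flow sampled at the centers' pixel locations at time t."""
    flow_3dgs = project(mu_t1, K, w2c_t1) - project(mu_t, K, w2c_t)
    return (flow_3dgs - flow_at_mu).abs().mean()
```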
arXiv Detail & Related papers (2024-03-18T03:46:26Z) - Single-shot Tomography of Discrete Dynamic Objects [1.1407697960152927]
We present a novel method for the reconstruction of high-resolution temporal images in dynamic tomographic imaging.
The implications of this research extend to improved visualization and analysis of dynamic processes in tomographic imaging.
arXiv Detail & Related papers (2023-11-09T10:52:02Z) - Neural Point-based Volumetric Avatar: Surface-guided Neural Points for
Efficient and Photorealistic Volumetric Head Avatar [62.87222308616711]
We propose Neural Point-based Volumetric Avatar, a method that adopts the neural point representation and the neural volume rendering process.
Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map.
By design, our method is better equipped to handle topologically changing regions and thin structures while also ensuring accurate expression control when animating avatars.
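One plausible reading of the surface-guided constraint, sketched below under the assumption of a scalar displacement along surface normals (the paper's map may be vector-valued):

```python
import torch
import torch.nn.functional as F

def displace_surface_points(base_pts, normals, disp_map, uv):
    """
    base_pts: (N, 3)       points on the coarse expression surface
    normals:  (N, 3)       surface normals at those points
    disp_map: (1, 1, H, W) displacement map (network output)
    uv:       (N, 2)       UV coordinates of the points in [-1, 1]
    returns:  (N, 3)       neural point positions near the target surface
    """
    grid = uv.view(1, 1, -1, 2)                              # (1, 1, N, 2)
    d = F.grid_sample(disp_map, grid, align_corners=True)    # (1, 1, 1, N)
    return base_pts + d.view(-1, 1) * normals
```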
arXiv Detail & Related papers (2023-07-11T03:40:10Z) - Decoupling Dynamic Monocular Videos for Dynamic View Synthesis [50.93409250217699]
We tackle the challenge of dynamic view synthesis from dynamic monocular videos in an unsupervised fashion.
Specifically, we decouple the motion of the dynamic objects into object motion and camera motion, respectively regularized by the proposed unsupervised surface consistency and patch-based multi-view constraints.
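A sketch of the decoupling (assumed names; depth, intrinsics, and relative pose would come from the model): the flow a static scene would induce under camera motion is computed from depth and pose, and the residual against observed flow is attributed to object motion.

```python
import torch

def camera_induced_flow(depth, uv, K, K_inv, T_rel):
    """Flow at pixels uv caused purely by camera motion, given depth.
    depth: (N,); uv: (N, 2); T_rel: (4, 4) pose from frame t to t+1."""
    ones = torch.ones(uv.shape[0], 1)
    rays = (K_inv @ torch.cat([uv, ones], dim=1).T).T       # (N, 3)
    pts = rays * depth[:, None]                             # 3D points, frame t
    pts1 = (T_rel[:3, :3] @ pts.T + T_rel[:3, 3:4]).T       # frame t+1
    uv1 = (K @ pts1.T).T
    uv1 = uv1[:, :2] / uv1[:, 2:3].clamp(min=1e-6)
    return uv1 - uv

# Decoupling: object_flow = observed_flow - camera_induced_flow(...)
```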
arXiv Detail & Related papers (2023-04-04T11:25:44Z) - MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface
Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views. However, their performance drops significantly for larger, more complex scenes and for scenes captured from sparse viewpoints.
This is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
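A minimal sketch of folding monocular cues into the loss (assumed weights; the scale/shift alignment is needed because monocular depth is only defined up to an affine ambiguity):

```python
import torch
import torch.nn.functional as F

def align_scale_shift(d_pred, d_mono):
    """Least-squares scale/shift aligning monocular depth to rendered depth."""
    A = torch.stack([d_mono, torch.ones_like(d_mono)], dim=1)   # (N, 2)
    x = torch.linalg.lstsq(A, d_pred.unsqueeze(1)).solution     # (2, 1)
    return d_mono * x[0] + x[1]

def monocular_cue_loss(rgb_pred, rgb_gt, d_pred, d_mono, n_pred, n_mono,
                       w_d=0.1, w_n=0.05):
    """RGB reconstruction loss plus depth and normal consistency against
    cues from an off-the-shelf monocular predictor (weights are guesses)."""
    l_rgb = (rgb_pred - rgb_gt).abs().mean()
    l_depth = (align_scale_shift(d_pred, d_mono) - d_pred).square().mean()
    l_normal = (1 - F.cosine_similarity(n_pred, n_mono, dim=-1)).mean()
    return l_rgb + w_d * l_depth + w_n * l_normal
```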
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.