MEGANE: Morphable Eyeglass and Avatar Network
- URL: http://arxiv.org/abs/2302.04868v1
- Date: Thu, 9 Feb 2023 18:59:49 GMT
- Title: MEGANE: Morphable Eyeglass and Avatar Network
- Authors: Junxuan Li, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Hongdong Li, Jason Saragih
- Abstract summary: We propose a 3D compositional morphable model of eyeglasses.
We employ a hybrid representation that combines surface geometry and a volumetric representation.
Our approach models global light transport effects, such as casting shadows between faces and glasses.
- Score: 83.65790119755053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Eyeglasses play an important role in the perception of identity. Authentic
virtual representations of faces can benefit greatly from their inclusion.
However, modeling the geometric and appearance interactions of glasses and the
face of virtual representations of humans is challenging. Glasses and faces
affect each other's geometry at their contact points, and also induce
appearance changes due to light transport. Most existing approaches do not
capture these physical interactions since they model eyeglasses and faces
independently. Others attempt to resolve interactions as a 2D image synthesis
problem and suffer from view and temporal inconsistencies. In this work, we
propose a 3D compositional morphable model of eyeglasses that accurately
incorporates high-fidelity geometric and photometric interaction effects. To
support the large variation in eyeglass topology efficiently, we employ a
hybrid representation that combines surface geometry and a volumetric
representation. Unlike volumetric approaches, our model naturally retains
correspondences across glasses, and hence explicit modification of geometry,
such as lens insertion and frame deformation, is greatly simplified. In
addition, our model is relightable under point lights and natural illumination,
supporting high-fidelity rendering of various frame materials, including
translucent plastic and metal within a single morphable model. Importantly, our
approach models global light transport effects, such as casting shadows between
faces and glasses. Our morphable model for eyeglasses can also be fit to novel
glasses via inverse rendering. We compare our approach to state-of-the-art
methods and demonstrate significant quality improvements.
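The abstract notes that the morphable model can be fit to novel glasses via inverse rendering. As a minimal illustrative sketch (not the paper's implementation), the idea is to optimize a latent glasses code by gradient descent on a photometric loss between rendered and observed images; here a toy linear operator stands in for the paper's neural renderer, and all names (`render`, `z`, `A`) are hypothetical:

```python
import numpy as np

# Toy inverse-rendering fit: recover a latent glasses code from an observed
# image by minimizing the photometric error with gradient descent.
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 8))      # toy linear "renderer": latent -> 64 pixels

def render(z):
    return A @ z                  # stand-in for a differentiable renderer

z_true = rng.normal(size=8)       # latent code of the "novel glasses"
target = render(z_true)           # observed image of those glasses

z = np.zeros(8)                   # initial latent estimate
lr = 0.01
for _ in range(500):
    residual = render(z) - target # photometric error
    grad = A.T @ residual         # gradient of 0.5 * ||residual||^2
    z -= lr * grad                # gradient-descent update
```

In the actual method the renderer is a learned, relightable neural model and the optimization would also account for geometry and light transport; this sketch only shows the analysis-by-synthesis loop shape.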
Related papers
- GaussianHeads: End-to-End Learning of Drivable Gaussian Head Avatars from Coarse-to-fine Representations [54.94362657501809]
We propose a new method to generate highly dynamic and deformable human head avatars from multi-view imagery in real time.
At the core of our method is a hierarchical representation of head models that captures the complex dynamics of facial expressions and head movements.
We train this coarse-to-fine facial avatar model along with the head pose as a learnable parameter in an end-to-end framework.
arXiv Detail & Related papers (2024-09-18T13:05:43Z)
- Mesh deformation-based single-view 3D reconstruction of thin eyeglasses frames with differentiable rendering [6.693246356011004]
We propose the first mesh deformation-based reconstruction framework for recovering high-precision 3D full-frame eyeglasses models from a single RGB image.
Experimental results on both the synthetic dataset and real images demonstrate the effectiveness of the proposed algorithm.
arXiv Detail & Related papers (2024-08-10T01:40:57Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Decaf: Monocular Deformation Capture for Face and Hand Interactions [77.75726740605748]
This paper introduces the first method that allows tracking human hands interacting with human faces in 3D from single monocular RGB videos.
We model hands as articulated objects inducing non-rigid face deformations during an active interaction.
Our method relies on a new hand-face motion and interaction capture dataset with realistic face deformations acquired with a markerless multi-view camera system.
arXiv Detail & Related papers (2023-09-28T17:59:51Z)
- FitMe: Deep Photorealistic 3D Morphable Model Avatars [119.03325450951074]
We introduce FitMe, a facial reflectance model and a differentiable rendering pipeline.
FitMe achieves state-of-the-art reflectance acquisition and identity preservation on single "in-the-wild" facial images.
In contrast with recent implicit avatar reconstructions, FitMe requires only one minute and produces relightable mesh and texture-based avatars.
arXiv Detail & Related papers (2023-05-16T17:42:45Z)
- NEMTO: Neural Environment Matting for Novel View and Relighting Synthesis of Transparent Objects [28.62468618676557]
We propose NEMTO, the first end-to-end neural rendering pipeline to model 3D transparent objects.
With 2D images of the transparent object as input, our method is capable of high-quality novel view and relighting synthesis.
arXiv Detail & Related papers (2023-03-21T15:50:08Z)
- EyeNeRF: A Hybrid Representation for Photorealistic Synthesis, Animation and Relighting of Human Eyes [0.0]
We present a novel geometry and appearance representation that enables high-fidelity capture and animation, view synthesis and relighting of the eye region using only a sparse set of lights and cameras.
We show that for high-resolution close-ups of the eye, our model can synthesize high-fidelity animated gaze from novel views under unseen illumination conditions.
arXiv Detail & Related papers (2022-06-16T20:05:04Z)
- I M Avatar: Implicit Morphable Head Avatars from Videos [68.13409777995392]
We propose IMavatar, a novel method for learning implicit head avatars from monocular videos.
Inspired by the fine-grained control mechanisms afforded by conventional 3DMMs, we represent the expression- and pose-related deformations via learned blendshapes and skinning fields.
We show quantitatively and qualitatively that our method improves geometry and covers a more complete expression space compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-14T15:30:32Z)
- Refractive Light-Field Features for Curved Transparent Objects in Structure from Motion [10.380414189465345]
We propose a novel image feature for light fields that detects and describes the patterns of light refracted through curved transparent objects.
We demonstrate improved structure-from-motion performance in challenging scenes containing refractive objects.
Our method is a critical step towards allowing robots to operate around refractive objects.
arXiv Detail & Related papers (2021-03-29T05:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.