URHand: Universal Relightable Hands
- URL: http://arxiv.org/abs/2401.05334v1
- Date: Wed, 10 Jan 2024 18:59:51 GMT
- Title: URHand: Universal Relightable Hands
- Authors: Zhaoxi Chen, Gyeongsik Moon, Kaiwen Guo, Chen Cao, Stanislav
Pidhorskyi, Tomas Simon, Rohan Joshi, Yuan Dong, Yichen Xu, Bernardo Pires,
He Wen, Lucas Evans, Bo Peng, Julia Buffalini, Autumn Trimble, Kevyn McPhail,
Melissa Schoeller, Shoou-I Yu, Javier Romero, Michael Zollhöfer, Yaser
Sheikh, Ziwei Liu, Shunsuke Saito
- Abstract summary: We present URHand, the first universal relightable hand model that generalizes across viewpoints, poses, illuminations, and identities.
Our model allows few-shot personalization using images captured with a mobile phone, and is ready to be photorealistically rendered under novel illuminations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing photorealistic relightable hand models require extensive
identity-specific observations in different views, poses, and illuminations,
and face challenges in generalizing to natural illuminations and novel
identities. To bridge this gap, we present URHand, the first universal
relightable hand model that generalizes across viewpoints, poses,
illuminations, and identities. Our model allows few-shot personalization using
images captured with a mobile phone, and is ready to be photorealistically
rendered under novel illuminations. To simplify the personalization process
while retaining photorealism, we build a powerful universal relightable prior
based on neural relighting from multi-view images of hands captured in a light
stage with hundreds of identities. The key challenge is scaling the
cross-identity training while maintaining personalized fidelity and sharp
details without compromising generalization under natural illuminations. To
this end, we propose a spatially varying linear lighting model as the neural
renderer that takes physics-inspired shading as an input feature. By removing
non-linear activations and biases, our specifically designed lighting model
explicitly keeps the linearity of light transport. This enables single-stage
training from light-stage data while generalizing to real-time rendering under
arbitrary continuous illuminations across diverse identities. In addition, we
introduce the joint learning of a physically based model and our neural
relighting model, which further improves fidelity and generalization. Extensive
experiments show that our approach achieves superior performance over existing
methods in terms of both quality and generalizability. We also demonstrate
quick personalization of URHand from a short phone scan of an unseen identity.
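Since light transport is linear in the incident illumination, a renderer that is strictly linear in its lighting input can be trained on discrete light-stage lights and still generalize, by superposition, to arbitrary continuous environments. Below is a minimal PyTorch sketch of that idea; the module name, feature dimensions, and the weight-prediction branch are hypothetical assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class LinearLightingModel(nn.Module):
    """Minimal sketch of a spatially varying linear lighting model.

    The lighting branch has no bias and no non-linear activation, so the
    predicted radiance stays linear in the shading input; spatial variation
    comes from per-location weights predicted by a separate (possibly
    non-linear) branch. All names and sizes here are illustrative.
    """

    def __init__(self, feat_dim=16, shading_dim=8, out_dim=3):
        super().__init__()
        self.out_dim = out_dim
        # Light-independent branch: predicts per-texel mixing weights from
        # features such as pose or identity; non-linearity is fine here
        # because these inputs carry no illumination.
        self.weight_net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, out_dim * shading_dim),
        )

    def forward(self, feats, shading):
        # feats:   (N, feat_dim)     light-independent features per texel
        # shading: (N, shading_dim)  physics-inspired shading features
        W = self.weight_net(feats).view(-1, self.out_dim, shading.shape[-1])
        # Pure linear map over `shading` (no bias, no activation), so
        # summing two lighting inputs sums the predicted radiance.
        return torch.einsum('noc,nc->no', W, shading)

if __name__ == "__main__":
    model = LinearLightingModel()
    feats = torch.randn(4, 16)
    s1, s2 = torch.randn(4, 8), torch.randn(4, 8)
    # Linearity check: rendering under summed lights == sum of renders,
    # the property that lets light-stage training carry over to
    # continuous illuminations.
    assert torch.allclose(model(feats, s1 + s2),
                          model(feats, s1) + model(feats, s2), atol=1e-5)
```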
Related papers
- URAvatar: Universal Relightable Gaussian Codec Avatars
We present a new approach to creating photorealistic and relightable head avatars from a phone scan with unknown illumination.
The reconstructed avatars can be animated and relit in real time with the global illumination of diverse environments.
arXiv Detail & Related papers (2024-10-31T17:59:56Z)
- DifFRelight: Diffusion-Based Facial Performance Relighting
We present a novel framework for free-viewpoint facial performance relighting using diffusion-based image-to-image translation.
We train a diffusion model for precise lighting control, enabling high-fidelity relit facial images from flat-lit inputs.
The model accurately reproduces complex lighting effects like eye reflections, subsurface scattering, self-shadowing, and translucency.
arXiv Detail & Related papers (2024-10-10T17:56:44Z)
- Relightable Neural Actor with Intrinsic Decomposition and Pose Control
We propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relighted.
For training, our method solely requires a multi-view recording of the human under a known, but static lighting condition.
To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors.
arXiv Detail & Related papers (2023-12-18T14:30:13Z)
- RelightableHands: Efficient Neural Relighting of Articulated Hand Models
We present the first neural relighting approach for rendering high-fidelity personalized hands that can be animated in real time under novel illumination.
Our approach adopts a teacher-student framework, where the teacher learns appearance under a single point light from images captured in a light-stage.
Using images rendered by the teacher model as training data, an efficient student model directly predicts appearance under natural illuminations in real time (a minimal sketch of this distillation follows the list below).
arXiv Detail & Related papers (2023-02-09T18:59:48Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Neural Video Portrait Relighting in Real-time via Consistency Modeling
We propose a neural approach for real-time, high-quality and coherent video portrait relighting.
We propose a hybrid structure and lighting disentanglement in an encoder-decoder architecture.
We also propose a lighting sampling strategy to model illumination consistency and mutation for natural portrait light manipulation in the real world.
arXiv Detail & Related papers (2021-04-01T14:13:28Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
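The teacher-student distillation mentioned in the RelightableHands entry above can be pictured as follows. This is an illustrative sketch, not that paper's exact procedure: the function names and the use of linear superposition to assemble environment-lit pseudo ground truth from point-light renders are assumptions.

```python
import torch

def distillation_loss(student, teacher, x, light_dirs, env_intensities):
    """Hypothetical distillation step (illustrative names, not the
    RelightableHands API): since light transport is linear, the teacher's
    renders under individual point lights can be combined with
    environment-map intensities to supervise the student's prediction
    under the full environment."""
    with torch.no_grad():
        # Superpose teacher point-light renders weighted by the environment.
        target = sum(w * teacher(x, d)
                     for d, w in zip(light_dirs, env_intensities))
    pred = student(x, env_intensities)  # student sees the environment directly
    return torch.mean((pred - target) ** 2)
```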