RelightableHands: Efficient Neural Relighting of Articulated Hand Models
- URL: http://arxiv.org/abs/2302.04866v1
- Date: Thu, 9 Feb 2023 18:59:48 GMT
- Title: RelightableHands: Efficient Neural Relighting of Articulated Hand Models
- Authors: Shun Iwase, Shunsuke Saito, Tomas Simon, Stephen Lombardi, Timur
Bagautdinov, Rohan Joshi, Fabian Prada, Takaaki Shiratori, Yaser Sheikh,
Jason Saragih
- Abstract summary: We present the first neural relighting approach for rendering high-fidelity personalized hands that can be animated in real-time under novel illumination.
Our approach adopts a teacher-student framework, where the teacher learns appearance under a single point light from images captured in a light-stage.
Using images rendered by the teacher model as training data, an efficient student model directly predicts appearance under natural illuminations in real-time.
- Score: 46.60594572471557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the first neural relighting approach for rendering high-fidelity
personalized hands that can be animated in real-time under novel illumination.
Our approach adopts a teacher-student framework, where the teacher learns
appearance under a single point light from images captured in a light-stage,
allowing us to synthesize hands in arbitrary illuminations but with heavy
compute. Using images rendered by the teacher model as training data, an
efficient student model directly predicts appearance under natural
illuminations in real-time. To achieve generalization, we condition the student
model with physics-inspired illumination features such as visibility, diffuse
shading, and specular reflections computed on a coarse proxy geometry,
maintaining a small computational overhead. Our key insight is that these
features have strong correlation with subsequent global light transport
effects, which proves sufficient as conditioning data for the neural relighting
network. Moreover, in contrast to bottleneck illumination conditioning, these
features are spatially aligned based on underlying geometry, leading to better
generalization to unseen illuminations and poses. In our experiments, we
demonstrate the efficacy of our illumination feature representations,
outperforming baseline approaches. We also show that our approach can
photorealistically relight two interacting hands at real-time speeds.
https://sh8.io/#/relightable_hands
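To make the conditioning idea concrete, below is a minimal sketch (not the authors' code) of the kind of physics-inspired illumination features the abstract describes: for each point on a coarse proxy geometry, a visibility term, Lambertian diffuse shading, and a Blinn-Phong-style specular term for a single point light. The function name, the shininess value, and the constant visibility placeholder are illustrative assumptions.

```python
import numpy as np

def illumination_features(points, normals, light_pos, cam_pos, shininess=32.0):
    """Per-point conditioning features for one point light.

    points, normals: (N, 3) arrays describing the coarse proxy surface.
    light_pos, cam_pos: (3,) world-space positions.
    Returns an (N, 3) array of [visibility, diffuse, specular] per point.
    """
    n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    l = light_pos - points
    l /= np.linalg.norm(l, axis=-1, keepdims=True)
    v = cam_pos - points
    v /= np.linalg.norm(v, axis=-1, keepdims=True)
    h = l + v
    h /= np.linalg.norm(h, axis=-1, keepdims=True)

    # Lambertian diffuse shading: clamped cosine between normal and light direction.
    diffuse = np.clip(np.sum(n * l, axis=-1), 0.0, None)
    # Blinn-Phong specular lobe as a cheap stand-in for specular reflections.
    specular = np.clip(np.sum(n * h, axis=-1), 0.0, None) ** shininess
    # Real visibility would come from ray casting or shadow mapping against the
    # proxy mesh; a constant stands in here to keep the sketch self-contained.
    visibility = np.ones(points.shape[0])

    return np.stack([visibility, diffuse, specular], axis=-1)
```

Because these features are computed per surface point, they can be rasterized into texture or screen space and stay spatially aligned with the underlying geometry, which is the property the abstract contrasts with bottleneck illumination conditioning.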
Related papers
- Baking Relightable NeRF for Real-time Direct/Indirect Illumination Rendering [4.812321790984493]
Real-time relighting is challenging due to the high computation cost of the rendering equation.
We propose a novel method that executes a CNN to compute primary surface points and rendering parameters.
Both distillations are trained from a pre-trained teacher model and provide real-time physically-based rendering under unseen lighting conditions.
arXiv Detail & Related papers (2024-09-16T14:38:26Z)
- Zero-Reference Low-Light Enhancement via Physical Quadruple Priors [58.77377454210244]
We propose a new zero-reference low-light enhancement framework trainable solely with normal light images.
This framework is able to restore our illumination-invariant prior back to images, automatically achieving low-light enhancement.
arXiv Detail & Related papers (2024-03-19T17:36:28Z)
- URHand: Universal Relightable Hands [64.25893653236912]
We present URHand, the first universal relightable hand model that generalizes across viewpoints, poses, illuminations, and identities.
Our model allows few-shot personalization using images captured with a mobile phone, and is ready to be photorealistically rendered under novel illuminations.
arXiv Detail & Related papers (2024-01-10T18:59:51Z)
- Physics-based Indirect Illumination for Inverse Rendering [70.27534648770057]
We present a physics-based inverse rendering method that learns the illumination, geometry, and materials of a scene from posed multi-view RGB images.
As a side product, our physics-based inverse rendering model also facilitates flexible and realistic material editing as well as relighting.
arXiv Detail & Related papers (2022-12-09T07:33:49Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Modeling Indirect Illumination for Inverse Rendering [31.734819333921642]
In this paper, we propose a novel approach to efficiently recovering spatially-varying indirect illumination.
The key insight is that indirect illumination can be conveniently derived from the neural radiance field learned from input images.
Experiments on both synthetic and real data demonstrate the superior performance of our approach compared to previous work.
arXiv Detail & Related papers (2022-04-14T09:10:55Z)
- Neural Relightable Participating Media Rendering [26.431106015677]
We learn neural representations for participating media with a complete simulation of global illumination.
Our approach achieves superior visual quality and numerical performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-10-25T14:36:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.