Relightable Holoported Characters: Capturing and Relighting Dynamic Human Performance from Sparse Views
- URL: http://arxiv.org/abs/2512.00255v1
- Date: Sat, 29 Nov 2025 00:17:34 GMT
- Title: Relightable Holoported Characters: Capturing and Relighting Dynamic Human Performance from Sparse Views
- Authors: Kunwar Maheep Singh, Jianchun Chen, Vladislav Golyanik, Stephan J. Garbin, Thabo Beeler, Rishabh Dabral, Marc Habermann, Christian Theobalt
- Abstract summary: We present Relightable Holoported Characters (RHC), a person-specific method for free-view rendering and relighting of full-body and highly dynamic humans. Our transformer-based RelightNet predicts relit appearance within a single network pass, avoiding costly OLAT-basis capture and generation. Experiments demonstrate our method's superior visual fidelity and lighting reproduction compared to state-of-the-art approaches.
- Score: 82.15089065452081
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Relightable Holoported Characters (RHC), a novel person-specific method for free-view rendering and relighting of full-body and highly dynamic humans solely observed from sparse-view RGB videos at inference. In contrast to classical one-light-at-a-time (OLAT)-based human relighting, our transformer-based RelightNet predicts relit appearance within a single network pass, avoiding costly OLAT-basis capture and generation. For training such a model, we introduce a new capture strategy and dataset recorded in a multi-view lightstage, where we alternate frames lit by random environment maps with uniformly lit tracking frames, simultaneously enabling accurate motion tracking and diverse illumination as well as dynamics coverage. Inspired by the rendering equation, we derive physics-informed features that encode geometry, albedo, shading, and the virtual camera view from a coarse human mesh proxy and the input views. Our RelightNet then takes these features as input and cross-attends them with a novel lighting condition, and regresses the relit appearance in the form of texel-aligned 3D Gaussian splats attached to the coarse mesh proxy. Consequently, our RelightNet implicitly learns to efficiently compute the rendering equation for novel lighting conditions within a single feed-forward pass. Experiments demonstrate our method's superior visual fidelity and lighting reproduction compared to state-of-the-art approaches. Project page: https://vcai.mpi-inf.mpg.de/projects/RHC/
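The "physics-informed features" above are derived from the rendering equation; the abstract does not reproduce it, but the standard form it refers to is:

```latex
% Outgoing radiance at surface point x toward direction \omega_o:
L_o(\mathbf{x}, \omega_o) =
  \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
  L_i(\mathbf{x}, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i
```

where f_r is the BRDF, L_i the incident lighting, and n the surface normal; RelightNet effectively learns to evaluate this integral for a novel L_i in one feed-forward pass. Below is a minimal sketch of that pass, assuming hypothetical module names, token shapes, and a 14-parameter Gaussian head (none of these details appear in the abstract):

```python
import torch
import torch.nn as nn

class RelightNet(nn.Module):
    """Hypothetical sketch: physics-informed texel features cross-attend
    to tokens of a novel lighting condition, and a linear head regresses
    texel-aligned 3D Gaussian splat parameters."""

    def __init__(self, feat_dim=256, light_dim=256, n_heads=8, gauss_dim=14):
        super().__init__()
        layer = nn.TransformerEncoderLayer(feat_dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.cross_attn = nn.MultiheadAttention(
            feat_dim, n_heads, kdim=light_dim, vdim=light_dim, batch_first=True)
        # Assumed per-texel parameterization: offset (3) + rotation (4)
        # + scale (3) + opacity (1) + RGB (3) = 14 values per Gaussian.
        self.head = nn.Linear(feat_dim, gauss_dim)

    def forward(self, texel_feats, light_tokens):
        # texel_feats: (B, T, feat_dim) geometry/albedo/shading/view features
        # light_tokens: (B, L, light_dim) encoding of the target environment map
        x = self.encoder(texel_feats)
        x, _ = self.cross_attn(query=x, key=light_tokens, value=light_tokens)
        return self.head(x)  # (B, T, gauss_dim): relit Gaussians on the mesh
```

One forward call per frame and lighting condition is what replaces the OLAT pipeline: instead of summing a captured one-light-at-a-time basis, the network amortizes the lighting integral.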
Related papers
- UniLumos: Fast and Unified Image and Video Relighting with Physics-Plausible Feedback [31.03901228901908]
We present UniLumos, a unified relighting framework for both images and videos. We explicitly align lighting effects with the scene structure, enhancing physical plausibility. Experiments demonstrate that UniLumos achieves state-of-the-art relighting with significantly improved physical consistency.
arXiv Detail & Related papers (2025-11-03T15:41:41Z)
- Inverse Image-Based Rendering for Light Field Generation from Single Images [30.856397422416517]
We propose a novel view synthesis method for light field generation from only single images, named inverse image-based rendering. Our method reconstructs light flows in a space from image pixels, which behaves in the opposite way to image-based rendering. Our neural network first stores the light flow of source rays from the input image, then computes the relationships among them through cross-attention.
arXiv Detail & Related papers (2025-10-23T02:12:45Z)
- 3DPR: Single Image 3D Portrait Relight using Generative Priors [101.74130664920868]
3DPR is an image-based relighting model that leverages generative priors learnt from multi-view One-Light-at-A-Time (OLAT) images. We leverage the latent space of a pre-trained generative head model that provides a rich prior over face geometry learnt from in-the-wild image datasets. Our reflectance network operates in the latent space of the generative head model, crucially enabling the reflectance model to be trained from a relatively small number of lightstage images.
arXiv Detail & Related papers (2025-10-17T17:37:42Z)
- BecomingLit: Relightable Gaussian Avatars with Hybrid Neural Shading [3.447848701446988]
We introduce BecomingLit, a novel method for reconstructing relightable, high-resolution head avatars that can be rendered from novel viewpoints at interactive rates. We collect a novel dataset consisting of diverse multi-view sequences of numerous subjects under varying illumination conditions. We propose a new hybrid neural shading approach, combining a neural diffuse BRDF with an analytical specular term.
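To make the hybrid-shading idea concrete, here is a minimal sketch pairing a learned diffuse BRDF with an analytical specular lobe. Blinn-Phong is assumed purely for illustration (the summary only says "analytical specular term"), and all names and shapes are hypothetical:

```python
import torch
import torch.nn as nn

class HybridShader(nn.Module):
    """Sketch: neural diffuse BRDF plus an analytical specular term."""

    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        # Learned diffuse response: per-point features + light direction -> RGB.
        self.diffuse_mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus())

    def forward(self, feats, n, l, v, spec_albedo, shininess=32.0):
        # feats: (N, feat_dim) appearance features; n, l, v: (N, 3) unit
        # normal, light, and view directions; spec_albedo: (N, 1) or (N, 3).
        cos_i = (n * l).sum(-1, keepdim=True).clamp(min=0.0)
        diffuse = self.diffuse_mlp(torch.cat([feats, l], dim=-1)) * cos_i
        # Analytical Blinn-Phong lobe (assumption; could be GGX instead).
        h = nn.functional.normalize(l + v, dim=-1)
        spec = spec_albedo * (n * h).sum(-1, keepdim=True).clamp(min=0.0) ** shininess
        return diffuse + spec
```

The appeal of the split is that the analytical lobe keeps highlights physically shaped while the network only has to learn the lower-frequency diffuse response.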
arXiv Detail & Related papers (2025-06-06T17:53:58Z)
- BEAM: Bridging Physically-based Rendering and Gaussian Modeling for Relightable Volumetric Video [58.97416204208624]
We present BEAM, a novel pipeline that bridges 4D Gaussian representations with physically-based rendering (PBR) to produce high-quality, relightable volumetric videos. By offering realistic, lifelike visualizations under diverse lighting conditions, BEAM opens new possibilities for interactive entertainment, storytelling, and creative visualization.
arXiv Detail & Related papers (2025-02-12T10:58:09Z)
- D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper contributes to the field by introducing a new method for dynamic novel view synthesis from monocular video, such as smartphone captures. Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids.
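As a rough illustration of "hash-encoded neural feature grids", below is a toy single-level spatial hash lookup. The actual method presumably uses multi-resolution grids with trilinear interpolation (an assumption, in the style of Instant-NGP), and per the summary keeps geometry and appearance in separate grids:

```python
import torch
import torch.nn as nn

class ToyHashGrid(nn.Module):
    """Toy single-level hash-encoded feature grid (illustrative only)."""

    def __init__(self, table_size=2**16, feat_dim=8, resolution=128):
        super().__init__()
        self.table = nn.Parameter(1e-4 * torch.randn(table_size, feat_dim))
        self.resolution = resolution
        # Large primes decorrelate the coordinate axes in the hash.
        self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

    def forward(self, pts):
        # pts: (N, 3) points in [0, 1)^3; nearest-voxel lookup, no interpolation.
        idx = (pts.clamp(0, 1 - 1e-6) * self.resolution).long()
        h = (idx * self.primes).sum(-1) % self.table.shape[0]
        return self.table[h]  # (N, feat_dim) learned features per point
```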
arXiv Detail & Related papers (2024-06-14T14:35:44Z)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
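A schematic sketch of that reflected-ray feature casting, with a placeholder `field` callable standing in for the NeRF representation; the sampling and aggregation here are simplifying assumptions, not the paper's actual renderer:

```python
import torch

def reflect(d, n):
    """Mirror direction d about unit normal n: r = d - 2(d.n)n."""
    return d - 2.0 * (d * n).sum(-1, keepdim=True) * n

def cast_reflection_features(x, view_dir, normal, field,
                             n_samples=32, t_max=4.0):
    # x: (N, 3) points along camera rays; view_dir: (N, 3) unit directions
    # from the camera toward x; field: maps (..., 3) points to feature vectors.
    r = reflect(view_dir, normal)                             # reflected dirs
    t = torch.linspace(0.0, t_max, n_samples, device=x.device)
    pts = x[:, None, :] + t[None, :, None] * r[:, None, :]    # (N, S, 3)
    feats = field(pts)                                        # (N, S, F)
    # Crude uniform aggregation; the real method weights samples with
    # volume-rendering densities along the reflected ray.
    return feats.mean(dim=1)
```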
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- Relightable Gaussian Codec Avatars [26.255161061306428]
We present Relightable Gaussian Codec Avatars, a method to build high-fidelity relightable head avatars that can be animated to generate novel expressions.
Our geometry model based on 3D Gaussians can capture 3D-consistent sub-millimeter details such as hair strands and pores on dynamic face sequences.
We improve the fidelity of eye reflections and enable explicit gaze control by introducing relightable explicit eye models.
arXiv Detail & Related papers (2023-12-06T18:59:58Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)