HeadLighter: Disentangling Illumination in Generative 3D Gaussian Heads via Lightstage Captures
- URL: http://arxiv.org/abs/2601.02103v1
- Date: Mon, 05 Jan 2026 13:32:37 GMT
- Title: HeadLighter: Disentangling Illumination in Generative 3D Gaussian Heads via Lightstage Captures
- Authors: Yating Wang, Yuan Sun, Xuan Wang, Ran Yi, Boyao Zhou, Yipengjing Sun, Hongyu Liu, Yinuo Wang, Lizhuang Ma
- Abstract summary: Recent 3D-aware head generative models based on 3D Gaussian Splatting achieve real-time, photorealistic and view-consistent head synthesis. Deep entanglement of illumination and intrinsic appearance prevents controllable relighting. We introduce HeadLighter, a novel supervised framework that learns a physically plausible decomposition of appearance and illumination in head generative models.
- Score: 69.99269185793929
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent 3D-aware head generative models based on 3D Gaussian Splatting achieve real-time, photorealistic, and view-consistent head synthesis. However, a fundamental limitation persists: the deep entanglement of illumination and intrinsic appearance prevents controllable relighting. Existing disentanglement methods rely on strong assumptions to enable weakly supervised learning, which restricts their capacity to model complex illumination. To address this challenge, we introduce HeadLighter, a novel supervised framework that learns a physically plausible decomposition of appearance and illumination in head generative models. Specifically, we design a dual-branch architecture that separately models lighting-invariant head attributes and physically grounded rendering components. A progressive disentanglement training scheme is employed to gradually inject head appearance priors into the generative architecture, supervised by multi-view images captured under controlled lighting conditions in a light-stage setup. We further introduce a distillation strategy to generate high-quality normals for realistic rendering. Experiments demonstrate that our method preserves high-quality generation and real-time rendering, while simultaneously supporting explicit lighting and viewpoint editing. We will publicly release our code and dataset.
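To make the dual-branch idea concrete, below is a minimal sketch of how lighting-invariant intrinsics (albedo, normals) can be generated by one branch and recombined with a physically grounded shading step for relighting. All module names, tensor shapes, and the Lambertian shading model here are illustrative assumptions; the abstract does not describe the architecture at this level of detail, so this is a sketch of the general technique, not the paper's implementation.

```python
# Hypothetical sketch of a dual-branch decomposition: lighting-invariant
# intrinsics vs. physically grounded shading. Not the authors' code.
import torch
import torch.nn as nn

class IntrinsicBranch(nn.Module):
    """Predicts lighting-invariant per-Gaussian attributes (albedo, normal)."""
    def __init__(self, latent_dim=256, num_gaussians=4096):
        super().__init__()
        self.num_gaussians = num_gaussians
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_gaussians * 6),  # 3 albedo + 3 normal per Gaussian
        )

    def forward(self, z):
        out = self.mlp(z).view(-1, self.num_gaussians, 6)
        albedo = torch.sigmoid(out[..., :3])                              # [B, N, 3]
        normal = torch.nn.functional.normalize(out[..., 3:], dim=-1)      # unit normals
        return albedo, normal

def diffuse_shading(albedo, normal, light_dirs, light_colors):
    """Physically grounded (Lambertian) shading under directional lights --
    a stand-in for the rendering branch."""
    # light_dirs: [L, 3] unit directions, light_colors: [L, 3]
    cos = torch.clamp(torch.einsum('bnc,lc->bnl', normal, light_dirs), min=0.0)
    irradiance = torch.einsum('bnl,lc->bnc', cos, light_colors)           # [B, N, 3]
    return albedo * irradiance                                            # relit color

# Usage: decode intrinsics once from a latent, then relight under any lighting.
branch = IntrinsicBranch()
z = torch.randn(1, 256)
albedo, normal = branch(z)
light_dirs = torch.nn.functional.normalize(torch.randn(8, 3), dim=-1)
light_colors = torch.rand(8, 3)
colors = diffuse_shading(albedo, normal, light_dirs, light_colors)       # [1, 4096, 3]
```

Because the intrinsics are decoded independently of the lighting, the same latent can be re-shaded under arbitrary light conditions, which is the property the paper supervises with lightstage captures.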
Related papers
- RelightAnyone: A Generalized Relightable 3D Gaussian Head Model [60.590427852071805]
3D Gaussian Splatting (3DGS) has become a standard approach to reconstruct and render photorealistic 3D head avatars. Existing methods require subjects to be captured under complex time-multiplexed illumination, such as one-light-at-a-time (OLAT).
arXiv Detail & Related papers (2026-01-06T19:01:07Z)
- SplatBright: Generalizable Low-Light Scene Reconstruction from Sparse Views via Physically-Guided Gaussian Enhancement [26.905118897488077]
SplatBright is the first generalizable 3D Gaussian framework for joint low-light enhancement and reconstruction from sparse sRGB inputs. Our key idea is to integrate physically guided illumination modeling with geometry-appearance decoupling for consistent low-light reconstruction. Experiments on public and self-collected datasets demonstrate that SplatBright achieves superior novel view synthesis, cross-view consistency, and better generalization to unseen low-light scenes compared with both 2D and 3D methods.
arXiv Detail & Related papers (2025-12-21T09:06:16Z) - 3DPR: Single Image 3D Portrait Relight using Generative Priors [101.74130664920868]
3DPR is an image-based relighting model that leverages generative priors learnt from multi-view One-Light-at-A-Time (OLAT) images. We leverage the latent space of a pre-trained generative head model that provides a rich prior over face geometry learnt from in-the-wild image datasets. Our reflectance network operates in the latent space of the generative head model, crucially enabling a relatively small number of lightstage images to train the reflectance model.
arXiv Detail & Related papers (2025-10-17T17:37:42Z)
- GaRe: Relightable 3D Gaussian Splatting for Outdoor Scenes from Unconstrained Photo Collections [19.8966661817631]
We propose a 3D Gaussian splatting-based framework for outdoor relighting. Our approach simultaneously enables diverse shading manipulation and the generation of dynamic shadow effects.
arXiv Detail & Related papers (2025-07-28T04:29:57Z)
- Learning to Decouple the Lights for 3D Face Texture Modeling [71.67854540658472]
We introduce a novel approach to model 3D facial textures under unnatural illumination. Our framework learns to imitate the unnatural illumination as a composition of multiple separate light conditions. Experiments on both single images and video sequences demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-12-11T16:36:45Z)
- Lite2Relight: 3D-aware Single Image Portrait Relighting [87.62069509622226]
Lite2Relight is a novel technique that can predict 3D consistent head poses of portraits.
By utilizing a pre-trained geometry-aware encoder and a feature alignment module, we map input images into a relightable 3D space.
This includes producing 3D-consistent results of the full head, including hair, eyes, and expressions.
arXiv Detail & Related papers (2024-07-15T07:16:11Z)
- Relightable Gaussian Codec Avatars [26.255161061306428]
We present Relightable Gaussian Codec Avatars, a method to build high-fidelity relightable head avatars that can be animated to generate novel expressions.
Our geometry model based on 3D Gaussians can capture 3D-consistent sub-millimeter details such as hair strands and pores on dynamic face sequences.
We improve the fidelity of eye reflections and enable explicit gaze control by introducing relightable explicit eye models.
arXiv Detail & Related papers (2023-12-06T18:59:58Z)
- FaceLit: Neural 3D Relightable Faces [28.0806453092185]
FaceLit is capable of generating a 3D face that can be rendered at various user-defined lighting conditions and views.
We show state-of-the-art photorealism among 3D-aware GANs on the FFHQ dataset, achieving an FID score of 3.5.
arXiv Detail & Related papers (2023-03-27T17:59:10Z)
- Towards High Fidelity Monocular Face Reconstruction with Rich Reflectance using Self-supervised Learning and Ray Tracing [49.759478460828504]
Methods combining deep neural network encoders with differentiable rendering have opened up the path for very fast monocular reconstruction of geometry, lighting and reflectance.
Ray tracing was introduced for monocular face reconstruction within a classic optimization-based framework.
We propose a new method that greatly improves reconstruction quality and robustness in general scenes.
arXiv Detail & Related papers (2021-03-29T08:58:10Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
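For readers unfamiliar with the rendering formulation used by the Neural Reflectance Fields entry above, here is a minimal sketch of differentiable ray marching over a reflectance field. The field function, its outputs, and the single point light are assumptions made for illustration; this is a generic volume-rendering sketch, not the authors' implementation.

```python
# Hypothetical sketch: volume ray marching with per-sample reflectance shading.
import torch

def march_ray(field, origin, direction, light_pos, light_color,
              n_samples=64, near=0.1, far=4.0):
    # Sample points along the ray.
    t = torch.linspace(near, far, n_samples)                          # [S]
    pts = origin + t[:, None] * direction                             # [S, 3]
    sigma, albedo, normal = field(pts)                                # density, reflectance, normal
    # Shade each sample under one point light (Lambertian for simplicity).
    to_light = torch.nn.functional.normalize(light_pos - pts, dim=-1)
    cos = torch.clamp((normal * to_light).sum(-1, keepdim=True), min=0.0)
    radiance = albedo * light_color * cos                             # [S, 3]
    # Standard volume-rendering compositing (alpha from density and step size).
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma * delta)                           # [S, 1]
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:1]), 1.0 - alpha[:-1]]), dim=0)
    return (trans * alpha * radiance).sum(dim=0)                      # [3] pixel color

# Toy field: a soft sphere at the origin with constant gray albedo.
def toy_field(pts):
    r = pts.norm(dim=-1, keepdim=True)
    sigma = torch.relu(1.0 - r) * 20.0
    albedo = torch.full_like(pts, 0.7)
    normal = torch.nn.functional.normalize(pts, dim=-1)
    return sigma, albedo, normal

color = march_ray(toy_field,
                  origin=torch.tensor([0.0, 0.0, -3.0]),
                  direction=torch.tensor([0.0, 0.0, 1.0]),
                  light_pos=torch.tensor([2.0, 2.0, -2.0]),
                  light_color=torch.tensor([1.0, 1.0, 1.0]))
```

Because every step is differentiable, a learned field (in place of `toy_field`) can be optimized from images and then rendered under arbitrary viewpoints and lights, which is the property the listed paper exploits.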
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.