RelightAnyone: A Generalized Relightable 3D Gaussian Head Model
- URL: http://arxiv.org/abs/2601.03357v1
- Date: Tue, 06 Jan 2026 19:01:07 GMT
- Title: RelightAnyone: A Generalized Relightable 3D Gaussian Head Model
- Authors: Yingyan Xu, Pramod Rao, Sebastian Weiss, Gaspard Zoss, Markus Gross, Christian Theobalt, Marc Habermann, Derek Bradley
- Abstract summary: 3D Gaussian Splatting (3DGS) has become a standard approach to reconstruct and render photorealistic 3D head avatars. Existing methods require subjects to be captured under complex time-multiplexed illumination, such as one-light-at-a-time (OLAT) lighting.
- Score: 60.590427852071805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian Splatting (3DGS) has become a standard approach to reconstruct and render photorealistic 3D head avatars. A major challenge is to relight the avatars to match any scene illumination. For high-quality relighting, existing methods require subjects to be captured under complex time-multiplexed illumination, such as one-light-at-a-time (OLAT). We propose a new generalized relightable 3D Gaussian head model that can relight any subject observed in single- or multi-view images without requiring OLAT data for that subject. Our core idea is to learn a mapping from flat-lit 3DGS avatars to corresponding relightable Gaussian parameters for that avatar. Our model consists of two stages: a first stage that models flat-lit 3DGS avatars without OLAT lighting, and a second stage that learns the mapping to physically-based reflectance parameters for high-quality relighting. This two-stage design allows us to train the first stage across diverse existing multi-view datasets without OLAT lighting, ensuring cross-subject generalization, where we learn a dataset-specific lighting code for self-supervised lighting alignment. Subsequently, the second stage can be trained on a significantly smaller dataset of subjects captured under OLAT illumination. Together, this allows our method to generalize well and relight any subject from the first stage as if we had captured them under OLAT lighting. Furthermore, we can fit our model to unseen subjects from as little as a single image, allowing several applications in novel view synthesis and relighting for digital avatars.
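The two-stage pipeline described in the abstract can be illustrated with a minimal sketch: a flat-lit Gaussian's per-point features are mapped by a small network to reflectance parameters (albedo, roughness, normal), which are then shaded under a target light. All shapes, feature layouts, and the Lambertian shading term below are illustrative assumptions, not the paper's actual architecture or shading model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Tiny stand-in for the learned stage-2 mapping network (hypothetical)."""
    for i, (W, b) in enumerate(weights):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers
    return x

# Hypothetical flat-lit 3DGS avatar: N Gaussians, each with a
# position (3), flat-lit color (3), and opacity (1) -> 7 features.
N = 1024
flat_lit = rng.normal(size=(N, 7)).astype(np.float32)

# Stage 2 (sketch): map flat-lit features to physically-based
# reflectance parameters: albedo (3), roughness (1), normal (3).
dims = [7, 32, 7]
weights = [(rng.normal(scale=0.1, size=(a, b)), np.zeros(b))
           for a, b in zip(dims[:-1], dims[1:])]
out = mlp(flat_lit, weights)

albedo = 1.0 / (1.0 + np.exp(-out[:, :3]))       # sigmoid -> [0, 1]
roughness = 1.0 / (1.0 + np.exp(-out[:, 3:4]))   # unused in this toy shader
normals = out[:, 4:7]
normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-8

# Relight each Gaussian under a single distant light. Only a Lambertian
# term is shown; the paper's shading is far more sophisticated.
light_dir = np.array([0.0, 0.5, 0.866], dtype=np.float32)
ndotl = np.clip(normals @ light_dir, 0.0, None)[:, None]
relit_rgb = albedo * ndotl

print(relit_rgb.shape)  # per-Gaussian relit colors
```

The key property the abstract emphasizes is that only this second-stage mapping needs OLAT supervision; the flat-lit avatar it consumes can be trained on ordinary multi-view data.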
Related papers
- HeadLighter: Disentangling Illumination in Generative 3D Gaussian Heads via Lightstage Captures [69.99269185793929]
Recent 3D-aware head generative models based on 3D Gaussian Splatting achieve real-time, photorealistic and view-consistent head synthesis. The deep entanglement of illumination and intrinsic appearance prevents controllable relighting. We introduce HeadLighter, a novel supervised framework that learns a physically plausible decomposition of appearance and illumination in head generative models.
arXiv Detail & Related papers (2026-01-05T13:32:37Z) - 3DPR: Single Image 3D Portrait Relight using Generative Priors [101.74130664920868]
3DPR is an image-based relighting model that leverages generative priors learnt from multi-view One-Light-at-A-Time (OLAT) images. We leverage the latent space of a pre-trained generative head model that provides a rich prior over face geometry learnt from in-the-wild image datasets. Our reflectance network operates in the latent space of the generative head model, crucially enabling a relatively small number of lightstage images to train the reflectance model.
arXiv Detail & Related papers (2025-10-17T17:37:42Z) - URAvatar: Universal Relightable Gaussian Codec Avatars [42.25313535192927]
We present a new approach to creating photorealistic and relightable head avatars from a phone scan with unknown illumination.
The reconstructed avatars can be animated and relit in real time with the global illumination of diverse environments.
arXiv Detail & Related papers (2024-10-31T17:59:56Z) - A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis [6.883971329818549]
We introduce a method to create relightable radiance fields using single-illumination data.
We first fine-tune a 2D diffusion model on a multi-illumination dataset conditioned by light direction.
We show results on synthetic and real multi-view data under single illumination.
arXiv Detail & Related papers (2024-09-13T16:07:25Z) - Lite2Relight: 3D-aware Single Image Portrait Relighting [87.62069509622226]
Lite2Relight is a novel technique that can predict 3D consistent head poses of portraits.
By utilizing a pre-trained geometry-aware encoder and a feature alignment module, we map input images into a relightable 3D space.
This includes producing 3D-consistent results of the full head, including hair, eyes, and expressions.
arXiv Detail & Related papers (2024-07-15T07:16:11Z) - MetaGS: A Meta-Learned Gaussian-Phong Model for Out-of-Distribution 3D Scene Relighting [63.5925701087252]
Out-of-distribution (OOD) 3D relighting requires novel view synthesis under unseen lighting conditions. We introduce MetaGS to tackle this challenge from two perspectives.
arXiv Detail & Related papers (2024-05-31T13:48:54Z) - MoSAR: Monocular Semi-Supervised Model for Avatar Reconstruction using Differentiable Shading [3.2586340344073927]
MoSAR is a method for 3D avatar generation from monocular images.
We propose a semi-supervised training scheme that improves generalization by learning from both light stage and in-the-wild datasets.
We also introduce a new dataset, named FFHQ-UV-Intrinsics, the first public dataset providing intrinsic face attributes at scale.
arXiv Detail & Related papers (2023-12-20T15:12:53Z) - DiFaReli++: Diffusion Face Relighting with Consistent Cast Shadows [11.566896201650056]
We introduce a novel approach to single-view face relighting in the wild, addressing challenges such as global illumination and cast shadows. We propose a single-shot relighting framework that requires just one network pass, given pre-processed data, and even outperforms the teacher model across all metrics.
arXiv Detail & Related papers (2023-04-19T08:03:20Z) - DeepPS2: Revisiting Photometric Stereo Using Two Differently Illuminated Images [27.58399208954106]
Photometric stereo is the problem of recovering 3D surface normals from images of an object captured under different lighting conditions.
We propose an inverse rendering-based deep learning framework, called DeepPS2, that jointly performs surface normal, albedo, lighting estimation, and image relighting.
arXiv Detail & Related papers (2022-07-05T13:14:10Z) - Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.