Light Up Your Face: A Physically Consistent Dataset and Diffusion Model for Face Fill-Light Enhancement
- URL: http://arxiv.org/abs/2602.04300v1
- Date: Wed, 04 Feb 2026 08:03:41 GMT
- Title: Light Up Your Face: A Physically Consistent Dataset and Diffusion Model for Face Fill-Light Enhancement
- Authors: Jue Gong, Zihan Zhou, Jingkai Wang, Xiaohong Liu, Yulun Zhang, Xiaokang Yang
- Abstract summary: Face fill-light enhancement (FFE) brightens underexposed faces by adding virtual fill light while keeping the original scene illumination and background unchanged. Most face relighting methods aim to reshape overall lighting, which can suppress the input illumination or modify the entire scene. To support scalable learning, we introduce LightYourFace-160K (LYF-160K), a large-scale paired dataset built with a physically consistent fill light.
- Score: 49.28352100233792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face fill-light enhancement (FFE) brightens underexposed faces by adding virtual fill light while keeping the original scene illumination and background unchanged. Most face relighting methods aim to reshape overall lighting, which can suppress the input illumination or modify the entire scene, leading to foreground-background inconsistency and failing to meet practical FFE needs. To support scalable learning, we introduce LightYourFace-160K (LYF-160K), a large-scale paired dataset built with a physically consistent renderer that injects a disk-shaped area fill light controlled by six disentangled factors, producing 160K before-and-after pairs. We first pretrain a physics-aware lighting prompt (PALP) that embeds the 6D parameters into conditioning tokens, using an auxiliary planar-light reconstruction objective. Building on a pretrained diffusion backbone, we then train fill-light diffusion (FiLitDiff), an efficient one-step model conditioned on physically grounded lighting codes, enabling controllable and high-fidelity fill lighting at low computational cost. Experiments on held-out paired sets demonstrate strong perceptual quality and competitive full-reference metrics, while better preserving background illumination. The dataset and model will be available at https://github.com/gobunu/Light-Up-Your-Face.
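The pipeline above conditions a one-step diffusion model on a 6D fill-light parameterization embedded as tokens (PALP). The abstract does not spell out the six factors or the embedding architecture, so the sketch below is only a hypothetical illustration: it assumes the factors are azimuth, elevation, distance, disk radius, intensity, and color temperature, and uses a plain MLP as the parameter-to-token encoder.

```python
# Hypothetical sketch of a physics-aware lighting prompt: map a 6D fill-light
# parameter vector to conditioning tokens for a diffusion backbone.
# The six factors and the MLP design are assumptions, not the paper's exact PALP.
import torch
import torch.nn as nn

class FillLightPrompt(nn.Module):
    """Encodes [azimuth, elevation, distance, disk_radius, intensity, color_temp]."""

    def __init__(self, num_tokens: int = 4, dim: int = 768):
        super().__init__()
        self.num_tokens, self.dim = num_tokens, dim
        self.mlp = nn.Sequential(
            nn.Linear(6, 256),
            nn.SiLU(),
            nn.Linear(256, num_tokens * dim),
        )

    def forward(self, params: torch.Tensor) -> torch.Tensor:
        # params: (B, 6) lighting parameters -> (B, num_tokens, dim) tokens
        # (in practice the raw parameters would be normalized to similar ranges first)
        return self.mlp(params).view(-1, self.num_tokens, self.dim)

# Example: one roughly frontal, warm fill light at moderate intensity.
params = torch.tensor([[0.3, 0.8, 1.5, 0.2, 0.7, 5500.0]])
tokens = FillLightPrompt()(params)   # shape (1, 4, 768)
```

In such a setup the resulting tokens would typically be injected through the backbone's cross-attention alongside any other conditioning, which keeps the lighting control separate from image content.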
Related papers
- POLAR: A Portrait OLAT Dataset and Generative Framework for Illumination-Aware Face Modeling [51.7495375918484]
Face relighting aims to synthesize realistic portraits under novel illumination while preserving identity and geometry.
We introduce POLAR, a large-scale and physically calibrated One-Light-at-a-Time dataset containing over 200 subjects captured under 156 lighting directions, multiple views, and diverse expressions.
We develop a flow-based generative model POLARNet that predicts per-light OLAT responses from a single portrait, capturing fine-grained and direction-aware illumination effects while preserving facial identity.
arXiv Detail & Related papers (2025-12-15T11:04:09Z)
- RelightMaster: Precise Video Relighting with Multi-plane Light Images [59.56389629981934]
RelightMaster is a novel framework for accurate and controllable video relighting.
It generates physically plausible lighting and shadows and preserves original scene content.
arXiv Detail & Related papers (2025-11-09T08:12:09Z)
- DreamLight: Towards Harmonious and Consistent Image Relighting [41.90032795389507]
We introduce a model named DreamLight for universal image relighting.
It can seamlessly composite subjects into a new background while maintaining aesthetic uniformity in terms of lighting and color tone.
arXiv Detail & Related papers (2025-06-17T14:05:24Z)
- LightLab: Controlling Light Sources in Images with Diffusion Models [49.83835236202516]
We present a diffusion-based method for fine-grained, parametric control over light sources in an image.
We leverage the linearity of light to synthesize image pairs depicting controlled light changes of either a target light source or ambient illumination.
We show how our method can achieve compelling light editing results, and outperforms existing methods based on user preference.
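The "linearity of light" mentioned above refers to the fact that, in linear (pre-gamma) intensity space, an image lit by several sources is the pixel-wise sum of the images lit by each source alone, so pairs showing a controlled change of one light can be synthesized by rescaling that light's component image. A minimal sketch of this compositing idea (array shapes and gain values are illustrative, not from the paper):

```python
# Minimal sketch of linear-light compositing: in linear intensity space,
# image(ambient + k * source) == image(ambient) + k * image(source).
import numpy as np

def composite(ambient_img: np.ndarray, source_img: np.ndarray, gain: float) -> np.ndarray:
    """ambient_img, source_img: linear-space float arrays of shape (H, W, 3)."""
    return np.clip(ambient_img + gain * source_img, 0.0, 1.0)

# Synthesize a "before" frame (target light at full power) and an "after" frame
# (same light dimmed to 25%) from the same two component images.
ambient = 0.3 * np.random.rand(4, 4, 3)   # stand-in for an ambient-only capture
lamp    = 0.5 * np.random.rand(4, 4, 3)   # stand-in for a target-light-only capture
before  = composite(ambient, lamp, 1.0)
after   = composite(ambient, lamp, 0.25)
```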
arXiv Detail & Related papers (2025-05-14T17:57:27Z)
- DifFRelight: Diffusion-Based Facial Performance Relighting [12.909429637057343]
We present a novel framework for free-viewpoint facial performance relighting using diffusion-based image-to-image translation.
We train a diffusion model for precise lighting control, enabling high-fidelity relit facial images from flat-lit inputs.
The model accurately reproduces complex lighting effects like eye reflections, subsurface scattering, self-shadowing, and translucency.
arXiv Detail & Related papers (2024-10-10T17:56:44Z)
- Zero-Reference Low-Light Enhancement via Physical Quadruple Priors [58.77377454210244]
We propose a new zero-reference low-light enhancement framework trainable solely with normal light images.
This framework is able to restore our illumination-invariant prior back to images, automatically achieving low-light enhancement.
arXiv Detail & Related papers (2024-03-19T17:36:28Z)
- DiFaReli++: Diffusion Face Relighting with Consistent Cast Shadows [11.566896201650056]
We introduce a novel approach to single-view face relighting in the wild, addressing challenges such as global illumination and cast shadows.
We propose a single-shot relighting framework that requires just one network pass, given pre-processed data, and even outperforms the teacher model across all metrics.
arXiv Detail & Related papers (2023-04-19T08:03:20Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
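A simple baseline for the neighbor aggregation described in the Light Stage Super-Resolution abstract is to blend the OLAT images whose stage lights lie closest to the query direction; the learned network then restores the high-frequency shadow and specular detail such a blend misses. The snippet below is a hypothetical illustration of that weighting, not the paper's actual model:

```python
# Hypothetical sketch: blend OLAT (one-light-at-a-time) captures from the stage
# lights nearest to a query light direction, as a crude stand-in for the learned
# super-resolution network described in the abstract above.
import numpy as np

def aggregate_olat(olat_images: np.ndarray, light_dirs: np.ndarray,
                   query_dir: np.ndarray, k: int = 4, temp: float = 0.1) -> np.ndarray:
    """olat_images: (L, H, W, 3); light_dirs: (L, 3) unit vectors; query_dir: (3,)."""
    cos_sim = light_dirs @ query_dir          # angular proximity of each stage light
    nearest = np.argsort(-cos_sim)[:k]        # indices of the k closest lights
    weights = np.exp(cos_sim[nearest] / temp)
    weights /= weights.sum()
    return np.tensordot(weights, olat_images[nearest], axes=1)   # (H, W, 3) blend

# Toy usage with random stand-in data (150 stage lights, 8x8 images).
dirs = np.random.randn(150, 3)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
imgs = np.random.rand(150, 8, 8, 3)
blend = aggregate_olat(imgs, dirs, np.array([0.0, 0.0, 1.0]))
```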