Physics-Grounded Shadow Generation from Monocular 3D Geometry Priors and Approximate Light Direction
- URL: http://arxiv.org/abs/2512.06174v1
- Date: Fri, 05 Dec 2025 21:52:23 GMT
- Title: Physics-Grounded Shadow Generation from Monocular 3D Geometry Priors and Approximate Light Direction
- Authors: Shilin Hu, Jingyi Xu, Akshat Dave, Dimitris Samaras, Hieu Le
- Abstract summary: In the physics of shadow formation, the occluder blocks some light rays cast from the light source that would otherwise arrive at the surface, creating a shadow that follows the silhouette of the occluder. We propose a novel framework that embeds explicit physical modeling - geometry and illumination - into deep-learning-based shadow generation.
- Score: 48.727438709248
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Shadow generation aims to produce photorealistic shadows that are visually consistent with object geometry and scene illumination. In the physics of shadow formation, the occluder blocks some light rays cast from the light source that would otherwise arrive at the surface, creating a shadow that follows the silhouette of the occluder. However, such explicit physical modeling has rarely been used in deep-learning-based shadow generation. In this paper, we propose a novel framework that embeds explicit physical modeling - geometry and illumination - into deep-learning-based shadow generation. First, given a monocular RGB image, we obtain approximate 3D geometry in the form of dense point maps and predict a single dominant light direction. These signals allow us to recover fairly accurate shadow location and shape based on the physics of shadow formation. We then integrate this physics-based initial estimate into a diffusion framework that refines the shadow into a realistic, high-fidelity appearance while ensuring consistency with scene geometry and illumination. Trained on DESOBAv2, our model produces shadows that are both visually realistic and physically coherent, outperforming existing approaches, especially in scenes with complex geometry or ambiguous lighting.
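The physics-based initial estimate the abstract describes amounts to tracing rays from each occluder point along the light direction until they hit a receiving surface. The sketch below illustrates that idea for the simplest case; the function name `cast_shadow_points`, the flat ground plane at a fixed height, and the single directional light are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def cast_shadow_points(points, light_dir, ground_z=0.0):
    """Project 3D occluder points along the light direction onto a ground plane.

    points:    (N, 3) array of occluder surface points (e.g. from a dense point map)
    light_dir: (3,) vector giving the direction light travels (z < 0 for a
               light above the ground)
    Returns an (N, 2) array: the xy footprint of the cast shadow on z = ground_z.
    """
    d = np.asarray(light_dir, dtype=float)
    d = d / np.linalg.norm(d)
    # Solve p + t*d for the t where the ray meets the plane z = ground_z
    t = (ground_z - points[:, 2]) / d[2]
    shadow = points + t[:, None] * d
    return shadow[:, :2]

# A unit cube of random surface points floating at z in [1, 2],
# lit by a directional light at 45 degrees elevation
cube = np.random.rand(1000, 3) + np.array([0.0, 0.0, 1.0])
light = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)
footprint = cast_shadow_points(cube, light)
```

With a 45-degree light, each point's shadow is displaced along x by exactly its height above the plane, which is why such an estimate can pin down shadow location and shape before any learned refinement.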
Related papers
- Joint Shadow Generation and Relighting via Light-Geometry Interaction Maps [51.82696819319878]
We propose Light-Geometry Interaction (LGI) maps, a novel representation that encodes light-aware occlusion from monocular depth. LGI captures essential light-shadow interactions reliably and accurately, computed from off-the-shelf 2.5D depth map predictions. By embedding LGI into a bridge-matching generative backbone, we reduce ambiguity and enforce physically consistent light-shadow reasoning.
arXiv Detail & Related papers (2026-02-25T11:47:26Z)
- Physics-Grounded Attached Shadow Detection Using Approximate 3D Geometry and Light Direction [46.19532675330894]
Attached shadows occur on the surface of objects where light cannot reach because of self-occlusion. We introduce a framework that jointly detects cast and attached shadows by reasoning about their mutual relationship with scene illumination and geometry. Our system consists of a shadow detection module that predicts both shadow types separately, and a light estimation module that infers the light direction from the detected shadows.
arXiv Detail & Related papers (2025-12-05T22:01:27Z)
- KPLM-STA: Physically-Accurate Shadow Synthesis for Human Relighting via Keypoint-Based Light Modeling [10.273365471847102]
We propose a novel shadow generation framework based on a Keypoints Linear Model (KPLM) and a Shadow Triangle Algorithm (STA). KPLM models articulated human bodies using nine keypoints and one bounding block, enabling physically plausible shadow projection and dynamic shading across joints. STA further improves geometric accuracy by computing shadow angles, lengths, and spatial positions through explicit geometric formulations.
arXiv Detail & Related papers (2025-11-11T12:28:42Z)
- Generalizable and Relightable Gaussian Splatting for Human Novel View Synthesis [49.67420486373202]
GRGS is a generalizable and relightable 3D Gaussian framework for high-fidelity human novel view synthesis under diverse lighting conditions. We introduce a Lighting-aware Geometry Refinement (LGR) module trained on synthetically relit data to predict accurate depth and surface normals.
arXiv Detail & Related papers (2025-05-27T17:59:47Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Controllable Shadow Generation Using Pixel Height Maps [58.59256060452418]
Physics-based shadow rendering methods require 3D geometries, which are not always available.
Deep learning-based shadow synthesis methods learn a mapping from the light information to an object's shadow without explicitly modeling the shadow geometry.
We introduce pixel height, a novel geometry representation that encodes the correlations between objects, ground, and camera pose.
arXiv Detail & Related papers (2022-07-12T08:29:51Z)
- Shadows Shed Light on 3D Objects [23.14510850163136]
We create a differentiable image formation model that allows us to infer the 3D shape of an object, its pose, and the position of a light source.
Our approach is robust to real-world images where ground-truth shadow mask is unknown.
arXiv Detail & Related papers (2022-06-17T19:58:11Z)
- Towards Learning Neural Representations from Shadows [11.60149896896201]
We present a method that learns neural scene representations from only shadows present in the scene.
Our framework is highly generalizable and can work alongside existing 3D reconstruction techniques.
arXiv Detail & Related papers (2022-03-29T23:13:41Z)
- Neural Reflectance for Shape Recovery with Shadow Handling [88.67603644930466]
This paper aims at recovering the shape of a scene with unknown, non-Lambertian, and possibly spatially-varying surface materials.
We propose a coordinate-based deep reflectance network (a multilayer perceptron) to parameterize both the unknown 3D shape and the unknown reflectance at every surface point.
This network is able to leverage the observed photometric variance and shadows on the surface, and recover both surface shape and general non-Lambertian reflectance.
arXiv Detail & Related papers (2022-03-24T07:57:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.