PhaSR: Generalized Image Shadow Removal with Physically Aligned Priors
- URL: http://arxiv.org/abs/2601.17470v2
- Date: Sat, 31 Jan 2026 02:36:42 GMT
- Title: PhaSR: Generalized Image Shadow Removal with Physically Aligned Priors
- Authors: Chia-Ming Lee, Yu-Fan Lin, Yu-Jou Hsiao, Jing-Hui Jung, Yu-Lun Liu, Chih-Chung Hsu
- Abstract summary: We propose PhaSR (Physically Aligned Shadow Removal), addressing this through dual-level prior alignment. Experiments show competitive performance in shadow removal with lower complexity and generalization to ambient lighting.
- Score: 13.290464696196366
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Shadow removal under diverse lighting conditions requires disentangling illumination from intrinsic reflectance, a challenge compounded when physical priors are not properly aligned. We propose PhaSR (Physically Aligned Shadow Removal), addressing this through dual-level prior alignment to enable robust performance from single-light shadows to multi-source ambient lighting. First, Physically Aligned Normalization (PAN) performs closed-form illumination correction via Gray-world normalization, log-domain Retinex decomposition, and dynamic range recombination, suppressing chromatic bias. Second, Geometric-Semantic Rectification Attention (GSRA) extends differential attention to cross-modal alignment, harmonizing depth-derived geometry with DINO-v2 semantic embeddings to resolve modal conflicts under varying illumination. Experiments show competitive performance in shadow removal with lower complexity and generalization to ambient lighting where traditional methods fail under multi-source illumination. Our source code is available at https://github.com/ming053l/PhaSR.
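The closed-form PAN stage described in the abstract (Gray-world normalization, log-domain Retinex decomposition, dynamic range recombination) can be sketched as below. This is a minimal illustration, not the paper's exact formulation: the box-filter illumination estimate and the `gain` parameter are assumptions standing in for whatever smoothing and recombination PhaSR actually uses.

```python
import numpy as np

def gray_world_normalize(img):
    """Scale each RGB channel so its mean matches the global mean,
    suppressing chromatic bias (gray-world assumption)."""
    channel_means = img.reshape(-1, 3).mean(axis=0)
    return img * (channel_means.mean() / (channel_means + 1e-8))

def box_blur(img, k=15):
    """Separable box filter as a stand-in low-pass illumination estimator."""
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), 0, img)
    return np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), 1, blurred)

def log_retinex_decompose(img):
    """Split the log-image into a smooth illumination layer
    and a reflectance-detail residual."""
    log_img = np.log1p(img)
    illumination = box_blur(log_img)
    return illumination, log_img - illumination

def recombine(illumination, reflectance, gain=1.0):
    """Recombine in the log domain; gain < 1 flattens illumination
    variation while preserving reflectance detail."""
    return np.clip(np.expm1(gain * illumination + reflectance), 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
balanced = gray_world_normalize(img)
L, R = log_retinex_decompose(balanced)
restored = recombine(L, R)  # gain=1.0 reconstructs the balanced image
```

With `gain=1.0` the decomposition round-trips losslessly; lowering `gain` would attenuate the smooth illumination component (e.g. a shadow gradient) while leaving the reflectance residual intact, which is the intuition behind Retinex-style shadow correction.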
Related papers
- Joint Shadow Generation and Relighting via Light-Geometry Interaction Maps [51.82696819319878]
We propose Light-Geometry Interaction maps, a novel representation that encodes light-aware occlusion from monocular depth. LGI captures essential light-shadow interactions reliably and accurately, computed from off-the-shelf 2.5D depth map predictions. By embedding LGI into a bridge-matching generative backbone, we reduce ambiguity and enforce physically consistent light-shadow reasoning.
arXiv Detail & Related papers (2026-02-25T11:47:26Z) - Multi-scale Attention-Guided Intrinsic Decomposition and Rendering Pass Prediction for Facial Images [0.0]
This paper introduces MAGINet, a Multi-scale Attention-Guided Intrinsics Network that predicts a light-normalized diffuse albedo map from a single RGB portrait. The pipeline achieves state-of-the-art performance for diffuse albedo estimation and demonstrates significantly improved fidelity for the complete rendering stack compared to prior methods.
arXiv Detail & Related papers (2025-12-18T13:23:49Z) - GLOW: Global Illumination-Aware Inverse Rendering of Indoor Scenes Captured with Dynamic Co-Located Light & Camera [18.90141473604964]
Inverse rendering of indoor scenes remains challenging due to the ambiguity between reflectance and lighting. We present GLOW, a Global Illumination-aware Inverse Rendering framework designed to address these challenges.
arXiv Detail & Related papers (2025-11-28T03:24:12Z) - MaterialRefGS: Reflective Gaussian Splatting with Multi-view Consistent Material Inference [83.38607296779423]
We show that multi-view consistent material inference with more physically-based environment modeling is key to learning accurate reflections with Gaussian Splatting. Our method faithfully recovers both illumination and geometry, achieving state-of-the-art rendering quality in novel view synthesis.
arXiv Detail & Related papers (2025-10-13T13:29:20Z) - SAIGFormer: A Spatially-Adaptive Illumination-Guided Network for Low-Light Image Enhancement [58.79901582809091]
Recent Transformer-based low-light enhancement methods have made promising progress in recovering global illumination. We present a Spatially-Adaptive Illumination-Guided Transformer framework that enables accurate illumination restoration.
arXiv Detail & Related papers (2025-07-21T11:38:56Z) - Light of Normals: Unified Feature Representation for Universal Photometric Stereo [69.95514862547174]
Current encoders cannot guarantee that illumination and normal information are decoupled. We introduce LINO UniPS with two key components: (i) Light Register Tokens with light alignment supervision to aggregate point, direction, and environment lights. We also introduce PS-Verse, a large-scale synthetic dataset graded by geometric complexity and lighting diversity.
arXiv Detail & Related papers (2025-06-23T17:53:11Z) - GS-ID: Illumination Decomposition on Gaussian Splatting via Adaptive Light Aggregation and Diffusion-Guided Material Priors [5.7153963416911]
Gaussian Splatting (GS) has emerged as an effective representation for rendering, but the underlying geometry, material, and lighting remain entangled. We propose GS-ID, an end-to-end framework for illumination decomposition. Experiments demonstrate the effectiveness of GS-ID for downstream applications such as relighting and scene composition.
arXiv Detail & Related papers (2024-08-16T04:38:31Z) - Towards Image Ambient Lighting Normalization [47.42834070783831]
Ambient Lighting Normalization (ALN) enables the study of interactions between shadows, unifying image restoration and shadow removal in a broader context.
For benchmarking, we select various mainstream methods and rigorously evaluate them on Ambient6K.
Experiments show that IFBlend achieves SOTA scores on Ambient6K and exhibits competitive performance on conventional shadow removal benchmarks.
arXiv Detail & Related papers (2024-03-27T16:20:55Z) - Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach to diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.