UnReflectAnything: RGB-Only Highlight Removal by Rendering Synthetic Specular Supervision
- URL: http://arxiv.org/abs/2512.09583v2
- Date: Thu, 11 Dec 2025 15:21:30 GMT
- Title: UnReflectAnything: RGB-Only Highlight Removal by Rendering Synthetic Specular Supervision
- Authors: Alberto Rota, Mert Kiray, Mert Asim Karaoglu, Patrick Ruhkamp, Elena De Momi, Nassir Navab, Benjamin Busam
- Abstract summary: We present UnReflectAnything, an RGB-only framework that removes highlights from a single image. It predicts a highlight map together with a reflection-free diffuse reconstruction. It generalizes across natural and surgical domains where non-Lambertian surfaces and non-uniform lighting create severe highlights.
- Score: 51.72020507506023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Specular highlights distort appearance, obscure texture, and hinder geometric reasoning in both natural and surgical imagery. We present UnReflectAnything, an RGB-only framework that removes highlights from a single image by predicting a highlight map together with a reflection-free diffuse reconstruction. The model uses a frozen vision transformer encoder to extract multi-scale features, a lightweight head to localize specular regions, and a token-level inpainting module that restores corrupted feature patches before producing the final diffuse image. To overcome the lack of paired supervision, we introduce a Virtual Highlight Synthesis pipeline that renders physically plausible specularities using monocular geometry, Fresnel-aware shading, and randomized lighting which enables training on arbitrary RGB images with correct geometric structure. UnReflectAnything generalizes across natural and surgical domains where non-Lambertian surfaces and non-uniform lighting create severe highlights and it achieves competitive performance with state-of-the-art results on several benchmarks. Project Page: https://alberto-rota.github.io/UnReflectAnything/
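The abstract's Virtual Highlight Synthesis idea — compositing physically plausible specularities onto arbitrary RGB images using monocular normals and Fresnel-aware shading — can be illustrated with a minimal sketch. This is not the authors' pipeline; the function names, the Blinn-Phong lobe, and the Schlick Fresnel approximation are illustrative assumptions about what "Fresnel-aware shading" could look like:

```python
import numpy as np

def schlick_fresnel(cos_theta, f0=0.04):
    """Schlick's approximation of the Fresnel reflectance term."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def synthesize_highlight(rgb, normals, light_dir, view_dir,
                         shininess=64.0, strength=0.8):
    """Composite a synthetic specular layer onto an RGB image.

    rgb:      (H, W, 3) float image in [0, 1]
    normals:  (H, W, 3) unit surface normals (e.g. from a monocular predictor)
    light_dir, view_dir: (3,) unit vectors
    Returns the corrupted image and the highlight map used as supervision.
    """
    h = light_dir + view_dir
    h = h / np.linalg.norm(h)                        # Blinn-Phong half vector
    n_dot_h = np.clip(normals @ h, 0.0, 1.0)         # specular lobe alignment
    n_dot_v = np.clip(normals @ view_dir, 0.0, 1.0)  # grazing-angle term
    spec = strength * schlick_fresnel(n_dot_v) * n_dot_h ** shininess
    highlight = spec[..., None] * np.ones(3)         # white specular layer
    corrupted = np.clip(rgb + highlight, 0.0, 1.0)
    return corrupted, highlight
```

Randomizing `light_dir`, `shininess`, and `strength` per image would yield (corrupted, highlight, clean) triplets, giving paired supervision without any real highlight-free captures — the gap the paper's pipeline is designed to fill.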
Related papers
- Reflections Unlock: Geometry-Aware Reflection Disentanglement in 3D Gaussian Splatting for Photorealistic Scenes Rendering [51.223347330075576]
Ref-Unlock is a novel geometry-aware reflection modeling framework based on 3D Gaussian Splatting. Our approach employs a dual-branch representation with high-order spherical harmonics to capture high-frequency reflective details. Our method thus offers an efficient and generalizable solution for realistic rendering of reflective scenes.
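The spherical-harmonics representation mentioned above is the standard way 3D Gaussian Splatting renderers model view-dependent color: per-Gaussian SH coefficients are evaluated at the viewing direction. A minimal degree-2 sketch (the basis constants are the standard real-SH values; the function name is illustrative, not Ref-Unlock's API):

```python
import numpy as np

# Real spherical-harmonics basis constants, degrees 0-2.
SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199
SH_C2 = [1.0925484305920792, -1.0925484305920792,
         0.31539156525252005, -1.0925484305920792,
         0.5462742152960396]

def eval_sh_deg2(coeffs, d):
    """Evaluate degree-2 real SH at unit view direction d.

    coeffs: (9, 3) SH coefficients, one row per basis function, RGB columns
    d:      (3,) unit direction
    Returns the (3,) view-dependent RGB value.
    """
    x, y, z = d
    basis = np.array([
        SH_C0,                                   # l=0 (view-independent DC)
        -SH_C1 * y, SH_C1 * z, -SH_C1 * x,       # l=1
        SH_C2[0] * x * y, SH_C2[1] * y * z,      # l=2
        SH_C2[2] * (2.0 * z * z - x * x - y * y),
        SH_C2[3] * x * z, SH_C2[4] * (x * x - y * y),
    ])
    return basis @ coeffs
```

Higher-order bands follow the same pattern; they add the high-frequency angular variation needed to represent sharp reflective detail.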
arXiv Detail & Related papers (2025-07-08T15:45:08Z)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- Monocular Identity-Conditioned Facial Reflectance Reconstruction [71.90507628715388]
Existing methods rely on a large amount of light-stage captured data to learn facial reflectance models.
We learn the reflectance prior in image space rather than UV space and present a framework named ID2Reflectance.
Our framework can directly estimate the reflectance maps of a single image while using limited reflectance data for training.
arXiv Detail & Related papers (2024-03-30T09:43:40Z)
- NeRRF: 3D Reconstruction and View Synthesis for Transparent and Specular Objects with Neural Refractive-Reflective Fields [23.099784003061618]
We introduce a refractive-reflective field into neural radiance fields (NeRF).
NeRF uses straight rays and fails to deal with complicated light path changes caused by refraction and reflection.
We propose a virtual cone supersampling technique to achieve efficient and effective anti-aliasing.
arXiv Detail & Related papers (2023-09-22T17:59:12Z)
- NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images [44.1333444097976]
We present a neural rendering-based method called NeRO for reconstructing the geometry and the BRDF of reflective objects from multiview images captured in an unknown environment.
arXiv Detail & Related papers (2023-05-27T07:40:07Z)
- Relightify: Relightable 3D Faces from a Single Image via Diffusion Models [86.3927548091627]
We present the first approach to use diffusion models as a prior for highly accurate 3D facial BRDF reconstruction from a single image.
In contrast to existing methods, we directly acquire the observed texture from the input image, resulting in more faithful and consistent estimation.
arXiv Detail & Related papers (2023-05-10T11:57:49Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.