On-the-go Reflectance Transformation Imaging with Ordinary Smartphones
- URL: http://arxiv.org/abs/2210.09821v1
- Date: Tue, 18 Oct 2022 13:00:22 GMT
- Title: On-the-go Reflectance Transformation Imaging with Ordinary Smartphones
- Authors: Mara Pistellato and Filippo Bergamasco
- Abstract summary: Reflectance Transformation Imaging (RTI) is a popular technique that allows the recovery of per-pixel reflectance information.
We propose a novel RTI method that can be carried out by recording videos with two ordinary smartphones.
- Score: 5.381004207943598
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Reflectance Transformation Imaging (RTI) is a popular technique that allows
the recovery of per-pixel reflectance information by capturing an object under
different light conditions. This can be later used to reveal surface details
and interactively relight the subject. Such a process, however, typically
requires dedicated hardware setups to recover the light direction from multiple
locations, making it tedious to perform outside the lab.
We propose a novel RTI method that can be carried out by recording videos
with two ordinary smartphones. The flash LED of one device is used to
illuminate the subject while the other captures the reflectance. Since the LED
is mounted close to the camera lens, we can infer the light direction for
thousands of images by freely moving the illuminating device while observing a
fiducial marker surrounding the subject. To deal with such an amount of data, we
propose a neural relighting model that reconstructs object appearance for
arbitrary light directions from extremely compact reflectance distribution data
compressed via Principal Component Analysis (PCA). Experiments show that the
proposed technique can easily be performed in the field, with a resulting RTI
model that outperforms state-of-the-art approaches involving dedicated
hardware setups.
Related papers
- IRIS: Inverse Rendering of Indoor Scenes from Low Dynamic Range Images [32.83096814910201]
We present a method that recovers the physically based material properties and lighting of a scene from multi-view, low-dynamic-range (LDR) images.
Our method outperforms existing methods taking LDR images as input, and allows for highly realistic relighting and object insertion.
arXiv Detail & Related papers (2024-01-23T18:59:56Z)
- Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Flare artifacts can affect image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution to improve the performance of lens flare removal by revisiting the ISP and designing a more reliable light-source recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z)
- Factored-NeuS: Reconstructing Surfaces, Illumination, and Materials of Possibly Glossy Objects [46.04357263321969]
We develop a method that recovers the surface, materials, and illumination of a scene from its posed multi-view images.
It does not require any additional data and can handle glossy objects or bright lighting.
arXiv Detail & Related papers (2023-05-29T07:44:19Z)
- EverLight: Indoor-Outdoor Editable HDR Lighting Estimation [9.443561684223514]
We propose a method which combines a parametric light model with 360° panoramas, ready to use as HDRI in rendering engines.
In our representation, users can easily edit light direction, intensity, number, etc. to affect shading, while obtaining rich, complex reflections that blend seamlessly with the edits.
arXiv Detail & Related papers (2023-04-26T00:20:59Z)
- WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z)
- Bridge the Vision Gap from Field to Command: A Deep Learning Network Enhancing Illumination and Details [17.25188250076639]
We propose a two-stream framework named NEID to tune up the brightness and enhance the details simultaneously.
The proposed method consists of three parts: Light Enhancement (LE), Detail Refinement (DR) and Feature Fusing (FF) module.
arXiv Detail & Related papers (2021-01-20T09:39:57Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.