Event Fusion Photometric Stereo Network
- URL: http://arxiv.org/abs/2303.00308v1
- Date: Wed, 1 Mar 2023 08:13:26 GMT
- Title: Event Fusion Photometric Stereo Network
- Authors: Wonjeong Ryoo, Giljoo Nam, Jae-Sang Hyun, Sangpil Kim
- Abstract summary: We introduce a novel method to estimate the surface normals of an object in an ambient light environment using RGB and event cameras.
This is the first study to use event cameras for photometric stereo in continuous light sources and ambient light environments.
- Score: 3.0778023655689144
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce a novel method to estimate the surface normals of an
object in an ambient light environment using RGB and event cameras. Modern
photometric stereo methods rely on RGB cameras in a darkroom to avoid ambient illumination.
To alleviate the limitations of the darkroom setting, we utilize an event camera,
whose high dynamic range and low latency allow it to capture the essential light
information. This is the first study to use event cameras for
photometric stereo under continuous light sources and ambient illumination.
Additionally, we curate a new photometric stereo dataset captured by RGB and
event cameras under various ambient lights. Our proposed framework, Event
Fusion Photometric Stereo Network (EFPS-Net), estimates surface normals using
RGB frames and event signals. EFPS-Net outperforms state-of-the-art methods on
a real-world dataset with ambient lights, demonstrating the effectiveness of
incorporating additional modalities to alleviate limitations caused by ambient
illumination.
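For context, classical photometric stereo recovers per-pixel normals by solving a linear system that relates image intensities to known light directions. The NumPy sketch below shows this Lambertian least-squares baseline; it is background only, not EFPS-Net itself, whose learned fusion of RGB frames and event signals is not detailed in this abstract.

```python
import numpy as np

def lambertian_photometric_stereo(images, light_dirs):
    """Estimate per-pixel surface normals from K images taken under
    known directional lights, assuming a Lambertian surface.

    images:     (K, H, W) grayscale intensities
    light_dirs: (K, 3) unit light directions

    Solves I = L @ g per pixel in the least-squares sense, where
    g = albedo * normal.
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                           # (K, H*W)
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)                  # (H*W,)
    normals = g / np.maximum(albedo, 1e-8)              # unit normals
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)
```

Ambient illumination adds an offset to the intensities I that biases this solve, which is why classical pipelines require a darkroom; per the abstract, EFPS-Net instead compensates with the event modality.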
Related papers
- LSE-NeRF: Learning Sensor Modeling Errors for Deblured Neural Radiance Fields with RGB-Event Stereo [14.792361875841095]
We present a method for reconstructing a clear Neural Radiance Field (NeRF) even with fast camera motions.
We leverage both (blurry) RGB images and event camera data captured in a binocular configuration.
arXiv Detail & Related papers (2024-09-09T23:11:46Z)
- LIPIDS: Learning-based Illumination Planning In Discretized (Light) Space for Photometric Stereo [19.021200954913475]
Photometric stereo is a powerful method for obtaining per-pixel surface normals from differently illuminated images of an object.
Finding an optimal configuration is challenging due to the vast number of possible lighting directions.
We introduce LIPIDS - Learning-based Illumination Planning In Discretized light Space.
arXiv Detail & Related papers (2024-09-01T09:54:16Z) - Seeing Motion at Nighttime with an Event Camera [17.355331119296782]
Event cameras respond to scene dynamics with higher temporal resolution (microseconds) and higher dynamic range (120 dB) than conventional cameras.
We propose a nighttime event reconstruction network (NER-Net) whose main component is a learnable event timestamp calibration module (LETC).
We construct a paired real-light event dataset (RLED) through co-axial imaging, including 64,200 spatially and temporally aligned ground-truth images and low-light events.
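As background on how such event streams are consumed, a common preprocessing step (not necessarily NER-Net's) rasterizes the asynchronous (x, y, t, polarity) stream into signed frames before feeding a network; a minimal sketch with illustrative names:

```python
import numpy as np

def events_to_frame(xs, ys, ts, ps, t0, t1, height, width):
    """Accumulate asynchronous events into a signed event frame.

    xs, ys: pixel coordinates; ts: timestamps; ps: polarities
    in {-1, +1}. Events with t0 <= t < t1 are summed per pixel,
    so each pixel holds its net brightness change over the window.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    mask = (ts >= t0) & (ts < t1)
    # np.add.at accumulates correctly at repeated (y, x) indices
    np.add.at(frame, (ys[mask], xs[mask]), ps[mask].astype(np.float32))
    return frame
```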
arXiv Detail & Related papers (2024-04-18T03:58:27Z)
- NIR-Assisted Image Denoising: A Selective Fusion Approach and A Real-World Benchmark Dataset [53.79524776100983]
Leveraging near-infrared (NIR) images to assist visible RGB image denoising shows promise for addressing the limits of RGB-only denoising.
Existing works still struggle to exploit NIR information effectively for real-world image denoising.
We propose an efficient Selective Fusion Module (SFM), which can be plugged into advanced denoising networks.
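The summary does not specify the SFM's internals; one common realization of selective fusion is a learned per-pixel gate that blends the two modalities. The PyTorch sketch below is a generic gated-fusion block under that assumption, not the paper's exact module:

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Illustrative selective fusion: a 1x1-conv gate decides, per
    pixel and channel, how much NIR guidance to blend into the RGB
    features. A generic stand-in for a selective fusion module."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat: torch.Tensor, nir_feat: torch.Tensor):
        # g in (0, 1): gate computed from both modalities, shape (B, C, H, W)
        g = self.gate(torch.cat([rgb_feat, nir_feat], dim=1))
        return g * nir_feat + (1.0 - g) * rgb_feat
```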
arXiv Detail & Related papers (2024-04-12T14:54:26Z)
- Complementing Event Streams and RGB Frames for Hand Mesh Reconstruction [51.87279764576998]
We propose EvRGBHand -- the first approach for 3D hand mesh reconstruction with an event camera and an RGB camera compensating for each other.
EvRGBHand can tackle overexposure and motion blur issues in RGB-based HMR and foreground scarcity and background overflow issues in event-based HMR.
arXiv Detail & Related papers (2024-03-12T06:04:50Z)
- Chasing Day and Night: Towards Robust and Efficient All-Day Object Detection Guided by an Event Camera [8.673063170884591]
EOLO is a novel object detection framework that achieves robust and efficient all-day detection by fusing both RGB and event modalities.
Our EOLO framework is built on a lightweight spiking neural network (SNN) to efficiently leverage the asynchronous property of events.
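EOLO's SNN internals are not given in this summary; as reference for the building block such networks typically use, here is a minimal leaky integrate-and-fire (LIF) neuron with illustrative parameters:

```python
import numpy as np

def lif_neuron(inputs, tau=0.9, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron over an input current train.

    inputs: (T,) float input per timestep. The membrane potential
    leaks by a factor tau each step, integrates the input, emits a
    spike (1) on crossing the threshold v_th, then resets.
    """
    v = 0.0
    spikes = np.zeros(len(inputs), dtype=np.float32)
    for t, x in enumerate(inputs):
        v = tau * v + x
        if v >= v_th:
            spikes[t] = 1.0
            v = v_reset
    return spikes
```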
arXiv Detail & Related papers (2023-09-17T15:14:01Z)
- Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z)
- WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z)
- Seeing Through The Noisy Dark: Toward Real-world Low-Light Image Enhancement and Denoising [125.56062454927755]
Real-world low-light environments usually suffer from low visibility and heavy noise due to insufficient light or hardware limitations.
We propose a novel end-to-end method termed Real-world Low-light Enhancement & Denoising Network (RLED-Net).
arXiv Detail & Related papers (2022-10-02T14:57:23Z)
- Spatially and color consistent environment lighting estimation using deep neural networks for mixed reality [1.1470070927586016]
This paper presents a CNN-based model to estimate complex lighting for mixed reality environments.
We propose a new CNN architecture that takes an RGB image as input and estimates the environment lighting in real time.
Experiments show that the CNN architecture can predict the environment lighting with an average mean squared error (MSE) of 7.85e-04 when comparing SH lighting coefficients.
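For reference, that error is an MSE over spherical harmonics (SH) lighting coefficients. The sketch below shows the standard second-order (9-term) real SH basis, the lighting it reconstructs at a surface normal, and the coefficient-space MSE; it is illustrative, not the paper's code:

```python
import numpy as np

def sh_basis(n):
    """Real spherical harmonics basis up to order 2 (9 terms),
    evaluated at a unit normal n = (x, y, z)."""
    x, y, z = n
    return np.array([
        0.282095,                                    # l = 0
        0.488603 * y, 0.488603 * z, 0.488603 * x,    # l = 1
        1.092548 * x * y, 1.092548 * y * z,          # l = 2
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ])

def sh_lighting(coeffs, n):
    """Lighting reconstructed at normal n from 9 SH coefficients."""
    return coeffs @ sh_basis(n)

def sh_mse(pred_coeffs, gt_coeffs):
    """MSE between predicted and ground-truth SH coefficient vectors,
    the metric behind the ~7.85e-04 figure quoted above."""
    return float(np.mean((pred_coeffs - gt_coeffs) ** 2))
```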
arXiv Detail & Related papers (2021-08-17T23:03:55Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.