LSE-NeRF: Learning Sensor Modeling Errors for Deblured Neural Radiance Fields with RGB-Event Stereo
- URL: http://arxiv.org/abs/2409.06104v1
- Date: Mon, 9 Sep 2024 23:11:46 GMT
- Title: LSE-NeRF: Learning Sensor Modeling Errors for Deblured Neural Radiance Fields with RGB-Event Stereo
- Authors: Wei Zhi Tang, Daniel Rebain, Konstantinos G. Derpanis, Kwang Moo Yi
- Abstract summary: We present a method for reconstructing a clear Neural Radiance Field (NeRF) even with fast camera motions.
We leverage both (blurry) RGB images and event camera data captured in a binocular configuration.
- Score: 14.792361875841095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a method for reconstructing a clear Neural Radiance Field (NeRF) even with fast camera motions. To address blur artifacts, we leverage both (blurry) RGB images and event camera data captured in a binocular configuration. Importantly, when reconstructing our clear NeRF, we model the camera imperfections that arise from the simple pinhole camera model as learned embeddings for each camera measurement, and further learn a mapper that connects event camera measurements with RGB data. As no previous dataset exists for our binocular setting, we introduce an event camera dataset with captures from a 3D-printed stereo configuration between RGB and event cameras. Empirically, we evaluate on our introduced dataset and on EVIMOv2, and show that our method leads to improved reconstructions. Our code and dataset are available at https://github.com/ubc-vision/LSENeRF.
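The abstract names two learned components without detail: per-measurement embeddings that absorb pinhole-model imperfections, and a mapper from event measurements to the RGB domain. As a rough illustration only, here is a minimal PyTorch sketch of how such components could be wired into a NeRF-style field; every name, dimension, and architecture choice below is an assumption, not the authors' implementation (see their repository for the real code).

```python
import torch
import torch.nn as nn

class SensorErrorNeRF(nn.Module):
    """Hypothetical sketch of the two components named in the abstract:
    per-measurement embeddings and an event-to-RGB mapper. Architecture
    and dimensions are illustrative assumptions, not the paper's code."""

    def __init__(self, num_measurements: int, embed_dim: int = 16, hidden: int = 256):
        super().__init__()
        # One learned embedding per camera measurement (RGB frame or event
        # slice), intended to soak up deviations from the ideal pinhole model.
        self.measurement_embed = nn.Embedding(num_measurements, embed_dim)
        # Toy radiance field: 3D point + embedding -> (RGB, density).
        self.field = nn.Sequential(
            nn.Linear(3 + embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )
        # Mapper connecting event measurements to the RGB domain, here a
        # small MLP from per-pixel log-intensity change to an RGB offset.
        self.event_to_rgb = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, points: torch.Tensor, measurement_ids: torch.Tensor):
        emb = self.measurement_embed(measurement_ids)        # (N, embed_dim)
        out = self.field(torch.cat([points, emb], dim=-1))   # (N, 4)
        rgb = out[..., :3].sigmoid()                         # color in [0, 1]
        density = out[..., 3:].relu()                        # non-negative
        return rgb, density
```

In a full system the embeddings would condition the rendering of each (possibly blurry) measurement and the mapper would reconcile event and RGB photometric responses; this sketch only fixes the data flow.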
Related papers
- Dynamic EventNeRF: Reconstructing General Dynamic Scenes from Multi-view Event Cameras [69.65147723239153]
Volumetric reconstruction of dynamic scenes is an important problem in computer vision.
It is especially challenging in poor lighting and with fast motion.
We propose the first method to temporally reconstruct a scene from sparse multi-view event streams and sparse RGB frames.
arXiv Detail & Related papers (2024-12-09T18:56:18Z)
- SpikeNVS: Enhancing Novel View Synthesis from Blurry Images via Spike Camera [78.20482568602993]
Conventional RGB cameras are susceptible to motion blur.
Neuromorphic cameras like event and spike cameras inherently capture more comprehensive temporal information.
Our design can enhance novel view synthesis across NeRF and 3DGS.
arXiv Detail & Related papers (2024-04-10T03:31:32Z)
- Complementing Event Streams and RGB Frames for Hand Mesh Reconstruction [51.87279764576998]
We propose EvRGBHand -- the first approach for 3D hand mesh reconstruction with an event camera and an RGB camera compensating for each other.
EvRGBHand can tackle overexposure and motion blur issues in RGB-based HMR and foreground scarcity and background overflow issues in event-based HMR.
arXiv Detail & Related papers (2024-03-12T06:04:50Z)
- TUMTraf Event: Calibration and Fusion Resulting in a Dataset for Roadside Event-Based and RGB Cameras [14.57694345706197]
Event-based cameras are predestined for Intelligent Transportation Systems (ITS).
They provide very high temporal resolution and dynamic range, which can eliminate motion blur and improve detection performance at night.
However, event-based images lack color and texture compared to images from a conventional RGB camera.
arXiv Detail & Related papers (2024-01-16T16:25:37Z)
- IL-NeRF: Incremental Learning for Neural Radiance Fields with Camera Pose Alignment [12.580584725107173]
We propose IL-NeRF, a novel framework for incremental NeRF training.
We show that IL-NeRF handles incremental NeRF training and outperforms the baselines by up to 54.04% in rendering quality.
arXiv Detail & Related papers (2023-12-10T04:12:27Z)
- Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
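For readers unfamiliar with the event stream such methods consume: under the standard idealized event-camera model (general background, not a formula from the entry above), a pixel emits an event whenever its log-intensity has changed by a contrast threshold \(C\) since the last event at that pixel:

```latex
% Idealized event generation model (textbook background; C is the
% contrast threshold, p the event polarity).
\[
  \log I(\mathbf{x}, t) - \log I\!\big(\mathbf{x}, t_{\mathrm{prev}}(\mathbf{x})\big) = p\,C,
  \qquad p \in \{-1, +1\}.
\]
```

NeRF-style methods typically supervise rendered log-intensity differences between two times against the event polarities accumulated over that interval.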
arXiv Detail & Related papers (2023-09-15T14:19:36Z)
- Combining HoloLens with Instant-NeRFs: Advanced Real-Time 3D Mobile Mapping [4.619828919345114]
We train a Neural Radiance Field (NeRF) as a neural scene representation in real-time with the acquired data from the HoloLens.
After the data stream ends, the training is stopped and the 3D reconstruction is initiated, which extracts a point cloud of the scene.
Our method of 3D reconstruction outperforms grid point sampling with NeRFs by multiple orders of magnitude.
arXiv Detail & Related papers (2023-04-27T16:07:21Z)
- Event Fusion Photometric Stereo Network [3.0778023655689144]
We introduce a novel method to estimate the surface normals of an object in an ambient light environment using RGB and event cameras.
This is the first study to use event cameras for photometric stereo under continuous light sources and ambient lighting.
arXiv Detail & Related papers (2023-03-01T08:13:26Z)
- Self-Calibrating Neural Radiance Fields [68.64327335620708]
We jointly learn the geometry of the scene and the accurate camera parameters without any calibration objects.
Our camera model consists of a pinhole model, a fourth order radial distortion, and a generic noise model that can learn arbitrary non-linear camera distortions.
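The camera model described above (pinhole plus fourth-order radial distortion plus a learned noise term) can be written in a generic textbook form as follows; \(k_1, k_2\) and the residual \(\mathbf{n}\) stand in for the paper's learned parameters, so this is a sketch of the model class, not the paper's exact parameterization:

```latex
% Pinhole projection with 4th-order radial distortion and a generic
% learned residual n (illustrative form, not the paper's exact notation).
\[
  \begin{aligned}
    (x, y)     &= \left(\frac{X}{Z}, \frac{Y}{Z}\right), \qquad r^2 = x^2 + y^2,\\
    (x_d, y_d) &= \big(1 + k_1 r^2 + k_2 r^4\big)\,(x, y),\\
    \mathbf{u} &= \big(f_x\, x_d + c_x,\; f_y\, y_d + c_y\big) + \mathbf{n}.
  \end{aligned}
\]
```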
arXiv Detail & Related papers (2021-08-31T13:34:28Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
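The "inversion" in iNeRF is essentially gradient descent on a camera pose against a photometric loss through a frozen NeRF. Below is a minimal, self-contained PyTorch sketch of that loop; `render` stands in for any differentiable NeRF renderer, and the pose parameterization is an illustrative choice, not iNeRF's exact one.

```python
import torch

def skew(w: torch.Tensor) -> torch.Tensor:
    """3x3 skew-symmetric matrix from a 3-vector."""
    zero = torch.zeros((), dtype=w.dtype)
    wx, wy, wz = w[0], w[1], w[2]
    return torch.stack([
        torch.stack([zero, -wz, wy]),
        torch.stack([wz, zero, -wx]),
        torch.stack([-wy, wx, zero]),
    ])

def apply_delta(pose: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    """Perturb a 4x4 pose by a 6-vector (axis-angle rotation, translation)."""
    R = torch.matrix_exp(skew(delta[:3]))                   # rotation update
    top = torch.cat([R, delta[3:].reshape(3, 1)], dim=1)    # (3, 4)
    bottom = torch.tensor([[0.0, 0.0, 0.0, 1.0]], dtype=pose.dtype)
    return torch.cat([top, bottom], dim=0) @ pose

def invert_nerf(render, target_image, pose_init, steps=300, lr=1e-2):
    """Estimate a camera pose by gradient descent through a frozen,
    differentiable NeRF renderer (a sketch of the iNeRF idea, not the
    authors' implementation)."""
    delta = torch.zeros(6, requires_grad=True)  # (rotation, translation)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        rendered = render(apply_delta(pose_init, delta))    # (H, W, 3)
        loss = torch.mean((rendered - target_image) ** 2)   # photometric loss
        loss.backward()
        optimizer.step()
    return apply_delta(pose_init, delta.detach())
```

The paper additionally studies which rays to sample (e.g., around interest regions) to keep each step cheap; the sketch above uses a full-image loss for simplicity.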