Spike-NeRF: Neural Radiance Field Based On Spike Camera
- URL: http://arxiv.org/abs/2403.16410v1
- Date: Mon, 25 Mar 2024 04:05:23 GMT
- Authors: Yijia Guo, Yuanxi Bai, Liwen Hu, Mianzhi Liu, Ziyi Guo, Lei Ma, Tiejun Huang
- Abstract summary: We propose Spike-NeRF, the first Neural Radiance Field derived from spike data.
Instead of the simultaneous multi-view images that NeRF requires, the inputs of Spike-NeRF are continuous spike streams captured by a moving spike camera within a very short time.
Our results demonstrate that Spike-NeRF produces more visually appealing results than existing methods and our proposed baseline in high-speed scenes.
- Score: 24.829344089740303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a neuromorphic sensor with high temporal resolution, spike cameras offer notable advantages over traditional cameras in high-speed vision applications such as high-speed optical flow estimation, depth estimation, and object tracking. Inspired by the success of the spike camera, we propose Spike-NeRF, the first Neural Radiance Field derived from spike data, to achieve 3D reconstruction and novel viewpoint synthesis of high-speed scenes. Instead of the simultaneous multi-view images that NeRF requires, the inputs of Spike-NeRF are continuous spike streams captured by a moving spike camera within a very short time. To reconstruct a correct and stable 3D scene from high-frequency but unstable spike data, we devise spike masks along with a distinctive loss function. We evaluate our method qualitatively and numerically on several challenging synthetic scenes generated by Blender with the spike camera simulator. Our results demonstrate that Spike-NeRF produces more visually appealing results than existing methods and our proposed baseline in high-speed scenes. Our code and data will be released soon.
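To make the abstract's input modality concrete, below is a minimal sketch of the integrate-and-fire principle a spike camera is based on, plus a simple firing-rate reconstruction. This is not the authors' code; the threshold value, constant-intensity assumption, and function names are illustrative only.

```python
import numpy as np

def simulate_spikes(intensity, n_steps, threshold=255.0):
    """Toy integrate-and-fire spike generation for a pixel grid.

    intensity: 2D array of per-pixel light intensity (arbitrary units per step).
    Returns a (n_steps, H, W) binary spike stream.
    """
    acc = np.zeros_like(intensity, dtype=np.float64)
    spikes = np.zeros((n_steps,) + intensity.shape, dtype=np.uint8)
    for t in range(n_steps):
        acc += intensity            # photons accumulate every time step
        fired = acc >= threshold    # pixels whose accumulator crossed the threshold
        spikes[t][fired] = 1
        acc[fired] -= threshold     # reset by subtraction, keeping the residue
    return spikes

def reconstruct_firing_rate(spikes, threshold=255.0):
    """Estimate intensity from a spike window as mean firing rate * threshold."""
    return spikes.mean(axis=0) * threshold
```

A pixel receiving intensity 50 per step with threshold 255 fires roughly every 5 steps, so the windowed firing rate recovers an intensity estimate close to 50; a moving camera makes these streams high-frequency but unstable, which is the difficulty the paper's spike masks and loss function target.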
Related papers
- SpikeGS: Learning 3D Gaussian Fields from Continuous Spike Stream [20.552076533208687]
A spike camera is a specialized high-speed visual sensor that offers advantages such as high temporal resolution and high dynamic range.
We introduce SpikeGS, a method to learn 3D Gaussian fields solely from spike streams.
Our method can reconstruct view synthesis results with fine texture details from a continuous spike stream captured by a moving spike camera.
arXiv Detail & Related papers (2024-09-23T16:28:41Z)
- SpikeGS: 3D Gaussian Splatting from Spike Streams with High-Speed Camera Motion [46.23575738669567]
Novel View Synthesis plays a crucial role by generating new 2D renderings from multi-view images of 3D scenes.
High-frame-rate dense 3D reconstruction emerges as a vital technique, enabling detailed and accurate modeling of real-world objects or scenes.
Spike cameras, a novel type of neuromorphic sensor, continuously record scenes with an ultra-high temporal resolution.
arXiv Detail & Related papers (2024-07-14T03:19:30Z) - SpikeNVS: Enhancing Novel View Synthesis from Blurry Images via Spike Camera [78.20482568602993]
Conventional RGB cameras are susceptible to motion blur.
Neuromorphic cameras like event and spike cameras inherently capture more comprehensive temporal information.
Our design can enhance novel view synthesis across NeRF and 3DGS.
arXiv Detail & Related papers (2024-04-10T03:31:32Z) - SpikeNeRF: Learning Neural Radiance Fields from Continuous Spike Stream [26.165424006344267]
Spike cameras offer distinct advantages over standard cameras.
Existing approaches reliant on spike cameras often assume optimal illumination.
We introduce SpikeNeRF, the first work that derives a NeRF-based volumetric scene representation from spike camera data.
arXiv Detail & Related papers (2024-03-17T13:51:25Z) - Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion [67.15935067326662]
Event cameras offer low power, low latency, high temporal resolution and high dynamic range.
NeRF is seen as the leading candidate for efficient and effective scene representation.
We propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras.
arXiv Detail & Related papers (2023-09-15T17:52:08Z) - Spike Stream Denoising via Spike Camera Simulation [64.11994763727631]
We propose a systematic noise model for spike camera based on its unique circuit.
The first benchmark for spike stream denoising is proposed, which includes paired clear and noisy spike streams.
Experiments show that DnSS has promising performance on the proposed benchmark.
arXiv Detail & Related papers (2023-04-06T14:59:48Z) - E-NeRF: Neural Radiance Fields from a Moving Event Camera [83.91656576631031]
Estimating neural radiance fields (NeRFs) from ideal images has been extensively studied in the computer vision community.
We present E-NeRF, the first method which estimates a volumetric scene representation in the form of a NeRF from a fast-moving event camera.
arXiv Detail & Related papers (2022-08-24T04:53:32Z) - Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual
Fly-Throughs [54.41204057689033]
We explore how to leverage neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures, spanning buildings or even multiple city blocks, collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
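The Mega-NeRF summary above mentions partitioning training pixels across NeRF submodules. As a rough illustration of that spatial-partitioning idea (not Mega-NeRF's actual algorithm; the nearest-centroid rule and function names are assumptions), each 3D sample point can be routed to the submodule whose centroid is closest:

```python
import numpy as np

def assign_to_submodules(points, centroids):
    """Assign each 3D sample point to the nearest submodule centroid.

    points: (N, 3) array of sample positions.
    centroids: (K, 3) array of submodule centers.
    Returns an (N,) array of submodule indices in [0, K).
    """
    # Pairwise squared distances via broadcasting: shape (N, K)
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```

Routing by nearest centroid lets each submodule be trained in parallel on only the rays that intersect its region, which is the property the clustering in the paper exploits.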
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.