SpikeCV: Open a Continuous Computer Vision Era
- URL: http://arxiv.org/abs/2303.11684v2
- Date: Tue, 28 May 2024 10:26:16 GMT
- Title: SpikeCV: Open a Continuous Computer Vision Era
- Authors: Yajing Zheng, Jiyuan Zhang, Rui Zhao, Jianhao Ding, Shiyan Chen, Ruiqin Xiong, Zhaofei Yu, Tiejun Huang
- Abstract summary: SpikeCV is a new open-source computer vision platform for the spike camera.
The spike camera is a neuromorphic visual sensor that has developed rapidly in recent years.
SpikeCV provides a variety of ultra-high-speed scene datasets, hardware interfaces, and an easy-to-use module library.
- Score: 56.0388584615134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: SpikeCV is a new open-source computer vision platform for the spike camera, a neuromorphic visual sensor that has developed rapidly in recent years. In the spike camera, each pixel directly accumulates the light intensity and asynchronously fires spikes. The output binary spikes can reach a frequency of 40,000 Hz. As a new type of visual representation, the spike sequence has high spatiotemporal completeness and preserves the continuous visual information of the external world. Taking advantage of the low latency and high dynamic range of the spike camera, many spike-based algorithms have made significant progress, such as high-quality imaging and ultra-high-speed target detection. To build a community ecology for spike vision and help more users take advantage of the spike camera, SpikeCV provides a variety of ultra-high-speed scene datasets, hardware interfaces, and an easy-to-use module library. SpikeCV focuses on encapsulation of spike data, standardization of dataset interfaces, modularization of vision tasks, and real-time applications for challenging scenes. Thanks to the open-source Python ecosystem, the modules of SpikeCV can be used as a Python library to fulfill most of the numerical analysis needs of researchers. We demonstrate the efficiency of SpikeCV on offline inference and real-time applications. The project repositories are available at https://openi.pcl.ac.cn/Cordium/SpikeCV and https://github.com/Zyj061/SpikeCV.
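The abstract describes the spike camera's pixel model (each pixel accumulates light intensity and asynchronously emits binary spikes) and notes that SpikeCV's modules are used as an ordinary Python library. The sketch below is only illustrative and does not use SpikeCV's real API: it simulates an integrate-and-fire reading of that pixel model on synthetic data and recovers a rough intensity image by averaging spikes over a temporal window. The threshold, array shapes, and sampling rate are assumptions made for this example.

```python
# A minimal, illustrative sketch (NOT SpikeCV's actual API): simulate the
# integrate-and-fire pixel model described in the abstract and recover a
# rough intensity image by averaging spikes over a temporal window.
# The threshold, array shapes, and sampling rate below are assumptions.
import numpy as np

def simulate_spike_stream(frames, threshold=1.0):
    """Each pixel accumulates incoming light intensity; when the accumulator
    reaches the threshold it emits a binary spike and subtracts the threshold
    (an integrate-and-fire reading of the spike-camera pixel)."""
    accumulator = np.zeros(frames.shape[1:], dtype=np.float64)
    spikes = np.zeros(frames.shape, dtype=np.uint8)
    for t, frame in enumerate(frames):
        accumulator += frame
        fired = accumulator >= threshold
        spikes[t][fired] = 1
        accumulator[fired] -= threshold
    return spikes

def reconstruct_by_window(spikes, window=32):
    """Estimate per-pixel intensity as the spike firing rate over the most
    recent temporal window (the simplest spike-to-image reconstruction)."""
    return spikes[-window:].mean(axis=0)

# Example on synthetic data standing in for a high-frame-rate light signal.
rng = np.random.default_rng(0)
frames = rng.random((1000, 250, 400)) * 0.05   # per-step light intensity
spike_stream = simulate_spike_stream(frames)   # binary spikes, shape (T, H, W)
image = reconstruct_by_window(spike_stream)    # rough intensity estimate in [0, 1]
print(spike_stream.shape, image.min(), image.max())
```

The windowed firing-rate average is only the most basic way to turn a binary spike stream back into an image; the high-quality imaging algorithms referenced in the abstract go well beyond this sketch.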
Related papers
- SpikeGS: Learning 3D Gaussian Fields from Continuous Spike Stream [20.552076533208687]
A spike camera is a specialized high-speed visual sensor that offers advantages such as high temporal resolution and high dynamic range.
We introduce SpikeGS, a method to learn 3D Gaussian fields solely from a continuous spike stream.
Our method can reconstruct view synthesis results with fine texture details from a continuous spike stream captured by a moving spike camera.
arXiv Detail & Related papers (2024-09-23T16:28:41Z)
- SpikeNVS: Enhancing Novel View Synthesis from Blurry Images via Spike Camera [78.20482568602993]
Conventional RGB cameras are susceptible to motion blur.
Neuromorphic cameras like event and spike cameras inherently capture more comprehensive temporal information.
Our design can enhance novel view synthesis across NeRF and 3DGS.
arXiv Detail & Related papers (2024-04-10T03:31:32Z)
- Spike-NeRF: Neural Radiance Field Based On Spike Camera [24.829344089740303]
We propose Spike-NeRF, the first Neural Radiance Field derived from spike data.
Instead of the simultaneous multi-view images used by NeRF, the inputs of Spike-NeRF are continuous spike streams captured by a moving spike camera within a very short time.
Our results demonstrate that, in high-speed scenes, Spike-NeRF produces more visually appealing results than existing methods and the baseline we propose.
arXiv Detail & Related papers (2024-03-25T04:05:23Z)
- SpikeNeRF: Learning Neural Radiance Fields from Continuous Spike Stream [26.165424006344267]
Spike cameras offer distinct advantages over standard cameras.
Existing approaches reliant on spike cameras often assume optimal illumination.
We introduce SpikeNeRF, the first work that derives a NeRF-based volumetric scene representation from spike camera data.
arXiv Detail & Related papers (2024-03-17T13:51:25Z)
- Finding Visual Saliency in Continuous Spike Stream [23.591309376586835]
In this paper, we investigate the visual saliency in the continuous spike stream for the first time.
We propose a Recurrent Spiking Transformer framework, which is based on a full spiking neural network.
Our framework exhibits a substantial margin of improvement in highlighting and capturing visual saliency in the spike stream.
arXiv Detail & Related papers (2024-03-10T15:15:35Z)
- Recurrent Spike-based Image Restoration under General Illumination [21.630646894529065]
The spike camera is a new type of bio-inspired vision sensor that records light intensity in the form of a spike array with high temporal resolution (20,000 Hz).
Existing spike-based approaches typically assume that scenes have sufficient light intensity, which is often not the case in real-world scenarios such as rainy days or dusk scenes.
We propose a Recurrent Spike-based Image Restoration (RSIR) network, which is the first work towards restoring clear images from spike arrays under general illumination.
arXiv Detail & Related papers (2023-08-06T04:24:28Z)
- Spike Stream Denoising via Spike Camera Simulation [64.11994763727631]
We propose a systematic noise model for the spike camera based on its unique circuit.
The first benchmark for spike stream denoising is proposed, which includes clear and noisy spike streams.
Experiments show that DnSS has promising performance on the proposed benchmark.
arXiv Detail & Related papers (2023-04-06T14:59:48Z)
- USB: A Unified Semi-supervised Learning Benchmark [125.25384569880525]
Semi-supervised learning (SSL) improves model generalization by leveraging massive unlabeled data to augment limited labeled samples.
Previous work typically trains deep neural networks from scratch, which is time-consuming and environmentally unfriendly.
We construct a Unified SSL Benchmark (USB) by selecting 15 diverse, challenging, and comprehensive tasks.
arXiv Detail & Related papers (2022-08-12T15:45:48Z)
- SCFlow: Optical Flow Estimation for Spiking Camera [50.770803466875364]
The spiking camera has enormous potential in real applications, especially for motion estimation in high-speed scenes.
Optical flow estimation has achieved remarkable success in image-based and event-based vision, but existing methods cannot be directly applied to the spike streams produced by a spiking camera.
This paper presents SCFlow, a novel deep learning pipeline for optical flow estimation for the spiking camera.
arXiv Detail & Related papers (2021-10-08T06:16:45Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.