A Novel Approach for Neuromorphic Vision Data Compression based on Deep
Belief Network
- URL: http://arxiv.org/abs/2210.15362v1
- Date: Thu, 27 Oct 2022 12:21:14 GMT
- Title: A Novel Approach for Neuromorphic Vision Data Compression based on Deep
Belief Network
- Authors: Sally Khaidem and Mansi Sharma and Abhipraay Nevatia
- Abstract summary: A neuromorphic camera is an image sensor that emulates the human eye, capturing only changes in local brightness levels.
This paper proposes a novel deep learning-based compression scheme for event data.
- Score: 0.2578242050187029
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A neuromorphic camera is an image sensor that emulates the human
eye, capturing only changes in local brightness levels. Such sensors are widely
known as event cameras, silicon retinas, or dynamic vision sensors (DVS). A DVS records
asynchronous per-pixel brightness changes, resulting in a stream of events that
encode the time, location, and polarity of each brightness change. A DVS
consumes little power and offers a wider dynamic range and higher temporal
resolution than conventional frame-based cameras, with no motion blur. Although this
method of event capture results in a lower bit rate than traditional video
capture, it is further compressible. This paper proposes a novel deep
learning-based compression scheme for event data. Using a deep belief network
(DBN), the high dimensional event data is reduced into a latent representation
and later encoded using an entropy-based coding technique. The proposed scheme
is among the first to incorporate deep learning for event compression. It
achieves a high compression ratio while maintaining good reconstruction
quality, outperforming state-of-the-art event data coders and other lossless
benchmark techniques.
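To make the proposed pipeline concrete, below is a minimal sketch of the stages the abstract describes: binary event frames are reduced to a compact latent code by a stack of RBMs (the building block of a deep belief network), and the code is then entropy-coded. The 256-64-32 topology, CD-1 training, toy data, and the use of zlib as a stand-in entropy coder are illustrative assumptions, not the authors' configuration.
```python
# Sketch of a DBN-style event-compression pipeline (assumptions noted above).
import zlib
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM trained with one-step contrastive divergence."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        h0 = self.hidden_probs(v0)
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h_sample)  # one-step reconstruction
        h1 = self.hidden_probs(v1)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

# Toy "event frames": sparse binary 16x16 patches flattened to 256-dim vectors.
frames = (rng.random((512, 256)) < 0.05).astype(float)

# Greedy layer-wise training of a 256 -> 64 -> 32 stack.
rbm1, rbm2 = RBM(256, 64), RBM(64, 32)
for _ in range(50):
    rbm1.cd1_step(frames)
h1 = rbm1.hidden_probs(frames)
for _ in range(50):
    rbm2.cd1_step(h1)

# Encode: binarize the top latent layer, then entropy-code its packed bits.
latent = (rbm2.hidden_probs(h1) > 0.5).astype(np.uint8)
bitstream = zlib.compress(np.packbits(latent).tobytes(), level=9)
print("raw bits:", frames.size, "-> compressed bytes:", len(bitstream))

# Decode: undo the entropy coding, then reconstruct through the stack.
bits = np.unpackbits(np.frombuffer(zlib.decompress(bitstream), dtype=np.uint8))
bits = bits[: latent.size].reshape(latent.shape).astype(float)
recon = rbm1.visible_probs(rbm2.visible_probs(bits))
```
In practice the paper's entropy stage would be a coder fitted to the latent statistics; zlib is used here only so the sketch runs end to end with the standard library.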
Related papers
- SpikeNVS: Enhancing Novel View Synthesis from Blurry Images via Spike Camera [78.20482568602993]
Conventional RGB cameras are susceptible to motion blur.
Neuromorphic cameras like event and spike cameras inherently capture more comprehensive temporal information.
Our design can enhance novel view synthesis across NeRF and 3DGS.
(arXiv, 2024-04-10)
- Temporal-Mapping Photography for Event Cameras [5.344756442054121]
Event cameras, or Dynamic Vision Sensors (DVS), capture brightness changes as a continuous stream of "events".
Faithfully converting sparse events to dense intensity frames has long been an ill-posed problem.
In this paper, for the first time, we realize event-to-dense-intensity-image conversion using a stationary event camera in static scenes (see the event-accumulation sketch at the end of this page).
(arXiv, 2024-03-11)
- Neural-based Compression Scheme for Solar Image Data [8.374518151411612]
We propose a neural network-based lossy compression method for NASA's data-intensive imagery missions.
The network is adversarially trained and equipped with local and non-local attention modules to capture both the local and global structure of the image.
As a proof of concept for use of this algorithm in SDO data analysis, we have performed coronal hole (CH) detection using our compressed images.
(arXiv, 2023-11-06)
- Convolutional Neural Network (CNN) to reduce construction loss in JPEG compression caused by Discrete Fourier Transform (DFT) [0.0]
Convolutional Neural Networks (CNN) have received more attention than most other types of deep neural networks.
In this work, an effective image compression method is proposed using autoencoders.
(arXiv, 2022-08-26)
- Analysis of the Effect of Low-Overhead Lossy Image Compression on the Performance of Visual Crowd Counting for Smart City Applications [78.55896581882595]
Lossy image compression techniques can reduce the quality of the images, leading to accuracy degradation.
In this paper, we analyze the effect of applying low-overhead lossy image compression methods on the accuracy of visual crowd counting.
(arXiv, 2022-07-20)
- COIN++: Data Agnostic Neural Compression [55.27113889737545]
COIN++ is a neural compression framework that seamlessly handles a wide range of data modalities.
We demonstrate the effectiveness of our method by compressing various data modalities.
(arXiv, 2022-01-30)
- Enhanced Invertible Encoding for Learned Image Compression [40.21904131503064]
In this paper, we propose an enhanced Invertible Encoding Network built on invertible neural networks (INNs) to largely mitigate the information loss problem for better compression.
Experimental results on the Kodak, CLIC, and Tecnick datasets show that our method outperforms the existing learned image compression methods.
(arXiv, 2021-08-08)
- Combining Events and Frames using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction [51.072733683919246]
We introduce Recurrent Asynchronous Multimodal (RAM) networks to handle asynchronous and irregular data from multiple sensors.
Inspired by traditional RNNs, RAM networks maintain a hidden state that is updated asynchronously and can be queried at any time to generate a prediction.
We show an improvement over state-of-the-art methods of up to 30% in terms of mean absolute depth error.
(arXiv, 2021-02-18)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras report brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
(arXiv, 2020-10-16)
- Reducing the Sim-to-Real Gap for Event Cameras [64.89183456212069]
Event cameras are paradigm-shifting novel sensors that report asynchronous, per-pixel brightness changes called 'events' with unparalleled low latency.
Recent work has demonstrated impressive results using Convolutional Neural Networks (CNNs) for video reconstruction and optic flow with events.
We present strategies for improving training data for event-based CNNs that yield a 20-40% boost in the performance of existing video reconstruction networks.
(arXiv, 2020-03-20)
This list is automatically generated from the titles and abstracts of the papers on this site.
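Several of the papers above, including the main one, start from the same primitive: an asynchronous stream of (timestamp, x, y, polarity) events. As a shared point of reference, the sketch below shows one common way to aggregate such a stream into dense frames by summing signed polarities over fixed time windows. The helper name events_to_frames, the sensor size, the window length, and the toy data are all illustrative assumptions rather than any specific paper's method.
```python
# Sketch: accumulate a DVS event stream (t, x, y, polarity) into dense frames
# by summing signed polarities over fixed time windows (assumptions as above).
import numpy as np

def events_to_frames(events, height, width, window_us=10_000):
    """events: array of (t_us, x, y, p) rows with p in {-1, +1}, sorted by
    timestamp. Returns an array of signed event-count frames."""
    t0, t1 = events[0, 0], events[-1, 0]
    n_frames = int((t1 - t0) // window_us) + 1
    frames = np.zeros((n_frames, height, width), dtype=np.int32)
    idx = ((events[:, 0] - t0) // window_us).astype(int)
    # Scatter-add each event's polarity into its (frame, row, col) cell.
    np.add.at(frames,
              (idx, events[:, 2].astype(int), events[:, 1].astype(int)),
              events[:, 3].astype(int))
    return frames

# Usage with a toy stream: 1000 random events over 50 ms on a 64x48 sensor.
rng = np.random.default_rng(0)
ev = np.column_stack([
    np.sort(rng.integers(0, 50_000, 1000)),  # timestamps in microseconds
    rng.integers(0, 64, 1000),               # x coordinate
    rng.integers(0, 48, 1000),               # y coordinate
    rng.choice([-1, 1], 1000),               # polarity
]).astype(np.int64)
frames = events_to_frames(ev, height=48, width=64)
print(frames.shape)  # (5, 48, 64)
```
Binarizing or normalizing such frames is one plausible way to produce the fixed-size inputs that a DBN-style coder, like the one sketched earlier, would consume.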