Interpolation-Based Event Visual Data Filtering Algorithms
- URL: http://arxiv.org/abs/2507.01557v1
- Date: Wed, 02 Jul 2025 10:13:20 GMT
- Title: Interpolation-Based Event Visual Data Filtering Algorithms
- Authors: Marcin Kowalczyk, Tomasz Kryjak
- Abstract summary: We propose a method for event data that is capable of removing approximately 99% of noise while preserving the majority of the valid signal. The proposed methods use about 30 KB of memory for a sensor with a resolution of 1280 x 720 and are therefore well suited for implementation in embedded devices.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The field of neuromorphic vision is developing rapidly, and event cameras are finding their way into more and more applications. However, the data stream from these sensors is characterised by significant noise. In this paper, we propose a method for event data that is capable of removing approximately 99% of noise while preserving the majority of the valid signal. We propose four algorithms based on the matrix of infinite impulse response (IIR) filters method. We compared them on several event datasets that were further modified by adding artificially generated noise and noise recorded with a dynamic vision sensor. The proposed methods use about 30 KB of memory for a sensor with a resolution of 1280 x 720 and are therefore well suited for implementation in embedded devices.
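To make the approach concrete, here is a minimal Python sketch of a coarse matrix of first-order IIR filters with an interpolated read-out. The cell size, decay constant, increment, and threshold are illustrative assumptions, and the paper's four algorithms differ in their exact update and interpolation rules.

```python
import numpy as np

# Illustrative parameters (assumptions, not the paper's values): block size,
# decay time constant (us), per-event increment, and signal threshold.
CELL, TAU_US, INC, THR = 16, 10_000.0, 1.0, 0.5

class IIRMatrixFilter:
    """Coarse grid of first-order IIR filters tracking local event activity."""

    def __init__(self, width, height):
        self.gh = height // CELL + 2  # +2 keeps bilinear neighbours in range
        self.gw = width // CELL + 2
        self.act = np.zeros((self.gh, self.gw))    # filtered activity state
        self.t_upd = np.zeros((self.gh, self.gw))  # last-update timestamps (us)

    def process(self, x, y, t):
        """Return True if the event (x, y, t) is classified as signal."""
        fy, fx = y / CELL, x / CELL
        gy, gx = int(fy), int(fx)
        wy, wx = fy - gy, fx - gx
        # Lazily decay the four surrounding filter cells to time t.
        cells = np.s_[gy:gy + 2, gx:gx + 2]
        self.act[cells] *= np.exp(-(t - self.t_upd[cells]) / TAU_US)
        self.t_upd[cells] = t
        # Interpolate the filter state at the event's exact position.
        a = self.act
        level = ((1 - wy) * (1 - wx) * a[gy, gx]
                 + (1 - wy) * wx * a[gy, gx + 1]
                 + wy * (1 - wx) * a[gy + 1, gx]
                 + wy * wx * a[gy + 1, gx + 1])
        # Spread the event's contribution bilinearly back onto the grid.
        a[gy, gx] += INC * (1 - wy) * (1 - wx)
        a[gy, gx + 1] += INC * (1 - wy) * wx
        a[gy + 1, gx] += INC * wy * (1 - wx)
        a[gy + 1, gx + 1] += INC * wy * wx
        return level > THR
```

With CELL = 16, a 1280 x 720 sensor needs a grid of roughly 82 x 47 cells; at 8 bytes of state per cell this is about 30 KB, consistent with the memory figure quoted in the abstract.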
Related papers
- High Throughput Event Filtering: The Interpolation-based DIF Algorithm Hardware Architecture
We propose a hardware architecture of the Distance-based Interpolation with Frequency Weights filter and implement it on an FPGA chip.
Our architecture achieved a throughput of 403.39 million events per second (MEPS) for a sensor resolution of 1280 x 720 and 428.45 MEPS for a resolution of 640 x 480.
The average values of the Area Under the Receiver Operating Characteristic (AUROC) index ranged from 0.844 to 0.999, depending on the dataset.
arXiv Detail & Related papers (2025-06-06T07:49:18Z)
- Synthesizing and Identifying Noise Levels in Autonomous Vehicle Camera Radar Datasets
We create a realistic synthetic data augmentation pipeline for camera-radar Autonomous Vehicle datasets.
Our goal is to accurately simulate sensor failures and data deterioration due to real-world interferences.
We also present the results of a baseline lightweight Noise Recognition neural network trained and tested on our augmented dataset (a toy degradation pipeline in this spirit is sketched below).
arXiv Detail & Related papers (2025-05-01T15:15:50Z)
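A toy version of such a degradation pipeline, assuming simple Gaussian-noise, dead-pixel, and ghost-detection models; the function names, parameters, and magnitudes are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade_camera(img, severity=0.5):
    """Hypothetical camera degradation: Gaussian sensor noise plus dead pixels.

    img: (H, W, 3) uint8 image; severity in [0, 1]."""
    noisy = img.astype(np.float32) + rng.normal(0.0, 25.0 * severity, img.shape)
    dead = rng.random(img.shape[:2]) < 0.01 * severity  # random dead pixels
    noisy[dead] = 0.0
    return np.clip(noisy, 0, 255).astype(np.uint8)

def degrade_radar(points, severity=0.5):
    """Hypothetical radar degradation: range jitter plus ghost detections.

    points: (N, 3) array of x, y, z returns in metres."""
    jittered = points + rng.normal(0.0, 0.1 * severity, points.shape)
    n_ghost = int(len(points) * 0.05 * severity)
    ghosts = rng.uniform(points.min(0), points.max(0), (n_ghost, 3))
    return np.vstack([jittered, ghosts])
```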
- Noise Filtering Benchmark for Neuromorphic Satellites Observations
Event cameras capture sparse, asynchronous brightness changes, offering high temporal resolution, high dynamic range, low power consumption, and sparse data output.
These advantages make them ideal for Space Situational Awareness, particularly in detecting resident space objects moving within a telescope's field of view.
However, the output from event cameras often includes substantial background activity noise, which is known to be more prevalent in low-light conditions.
This noise can overwhelm the sparse events generated by satellite signals, making detection and tracking more challenging (a classic baseline filter for this noise is sketched below).
arXiv Detail & Related papers (2024-11-18T02:02:24Z)
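A common baseline for removing exactly this kind of background activity is the classic nearest-neighbour filter: keep an event only if a spatial neighbour fired recently. The sketch below is that standard reference filter, not this paper's contribution, and the time window DT is an assumed value.

```python
import numpy as np

DT = 5_000  # assumed correlation window in microseconds

def background_activity_filter(events, width, height):
    """Keep an event only if one of its 8 neighbours fired within DT.

    events: time-ordered iterable of (x, y, t) tuples, t in microseconds."""
    last_t = np.full((height, width), -np.inf)
    kept = []
    for x, y, t in events:
        last_t[y, x] = -np.inf  # a pixel does not support itself
        y0, y1 = max(y - 1, 0), min(y + 2, height)
        x0, x1 = max(x - 1, 0), min(x + 2, width)
        if t - last_t[y0:y1, x0:x1].max() <= DT:  # recent neighbour activity?
            kept.append((x, y, t))
        last_t[y, x] = t  # record this event for future neighbours
    return kept
```

Isolated noise events with no recent spatial neighbour are dropped, and hot pixels are rejected because a pixel cannot support itself.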
- Adaptive Domain Learning for Cross-domain Image Denoising
We present a novel adaptive domain learning (ADL) scheme for cross-domain image denoising.
We use existing data from different sensors (source domain) plus a small amount of data from the new sensor (target domain).
The ADL training scheme automatically removes the data in the source domain that are harmful to fine-tuning a model for the target domain.
Also, we introduce a modulation module that incorporates sensor-specific information (sensor type and ISO) to help the model understand the input data for image denoising (a minimal sketch of this idea follows below).
arXiv Detail & Related papers (2024-11-03T08:08:26Z)
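A minimal sketch of such feature-wise modulation, assuming a FiLM-style scale-and-shift conditioned on a one-hot sensor type and log-ISO; the sensor vocabulary, encoding, and shapes are illustrative assumptions, not the paper's exact module.

```python
import numpy as np

SENSOR_IDS = {"sensor_a": 0, "sensor_b": 1}  # hypothetical sensor vocabulary

def modulate(features, sensor, iso, w_gamma, w_beta):
    """FiLM-style modulation of a (C, H, W) feature map.

    w_gamma, w_beta: learned (C, D) matrices, D = len(SENSOR_IDS) + 1."""
    cond = np.zeros(len(SENSOR_IDS) + 1)
    cond[SENSOR_IDS[sensor]] = 1.0   # one-hot sensor type
    cond[-1] = np.log(iso) / 10.0    # crudely normalised ISO value
    gamma = w_gamma @ cond           # per-channel scale, shape (C,)
    beta = w_beta @ cond             # per-channel shift, shape (C,)
    return gamma[:, None, None] * features + beta[:, None, None]
```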
- bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction
We propose bit2bit, a new method for reconstructing high-quality image stacks at the original resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
arXiv Detail & Related papers (2024-10-30T17:30:35Z)
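For orientation, the classical non-learned baseline for this problem estimates per-pixel photon flux from a window of 1-bit frames by inverting the Bernoulli detection model; the sketch below shows that estimator, not bit2bit's network. Learned approaches aim to do much better when only a few binary frames per output are available.

```python
import numpy as np

def mle_flux(binary_stack, eps=1e-3):
    """Per-pixel photon flux from 1-bit frames via lambda = -ln(1 - p).

    binary_stack: (T, H, W) array of 0/1 photon detections."""
    p = binary_stack.mean(axis=0)   # empirical detection probability
    p = np.clip(p, 0.0, 1.0 - eps)  # keep the log finite
    return -np.log1p(-p)            # expected photons per frame interval
```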
- Hardware architecture for high throughput event visual data filtering with matrix of IIR filters algorithm
Neuromorphic vision is a rapidly growing field with numerous applications in the perception systems of autonomous vehicles.
There is a significant amount of noise in the event stream due to the sensor's working principle.
We present a novel algorithm based on an IIR filter matrix for filtering this type of noise and a hardware architecture that allows its acceleration.
arXiv Detail & Related papers (2022-07-02T15:18:53Z)
- FOVEA: Foveated Image Magnification for Autonomous Navigation
We propose an attentional approach that elastically magnifies certain regions while maintaining a small input canvas.
On the autonomous driving datasets Argoverse-HD and BDD100K, we show our proposed method boosts the detection AP over standard Faster R-CNN, with and without finetuning.
arXiv Detail & Related papers (2021-08-27T03:07:55Z)
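The elastic magnification described in the FOVEA entry above can be approximated by a separable, saliency-driven resampling of the input: sample densely where attention is high, sparsely elsewhere. The sketch below uses an assumed Gaussian density bump and nearest-neighbour lookup, not the paper's exact warp formulation.

```python
import numpy as np

def warp_axis(n_in, n_out, center, sigma, gain):
    """Sample n_out coordinates from [0, n_in), densest around `center`."""
    x = np.arange(n_in, dtype=np.float64)
    density = 1.0 + gain * np.exp(-0.5 * ((x - center) / sigma) ** 2)
    cdf = np.cumsum(density)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])  # normalise to [0, 1]
    # Inverse CDF: dense sampling where the density (saliency) is high.
    return np.interp(np.linspace(0.0, 1.0, n_out), cdf, x)

def foveated_magnify(image, out_h, out_w, center_yx, sigma=40.0, gain=4.0):
    """Resample `image` ((H, W) or (H, W, C)) onto a small out_h x out_w
    canvas, magnifying the neighbourhood of center_yx."""
    ys = warp_axis(image.shape[0], out_h, center_yx[0], sigma, gain)
    xs = warp_axis(image.shape[1], out_w, center_yx[1], sigma, gain)
    yi = np.clip(np.rint(ys).astype(int), 0, image.shape[0] - 1)
    xi = np.clip(np.rint(xs).astype(int), 0, image.shape[1] - 1)
    return image[np.ix_(yi, xi)]
```

Because sampling is densest under the bump, the attended region occupies more output pixels (it is magnified) while the periphery is compressed, all within a small fixed canvas.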
- TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset
Event cameras are bio-inspired vision sensors which measure per-pixel brightness changes.
They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic range, high temporal resolution and low power consumption.
To foster the development of 3D perception and navigation algorithms with event cameras, we present the TUM-VIE dataset.
arXiv Detail & Related papers (2021-08-16T19:53:56Z)
- Learning Monocular Dense Depth from Events
Event cameras output brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
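A minimal example of the kind of recurrent building block such an architecture can use is a convolutional GRU cell that carries scene state across successive event tensors. This is a generic sketch in PyTorch, not the paper's actual network.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Generic convolutional GRU cell: the hidden state is a feature map."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=k // 2)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)

    def forward(self, x, h):
        # Update (z) and reset (r) gates from the input and previous state.
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, 1)
        h_new = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_new  # blend old and candidate state
```

Fed one event representation (e.g., a voxel grid) per step, with h initialised to zeros, the hidden state lets the network accumulate scene structure over time, which feed-forward models cannot.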
- EBBINNOT: A Hardware Efficient Hybrid Event-Frame Tracker for Stationary Dynamic Vision Sensors
This paper presents a hybrid event-frame approach for detecting and tracking objects recorded by a stationary neuromorphic sensor.
To exploit the background removal property of a static DVS, we propose an event-based binary image creation that signals the presence or absence of events within a frame duration (sketched below).
This is the first time a stationary DVS-based traffic monitoring solution is extensively compared to simultaneously recorded RGB frame-based methods.
arXiv Detail & Related papers (2020-05-31T03:01:35Z)
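The binary image creation described in this entry can be sketched as follows; the frame duration is an assumed value and the event format (x, y, t) is simplified.

```python
import numpy as np

FRAME_US = 66_000  # assumed frame duration (66 ms)

def events_to_binary_frames(events, width, height):
    """Yield (H, W) binary frames: 1 where at least one event occurred.

    events: time-ordered iterable of (x, y, t) tuples, t in microseconds."""
    frame = np.zeros((height, width), dtype=np.uint8)
    t_end = None
    for x, y, t in events:
        if t_end is None:
            t_end = t + FRAME_US
        while t >= t_end:  # emit finished frames (possibly empty ones)
            yield frame
            frame = np.zeros((height, width), dtype=np.uint8)
            t_end += FRAME_US
        frame[y, x] = 1
    if t_end is not None:
        yield frame  # emit the final partial frame
```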
- Near-chip Dynamic Vision Filtering for Low-Bandwidth Pedestrian Detection
This paper presents a novel end-to-end system for pedestrian detection using Dynamic Vision Sensors (DVSs).
We target applications where multiple sensors transmit data to a local processing unit, which executes a detection algorithm.
Our detector is able to perform a detection every 450 ms, with an overall testing F1 score of 83%.
arXiv Detail & Related papers (2020-04-03T17:36:26Z)