Temporal Up-Sampling for Asynchronous Events
- URL: http://arxiv.org/abs/2208.08721v1
- Date: Thu, 18 Aug 2022 09:12:08 GMT
- Title: Temporal Up-Sampling for Asynchronous Events
- Authors: Xijie Xiang, Lin Zhu, Jianing Li, Yonghong Tian and Tiejun Huang
- Abstract summary: In low-brightness or slow-moving scenes, events are often sparse and accompanied by noise.
We propose an event temporal up-sampling algorithm to generate more effective and reliable events.
Experimental results show that up-sampling events can provide more effective information and improve the performance of downstream tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The event camera is a novel bio-inspired vision sensor. When the brightness
change exceeds the preset threshold, the sensor generates events
asynchronously. The number of valid events directly affects the performance of
event-based tasks, such as reconstruction, detection, and recognition. However,
in low-brightness or slow-moving scenes, events are often sparse and
accompanied by noise, which poses challenges for event-based tasks. To solve
these challenges, we propose an event temporal up-sampling algorithm to
generate more effective and reliable events. The main idea of our algorithm is
to generate up-sampled events along the event motion trajectory. First, we
estimate the event motion trajectory with the contrast maximization algorithm,
and then up-sample the events via temporal point processes. Experimental results
show that up-sampling events can provide more effective information and improve
the performance of downstream tasks, such as improving the quality of
reconstructed images and increasing the accuracy of object detection.
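As a concrete illustration of the event generation model described above, here is a minimal simulation in which a pixel fires an event whenever its log-intensity drifts from a reference value by more than a preset contrast threshold. This sketches only the generic model; the threshold value and all names are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def generate_events(log_frames, timestamps, C=0.2):
        """Simulate an event camera: emit (x, y, t, polarity) whenever the
        log-intensity at a pixel drifts from its reference value by more
        than the contrast threshold C (an assumed value)."""
        ref = log_frames[0].copy()       # per-pixel reference log-intensity
        events = []
        for frame, t in zip(log_frames[1:], timestamps[1:]):
            diff = frame - ref
            ys, xs = np.nonzero(np.abs(diff) >= C)
            for x, y in zip(xs, ys):
                events.append((x, y, t, 1 if diff[y, x] > 0 else -1))
                ref[y, x] = frame[y, x]  # reset reference after firing
        return events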
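The two-step pipeline in the abstract can be sketched in the same spirit: contrast maximization picks the velocity that makes the image of warped events sharpest, and new events are then drawn along the recovered trajectory from a temporal point process. A homogeneous Poisson process stands in here for whichever point process the paper uses; the brute-force search, the unit-length sampling window, and all names are assumptions of this sketch.

    import numpy as np

    def warped_contrast(events, velocity, shape):
        """Warp events (x, y, t, p) to t=0 along a candidate velocity and
        score the sharpness of the resulting event image by its variance."""
        img = np.zeros(shape)
        for x, y, t, p in events:
            xw = int(round(x - velocity[0] * t))
            yw = int(round(y - velocity[1] * t))
            if 0 <= xw < shape[1] and 0 <= yw < shape[0]:
                img[yw, xw] += p
        return img.var()

    def estimate_velocity(events, shape, candidates):
        """Contrast maximization by brute-force search over candidate velocities."""
        return max(candidates, key=lambda v: warped_contrast(events, v, shape))

    def upsample_events(events, velocity, rate, seed=0):
        """After each observed event, insert new events along the estimated
        trajectory, with timestamps drawn from a homogeneous Poisson process
        over an (assumed) unit time window."""
        rng = np.random.default_rng(seed)
        upsampled = []
        for x, y, t, p in events:
            for dt in np.sort(rng.uniform(0.0, 1.0, rng.poisson(rate))):
                upsampled.append((x + velocity[0] * dt, y + velocity[1] * dt, t + dt, p))
        return upsampled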
Related papers
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- Dual Memory Aggregation Network for Event-Based Object Detection with Learnable Representation [79.02808071245634]
Event-based cameras are bio-inspired sensors that capture brightness change of every pixel in an asynchronous manner.
Event streams are divided into grids in x-y-t coordinates, separately for positive and negative polarity, producing a set of pillars as a 3D tensor representation (sketched below).
Long memory is encoded in the hidden state of adaptive convLSTMs while short memory is modeled by computing spatial-temporal correlation between event pillars.
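A minimal sketch of that grid/pillar representation, assuming events are (x, y, t, p) tuples and using illustrative bin counts; the paper's exact configuration may differ:

    import numpy as np

    def events_to_pillars(events, shape, t_bins, t_max):
        """Bin events into a (2, t_bins, H, W) tensor: channel 0 counts
        positive-polarity events, channel 1 negative-polarity ones."""
        H, W = shape
        grid = np.zeros((2, t_bins, H, W), dtype=np.float32)
        for x, y, t, p in events:
            ti = min(int(t / t_max * t_bins), t_bins - 1)
            grid[0 if p > 0 else 1, ti, int(y), int(x)] += 1.0
        return grid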
arXiv Detail & Related papers (2023-03-17T12:12:41Z)
- ASAP: Adaptive Scheme for Asynchronous Processing of Event-based Vision Algorithms [0.2580765958706853]
Event cameras can capture pixel-level illumination changes with very high temporal resolution and dynamic range.
Two main approaches exist to feed event-based processing algorithms: packaging the triggered events into event packages, or sending them one-by-one as single events.
This paper presents ASAP, an adaptive scheme that manages the event stream through variable-size packages which accommodate the event-package processing times (a schematic loop follows below).
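The adaptive scheme reads naturally as a small control loop: enlarge the package when processing falls behind the event stream, and shrink it toward single-event latency when there is slack. This is a schematic interpretation with hypothetical names (event_source.read, the doubling/halving rule), not ASAP's actual controller:

    import time

    def asap_loop(event_source, process, min_size=64, max_size=65536):
        """Adapt the event-package size to the measured processing time.
        Events are assumed to be (x, y, t, p) tuples with t in seconds,
        and event_source.read(n) is a hypothetical source of n events."""
        size = min_size
        while True:
            package = event_source.read(size)
            if not package:
                break
            start = time.perf_counter()
            process(package)
            elapsed = time.perf_counter() - start
            span = package[-1][2] - package[0][2]  # event time the package covers
            # Falling behind: bigger packages amortize overhead.
            # Keeping up: smaller packages reduce latency.
            size = min(max_size, size * 2) if elapsed > span else max(min_size, size // 2)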
arXiv Detail & Related papers (2022-09-18T16:28:29Z)
- Event-based Image Deblurring with Dynamic Motion Awareness [10.81953574179206]
We introduce the first dataset containing pairs of real RGB blur images and related events during the exposure time.
Our results show better robustness overall when using events, with improvements in PSNR by up to 1.57 dB on synthetic data and 1.08 dB on real event data.
arXiv Detail & Related papers (2022-08-24T09:39:55Z)
- Unifying Event Detection and Captioning as Sequence Generation via Pre-Training [53.613265415703815]
We propose a unified pre-training and fine-tuning framework to enhance the inter-task association between event detection and captioning.
Our model outperforms the state-of-the-art methods, and can be further boosted when pre-trained on extra large-scale video-text data.
arXiv Detail & Related papers (2022-07-18T14:18:13Z)
- Ev-TTA: Test-Time Adaptation for Event-Based Object Recognition [7.814941658661939]
Ev-TTA is a simple, effective test-time adaptation method for event-based object recognition.
Our formulation can be successfully applied regardless of the input representation and can be extended to regression tasks.
arXiv Detail & Related papers (2022-03-23T07:43:44Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras report brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- Learning to Detect Objects with a 1 Megapixel Event Camera [14.949946376335305]
Event cameras encode visual information with high temporal precision, low data-rate, and high-dynamic range.
Due to the novelty of the field, the performance of event-based systems on many vision tasks is still lower than that of conventional frame-based solutions.
arXiv Detail & Related papers (2020-09-28T16:03:59Z)
- Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation [53.850686395708905]
Event-based cameras record an asynchronous stream of per-pixel brightness changes.
In this paper, we focus on single-layer architectures for representation learning from event data.
We show improvements of up to 9% in recognition accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:40:03Z)