Time-Ordered Recent Event (TORE) Volumes for Event Cameras
- URL: http://arxiv.org/abs/2103.06108v1
- Date: Wed, 10 Mar 2021 15:03:38 GMT
- Title: Time-Ordered Recent Event (TORE) Volumes for Event Cameras
- Authors: R. Wes Baldwin, Ruixu Liu, Mohammed Almatrafi, Vijayan Asari, Keigo
Hirakawa
- Abstract summary: Event cameras are an exciting, new sensor modality enabling high-speed imaging with extremely low latency and wide dynamic range.
Most machine learning architectures are not designed to directly handle sparse data, like that generated by event cameras.
This paper details an event representation called Time-Ordered Recent Event (TORE) volumes. TORE volumes are designed to compactly store raw spike timing information with minimal information loss.
- Score: 21.419206807872797
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Event cameras are an exciting, new sensor modality enabling high-speed
imaging with extremely low latency and wide dynamic range. Unfortunately, most
machine learning architectures are not designed to directly handle sparse data,
like that generated by event cameras. Many state-of-the-art algorithms for
event cameras rely on interpolated event representations - obscuring crucial
timing information, increasing the data volume, and limiting overall network
performance. This paper details an event representation called Time-Ordered
Recent Event (TORE) volumes. TORE volumes are designed to compactly store raw
spike timing information with minimal information loss. This bio-inspired
design is memory efficient, computationally fast, avoids time-blocking (i.e.
fixed and predefined frame rates), and contains "local memory" from past data.
The design is evaluated on a wide range of challenging tasks (e.g. event
denoising, image reconstruction, classification, and human pose estimation) and
is shown to dramatically improve state-of-the-art performance. TORE volumes are
an easy-to-implement replacement for any algorithm currently utilizing event
representations.
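To make the representation concrete, the following is a minimal Python sketch of the idea described above, offered as an illustration rather than the paper's reference implementation: per pixel and polarity, a small FIFO keeps the timestamps of the most recent events, and the volume is read out as time-since-event features at an arbitrary query time. The FIFO depth k, the max_age saturation, and the log encoding are assumptions here, not a verbatim transcription of the paper's equations.

```python
import numpy as np

def build_tore_volume(events, height, width, k=4, max_age=1e6):
    """Sketch of a Time-Ordered Recent Event (TORE) volume.

    events: iterable of (x, y, t, p) with p in {0, 1} and ascending t.
    Returns a (2*k, height, width) volume of log time-since-event features.
    """
    # Per polarity and pixel, a FIFO of the k most recent event timestamps;
    # -inf marks "no event seen yet", so empty slots saturate at max_age.
    recent = np.full((2, k, height, width), -np.inf)
    t_now = 0.0
    for x, y, t, p in events:
        recent[p, :, y, x] = np.roll(recent[p, :, y, x], 1)  # shift the FIFO
        recent[p, 0, y, x] = t                               # newest in slot 0
        t_now = t
    # Read out at t_now; the same buffers could be sampled at any query time,
    # which is what avoids time-blocking (fixed, predefined frame rates).
    age = np.clip(t_now - recent, 1.0, max_age)
    return np.log(age).reshape(2 * k, height, width)
```

Because the buffers store raw timestamps rather than accumulated counts, the k most recent events per pixel and polarity survive intact, which is the "local memory" property the abstract refers to.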
Related papers
- Gradient events: improved acquisition of visual information in event cameras [0.0]
We propose a new type of event, the gradient event, which benefits from the same properties as a conventional brightness event.
We show that gradient-event-based video reconstruction outperforms existing state-of-the-art brightness-event-based methods by a significant margin.
arXiv Detail & Related papers (2024-09-03T10:18:35Z)
- Graph-based Asynchronous Event Processing for Rapid Object Recognition [59.112755601918074]
Event cameras capture an asynchronous event stream in which each event encodes pixel location, trigger time, and the polarity of the brightness change.
We introduce a novel graph-based framework for event cameras, namely SlideGCN.
Our approach efficiently processes data event by event, unlocking the low-latency nature of event data while still maintaining the graph's structure internally.
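As a rough illustration of event-by-event graph processing (a generic sketch, not SlideGCN's actual construction rule; the radius and window values are placeholders), each incoming event can become a node linked to spatially nearby events still inside a sliding temporal window:

```python
from collections import deque

def insert_event(graph, event, radius=5, dt_max=50_000):
    """Insert one event into a sliding spatiotemporal event graph.

    graph: {"nodes": deque of (id, x, y, t, p), "edges": set of (i, j)}
    radius (pixels) and dt_max (microseconds) are illustrative thresholds.
    """
    eid, x, y, t, p = event
    # Connect the new node to spatially close events still in the window.
    for (j, xj, yj, tj, pj) in graph["nodes"]:
        if (x - xj) ** 2 + (y - yj) ** 2 <= radius ** 2:
            graph["edges"].add((eid, j))
    graph["nodes"].append(event)
    # Slide the window: evict nodes older than dt_max along with their
    # edges, keeping per-event cost bounded and latency low.
    while graph["nodes"] and t - graph["nodes"][0][3] > dt_max:
        old = graph["nodes"].popleft()[0]
        graph["edges"] = {e for e in graph["edges"] if old not in e}
```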
arXiv Detail & Related papers (2023-08-28T08:59:57Z)
- Deformable Convolutions and LSTM-based Flexible Event Frame Fusion Network for Motion Deblurring [7.187030024676791]
Event cameras differ from conventional RGB cameras in that they produce asynchronous data sequences.
While RGB cameras capture every frame at a fixed rate, event cameras only capture changes in the scene, resulting in sparse and asynchronous data output.
Recent state-of-the-art CNN-based deblurring solutions produce multiple 2-D event frames based on the accumulation of event data over a time period.
The proposed flexible fusion network is particularly useful for scenarios in which exposure times vary depending on factors such as lighting conditions or the presence of fast-moving objects in the scene.
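A common form of the accumulation mentioned above, sketched here generically rather than as this paper's exact formulation, splits the time interval into temporal bins and sums signed polarities per pixel:

```python
import numpy as np

def events_to_frames(events, height, width, t0, t1, num_frames=8):
    """Accumulate events with timestamps in [t0, t1) into num_frames
    signed 2-D event frames (per-pixel polarity sums)."""
    frames = np.zeros((num_frames, height, width), dtype=np.float32)
    bin_len = (t1 - t0) / num_frames
    for x, y, t, p in events:
        if t0 <= t < t1:
            b = int((t - t0) / bin_len)            # temporal bin index
            frames[b, y, x] += 1.0 if p else -1.0  # p in {0, 1}
    return frames
```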
arXiv Detail & Related papers (2023-06-01T15:57:12Z)
- Event Transformer+. A multi-purpose solution for efficient event data processing [13.648678472312374]
Event cameras record sparse illumination changes with high temporal resolution and high dynamic range.
Current methods often ignore specific event-data properties, leading to the development of generic but computationally expensive algorithms.
We propose Event Transformer+, which improves on our earlier work EvT with a refined patch-based event representation.
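Patch-based event representations generally tokenize only the regions that received events. The sketch below is a generic version of that idea, not EvT+'s actual design; the patch size and activation threshold are placeholders:

```python
import numpy as np

def active_patch_tokens(frames, patch=8, min_events=1):
    """Split accumulated event frames into non-overlapping patches and keep
    only the activated ones as transformer tokens.

    frames: (C, H, W) array with H, W divisible by patch.
    Returns (tokens, coords): (N, C*patch*patch) features, (N, 2) positions.
    """
    c, h, w = frames.shape
    gh, gw = h // patch, w // patch
    # Rearrange into one row per patch: (gh*gw, c*patch*patch).
    grid = frames.reshape(c, gh, patch, gw, patch).transpose(1, 3, 0, 2, 4)
    flat = grid.reshape(gh * gw, -1)
    counts = np.abs(flat).sum(axis=1)           # activity per patch
    keep = counts >= min_events                 # drop empty patches
    coords = np.argwhere(keep.reshape(gh, gw))  # (row, col) of kept patches
    return flat[keep], coords
```

Keeping only activated patches is what makes the representation sparse: the transformer's cost scales with the number of active tokens rather than the full sensor resolution.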
arXiv Detail & Related papers (2022-11-22T12:28:37Z)
- Event-based Image Deblurring with Dynamic Motion Awareness [10.81953574179206]
We introduce the first dataset containing pairs of real blurred RGB images and the corresponding events recorded during the exposure time.
Our results show better overall robustness when using events, with PSNR improvements of up to 1.57 dB on synthetic data and 1.08 dB on real event data.
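For reference, the reported gains are measured in peak signal-to-noise ratio; the standard definition is shown below (this is the general metric, not code from the paper):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(diff ** 2)  # mean squared error
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```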
arXiv Detail & Related papers (2022-08-24T09:39:55Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO) and propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- MEFNet: Multi-scale Event Fusion Network for Motion Deblurring [62.60878284671317]
Traditional frame-based cameras inevitably suffer from motion blur due to long exposure times.
As a kind of bio-inspired camera, the event camera records intensity changes asynchronously with high temporal resolution.
In this paper, we rethink the event-based image deblurring problem and unfold it into an end-to-end two-stage image restoration network.
arXiv Detail & Related papers (2021-11-30T23:18:35Z)
- Bridging the Gap between Events and Frames through Unsupervised Domain Adaptation [57.22705137545853]
We propose a task transfer method that allows models to be trained directly with labeled images and unlabeled event data.
We leverage the generative event model to split event features into content and motion features.
Our approach unlocks the vast amount of existing image datasets for the training of event-based neural networks.
arXiv Detail & Related papers (2021-09-06T17:31:37Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- Learning to Detect Objects with a 1 Megapixel Event Camera [14.949946376335305]
Event cameras encode visual information with high temporal precision, low data rate, and high dynamic range.
Due to the novelty of the field, the performance of event-based systems on many vision tasks still lags behind that of conventional frame-based solutions.
arXiv Detail & Related papers (2020-09-28T16:03:59Z)
- EventSR: From Asynchronous Events to Image Reconstruction, Restoration, and Super-Resolution via End-to-End Adversarial Learning [75.17497166510083]
Event cameras sense intensity changes and have many advantages over conventional cameras.
Some methods have been proposed to reconstruct intensity images from event streams, but the outputs are still low-resolution (LR), noisy, and unrealistic.
We propose EventSR, a novel end-to-end pipeline that reconstructs LR images from event streams, enhances the image quality, and upsamples the enhanced images.
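The three stages read directly off the abstract; in this sketch the stage functions are placeholders for EventSR's learned (adversarially trained) networks, not its actual API:

```python
def eventsr_pipeline(event_stream, reconstruct, enhance, upsample):
    """Events -> LR intensity image -> enhanced image -> super-resolved image."""
    lr = reconstruct(event_stream)  # stage 1: reconstruct LR image from events
    clean = enhance(lr)             # stage 2: restoration / enhancement
    return upsample(clean)          # stage 3: super-resolution
```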
arXiv Detail & Related papers (2020-03-17T10:58:10Z)