AET-EFN: A Versatile Design for Static and Dynamic Event-Based Vision
- URL: http://arxiv.org/abs/2103.11645v1
- Date: Mon, 22 Mar 2021 08:09:03 GMT
- Title: AET-EFN: A Versatile Design for Static and Dynamic Event-Based Vision
- Authors: Chang Liu, Xiaojuan Qi, Edmund Lam, Ngai Wong
- Abstract summary: Event data are noisy, sparse, and nonuniform in the spatial-temporal domain with an extremely high temporal resolution.
Existing methods encode events into point-cloud-based or voxel-based representations, but suffer from noise and/or information loss.
This work proposes the Aligned Event Tensor (AET) as a novel event data representation, and a neat framework called Event Frame Net (EFN).
The proposed AET and EFN are evaluated on various datasets and shown to surpass existing state-of-the-art methods by large margins.
- Score: 33.4444564715323
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuromorphic event cameras, which capture the optical changes of a scene,
have drawn increasing attention due to their high speed and low power
consumption. However, the event data are noisy, sparse, and nonuniform in the
spatial-temporal domain with an extremely high temporal resolution, making it
challenging to design backend algorithms for event-based vision. Existing
methods encode events into point-cloud-based or voxel-based representations,
but suffer from noise and/or information loss. Additionally, there is little
research that systematically studies how to handle static and dynamic scenes
with one universal design for event-based vision. This work proposes the
Aligned Event Tensor (AET) as a novel event data representation, and a neat
framework called Event Frame Net (EFN), which enables our model for event-based
vision under static and dynamic scenes. The proposed AET and EFN are evaluated
on various datasets, and shown to surpass existing state-of-the-art methods by
large margins. Our method is also efficient, achieving the fastest inference
speed among the compared methods.
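As a rough illustration of the representation family the abstract describes (dense tensors built from the raw event stream rather than point clouds), the sketch below bins an asynchronous event stream into a fixed number of temporal slices. This is a generic voxel-grid-style encoding under an assumed (x, y, t, polarity) input layout, not the paper's AET construction; all names and parameters are illustrative.

```python
import numpy as np

def events_to_tensor(events, height, width, num_bins):
    """Bin an event stream into a (num_bins, 2, H, W) tensor.

    events: float array of shape (N, 4) with columns (x, y, t, polarity),
    polarity in {-1, +1}. A generic voxel-grid-style encoding, NOT the
    AET construction from the paper.
    """
    x = events[:, 0].astype(np.int64)
    y = events[:, 1].astype(np.int64)
    t = events[:, 2]
    p = (events[:, 3] > 0).astype(np.int64)  # 0 = negative, 1 = positive

    # Normalize timestamps to [0, num_bins) and assign each event to a slice.
    t0, t1 = t.min(), t.max()
    bins = ((t - t0) / max(t1 - t0, 1e-9) * num_bins).astype(np.int64)
    bins = np.clip(bins, 0, num_bins - 1)

    tensor = np.zeros((num_bins, 2, height, width), dtype=np.float32)
    # Accumulate event counts per (slice, polarity, pixel).
    np.add.at(tensor, (bins, p, y, x), 1.0)
    return tensor

# Example: 10k synthetic events on a 180x240 sensor, 8 temporal slices.
rng = np.random.default_rng(0)
ev = np.stack([rng.integers(0, 240, 10000),
               rng.integers(0, 180, 10000),
               np.sort(rng.random(10000)),
               rng.choice([-1.0, 1.0], 10000)], axis=1)
frames = events_to_tensor(ev, height=180, width=240, num_bins=8)
print(frames.shape)  # (8, 2, 180, 240)
```

Keeping a separate channel per polarity preserves the ON/OFF distinction, which frame-based backbones typically exploit.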
Related papers
- Dynamic Subframe Splitting and Spatio-Temporal Motion Entangled Sparse Attention for RGB-E Tracking [32.86991031493605]
Event-based bionic cameras capture dynamic scenes with high temporal resolution and high dynamic range.
We propose a dynamic event subframe splitting strategy to split the event stream into more fine-grained event clusters.
Based on this, we design an event-based sparse attention mechanism to enhance the interaction of event features in temporal and spatial dimensions.
arXiv Detail & Related papers (2024-09-26T06:12:08Z)
- Rethinking Efficient and Effective Point-based Networks for Event Camera Classification and Regression: EventMamba [11.400397931501338]
Event cameras efficiently detect changes in ambient light with low latency and high dynamic range while consuming minimal power.
Most current approaches to processing event data involve converting it into frame-based representations.
Point clouds are a popular representation for 3D processing and are better suited to the sparse and asynchronous nature of the event camera.
We propose EventMamba, an efficient and effective Point Cloud framework that achieves competitive results even compared to the state-of-the-art (SOTA) frame-based method.
arXiv Detail & Related papers (2024-05-09T21:47:46Z)
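EventMamba's architecture is not reproduced here; the sketch below only shows the point-cloud view of event data that such methods start from: each event becomes a normalized (x, y, t) point with polarity as a per-point feature, subsampled to a fixed size as point-based backbones usually require. Function names and the fixed-size sampling are illustrative assumptions.

```python
import numpy as np

def events_to_point_cloud(events, height, width):
    """View events as a 3D point cloud (x, y, t) instead of frames.

    events: (N, 4) array with columns (x, y, t, polarity). Coordinates
    are normalized to [0, 1]^3 so spatial and temporal axes are
    comparable; polarity is kept as a per-point feature. This is the
    generic encoding point-based methods consume, not EventMamba itself.
    """
    xyz = np.empty((len(events), 3), dtype=np.float32)
    xyz[:, 0] = events[:, 0] / width
    xyz[:, 1] = events[:, 1] / height
    t = events[:, 2]
    xyz[:, 2] = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    feats = events[:, 3:4].astype(np.float32)  # polarity feature per point
    return xyz, feats

def sample_fixed_size(xyz, feats, n_points, rng):
    # Point-based backbones usually expect a fixed point count, so
    # subsample (or resample with replacement when there are too few events).
    idx = rng.choice(len(xyz), size=n_points, replace=len(xyz) < n_points)
    return xyz[idx], feats[idx]
```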
- Fast Window-Based Event Denoising with Spatiotemporal Correlation Enhancement [85.66867277156089]
We propose window-based event denoising, which simultaneously deals with a stack of events.
In the spatial domain, we use maximum a posteriori (MAP) estimation to discriminate real-world events from noise.
Our algorithm can remove event noise effectively and efficiently and improve the performance of downstream tasks.
arXiv Detail & Related papers (2024-02-14T15:56:42Z)
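The MAP discriminator from the entry above is not reproduced; as a crude stand-in for its underlying intuition (real events are correlated with nearby recent events, while noise is not), this sketch keeps an event only when enough neighbors fired recently in a small spatiotemporal window. All thresholds and names are illustrative.

```python
import numpy as np

def denoise_window(events, height, width, radius=1, dt=0.01, min_support=2):
    """Keep events supported by >= min_support neighbors within a
    (2*radius+1)^2 spatial patch that fired in the last dt seconds.

    A crude correlation filter, not the paper's MAP discriminator.
    events: (N, 4) array, columns (x, y, t, polarity), sorted by t.
    """
    last_t = np.full((height, width), -np.inf)  # most recent event per pixel
    keep = np.zeros(len(events), dtype=bool)
    for i, (x, y, t, _) in enumerate(events):
        xi, yi = int(x), int(y)
        y0, y1 = max(yi - radius, 0), min(yi + radius + 1, height)
        x0, x1 = max(xi - radius, 0), min(xi + radius + 1, width)
        patch = last_t[y0:y1, x0:x1]
        # Count neighbors that fired within dt before this event.
        keep[i] = np.count_nonzero(t - patch < dt) >= min_support
        last_t[yi, xi] = t
    return events[keep]
```

The filter is causal (it only looks backward in time), so it can run on a streaming window rather than the full recording.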
- Representation Learning on Event Stream via an Elastic Net-incorporated Tensor Network [1.9515859963221267]
We present a novel representation method that captures the global correlations of all events in the event stream simultaneously.
Our method achieves effective results in applications such as noise filtering, compared with state-of-the-art methods.
arXiv Detail & Related papers (2024-01-16T02:51:47Z)
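The elastic-net tensor network itself is not shown here; the sketch below substitutes a much simpler technique in the same spirit, exploiting global correlations by truncating the SVD of a time-unfolded event-count tensor. It is a generic low-rank baseline under assumed inputs, not the paper's method.

```python
import numpy as np

def low_rank_event_denoise(tensor, rank):
    """Crude global-correlation baseline: unfold a (T, H, W) event-count
    tensor along time, keep the top-`rank` singular components, and fold
    back. A stand-in for (and far simpler than) tensor-network methods
    with elastic-net regularization.
    """
    T, H, W = tensor.shape
    M = tensor.reshape(T, H * W)                    # mode-1 unfolding
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    M_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]    # rank-truncated fold
    return M_hat.reshape(T, H, W)
```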
- An Event-Oriented Diffusion-Refinement Method for Sparse Events Completion [36.64856578682197]
Event cameras or dynamic vision sensors (DVS) record asynchronous response to brightness changes instead of conventional intensity frames.
We propose an event completion approach that conforms to the unique characteristics of event data in both the processing stage and the output form.
Specifically, we treat event streams as 3D event clouds in the temporal domain, develop a diffusion-based generative model to generate dense clouds in a coarse-to-fine manner, and recover exact timestamps to maintain the temporal resolution of the raw data.
arXiv Detail & Related papers (2024-01-06T08:09:54Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- GET: Group Event Transformer for Event-Based Vision [82.312736707534]
Event cameras are a type of novel neuromorphic sensor that has been gaining increasing attention.
We propose a novel Group-based vision Transformer backbone for Event-based vision, called Group Event Transformer (GET).
GET decouples temporal-polarity information from spatial information throughout the feature extraction process.
arXiv Detail & Related papers (2023-10-04T08:02:33Z)
- EvDNeRF: Reconstructing Event Data with Dynamic Neural Radiance Fields [80.94515892378053]
EvDNeRF is a pipeline for generating event data and training an event-based dynamic NeRF.
NeRFs offer geometric-based learnable rendering, but prior work with events has only considered reconstruction of static scenes.
We show that by training on varied batch sizes of events, we can improve test-time predictions of events at fine time resolutions.
arXiv Detail & Related papers (2023-10-03T21:08:41Z)
- Event-Free Moving Object Segmentation from Moving Ego Vehicle [88.33470650615162]
Moving object segmentation (MOS) in dynamic scenes is an important, challenging, but under-explored research topic for autonomous driving.
Most segmentation methods leverage motion cues obtained from optical flow maps.
We propose to exploit event cameras, which provide rich motion cues without relying on optical flow, for better video understanding.
arXiv Detail & Related papers (2023-04-28T23:43:10Z)
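The segmentation network from the entry above is not reproduced; the sketch below only illustrates why events carry motion cues without optical flow: pixels on moving structure fire many events in a short window. Under a moving ego vehicle the static background fires as well, which is exactly what the paper's learned method must disentangle; the threshold here is an illustrative assumption.

```python
import numpy as np

def event_motion_mask(events, height, width, thresh=3):
    """Per-pixel event counts in a time window as a cheap motion cue.

    Pixels that fire many events likely lie on moving structure or
    edges. NOT the paper's MOS method, which learns far richer cues;
    `thresh` is purely an illustrative assumption.
    """
    counts = np.zeros((height, width), dtype=np.int32)
    ys = events[:, 1].astype(np.int64)
    xs = events[:, 0].astype(np.int64)
    np.add.at(counts, (ys, xs), 1)  # accumulate counts at event pixels
    return counts >= thresh         # boolean motion-evidence mask
```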
- Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for Event-based Object Tracking [87.0297771292994]
We propose an Event-based Tracking-by-Detection (ETD) method for generic bounding box-based object tracking.
To achieve this goal, we present an Adaptive Time-Surface with Linear Time Decay (ATSLTD) event-to-frame conversion algorithm.
We compare the proposed ETD method with seven popular object tracking methods, based on conventional or event cameras, and two variants of ETD.
arXiv Detail & Related papers (2020-02-13T15:58:31Z)
- A Differentiable Recurrent Surface for Asynchronous Event-Based Data [19.605628378366667]
We propose Matrix-LSTM, a grid of Long Short-Term Memory (LSTM) cells that efficiently process events and learn end-to-end task-dependent event-surfaces.
Compared to existing reconstruction approaches, our learned event-surface shows good flexibility and performs well on optical flow estimation.
It improves the state-of-the-art of event-based object classification on the N-Cars dataset.
arXiv Detail & Related papers (2020-01-10T14:09:40Z)
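Matrix-LSTM's learned event surface is not reproduced here; as a classical, non-learned analogue, the sketch below builds an exponentially decaying time surface in which each pixel's brightness reflects how recently it last fired. Matrix-LSTM replaces this fixed decay rule with a per-pixel LSTM trained end-to-end; `tau` and all names are illustrative.

```python
import numpy as np

def exponential_time_surface(events, height, width, tau=0.05, t_ref=None):
    """Hand-crafted event surface: each pixel stores
    exp(-(t_ref - t_last) / tau), so recent activity is bright and
    stale pixels decay toward zero. The classical, non-learned
    analogue of a learned event surface like Matrix-LSTM's.

    events: (N, 4) array, columns (x, y, t, polarity), sorted by t.
    """
    last_t = np.full((height, width), -np.inf)  # last firing time per pixel
    for x, y, t, _ in events:
        last_t[int(y), int(x)] = t
    if t_ref is None:
        t_ref = events[-1, 2]                   # reference = latest event
    return np.exp(-(t_ref - last_t) / tau)      # exp(-inf) -> 0 if never fired
```

The fixed time constant `tau` is the key limitation a learned, task-dependent surface removes.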