Improved Regularization of Event-based Learning by Reversing and
Drifting
- URL: http://arxiv.org/abs/2207.11659v1
- Date: Sun, 24 Jul 2022 04:23:56 GMT
- Title: Improved Regularization of Event-based Learning by Reversing and
Drifting
- Authors: Haibo Shen, Yihao Luo, Xiang Cao, Liangqi Zhang, Juyu Xiao, Tianjiang
Wang
- Abstract summary: Event cameras have enormous potential in challenging scenes thanks to their high temporal resolution, high dynamic range, low power consumption, and absence of motion blur.
We propose two novel augmentation methods: EventReverse and EventDrift.
- Score: 4.736525128377909
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Event cameras have enormous potential in challenging scenes for their
advantages of high temporal resolution, high dynamic range, low power
consumption, and absence of motion blur. However, event-based learning is hindered by
insufficient generalization ability. In this paper, we first analyze the
influence of different brightness variations on event data. Then we propose two
novel augmentation methods: EventReverse and EventDrift. By reversing and
drifting events to their corresponding positions in the spatiotemporal or
polarity domain, the proposed methods generate samples affected by different
brightness variations, which improves the robustness of event-based learning
and results in a better generalization. Extensive experiments on N-CARS,
N-Caltech101 and CIFAR10-DVS datasets demonstrate that our method is general
and remarkably effective.
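The abstract describes generating augmented samples by reversing events and drifting them to corresponding positions in the spatiotemporal or polarity domain. The following is a minimal sketch of what such augmentations might look like on raw event tuples; the function names, the (x, y, t, p) array layout, and the parameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def event_reverse(events, duration):
    """Sketch of a reverse-style augmentation: mirror events in time
    and flip their polarity.

    events: (N, 4) float array of (x, y, t, p) with p in {0, 1}.
    duration: length of the recording window containing the events.
    """
    out = events.copy()
    out[:, 2] = duration - out[:, 2]   # mirror timestamps within the window
    out[:, 3] = 1.0 - out[:, 3]       # flip ON <-> OFF polarity
    return out[np.argsort(out[:, 2])]  # restore ascending temporal order

def event_drift(events, shift, sensor_size):
    """Sketch of a drift-style augmentation: shift events spatially by
    `shift` pixels, clipping coordinates to the sensor bounds."""
    out = events.copy()
    out[:, 0] = np.clip(out[:, 0] + shift[0], 0, sensor_size[0] - 1)
    out[:, 1] = np.clip(out[:, 1] + shift[1], 0, sensor_size[1] - 1)
    return out
```

In practice such transforms would be applied on the fly during training, so each epoch sees samples corresponding to different brightness-variation conditions.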
Related papers
- E2VIDiff: Perceptual Events-to-Video Reconstruction using Diffusion Priors [44.430588804079555]
We introduce diffusion models to events-to-video reconstruction, achieving colorful, realistic, and perceptually superior video generation from achromatic events.
Our approach can produce diverse, realistic frames with faithfulness to the given events.
arXiv Detail & Related papers (2024-07-11T07:10:58Z)
- EventZoom: A Progressive Approach to Event-Based Data Augmentation for Enhanced Neuromorphic Vision [9.447299017563841]
Dynamic Vision Sensors (DVS) capture event data with high temporal resolution and low power consumption.
Event data augmentation serves as an essential method for overcoming the limitations of scale and diversity in event datasets.
arXiv Detail & Related papers (2024-05-29T08:39:31Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z)
- Generalizing Event-Based Motion Deblurring in Real-World Scenarios [62.995994797897424]
Event-based motion deblurring has shown promising results by exploiting low-latency events.
We propose a scale-aware network that allows flexible input spatial scales and enables learning from different temporal scales of motion blur.
A two-stage self-supervised learning scheme is then developed to fit real-world data distribution.
arXiv Detail & Related papers (2023-08-11T04:27:29Z)
- Event-based Simultaneous Localization and Mapping: A Comprehensive Survey [52.73728442921428]
This survey reviews event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
The paper categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z)
- Ev-TTA: Test-Time Adaptation for Event-Based Object Recognition [7.814941658661939]
Ev-TTA is a simple, effective test-time adaptation method for event-based object recognition.
Our formulation can be successfully applied regardless of input representations and extended into regression tasks.
arXiv Detail & Related papers (2022-03-23T07:43:44Z)
- EventDrop: data augmentation for event-based learning [0.3670422696827526]
EventDrop is a new method for augmenting asynchronous event data to improve the generalization of deep models.
From a practical perspective, EventDrop is simple to implement and computationally low-cost.
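The summary notes that EventDrop is simple and computationally cheap. A minimal sketch of the core idea, randomly discarding a fraction of events so the model sees degraded variants of each sample, might look like the following; the function name, the `ratio` parameter, and the purely random selection strategy are assumptions, not the paper's exact method.

```python
import numpy as np

def event_drop(events, rng, ratio=0.1):
    """Sketch of an EventDrop-style augmentation: randomly drop a
    fraction `ratio` of events from an (N, 4) (x, y, t, p) array.

    rng: a numpy Generator, so the augmentation is reproducible.
    """
    keep = rng.random(len(events)) >= ratio  # True for surviving events
    return events[keep]
```

Because the transform only indexes into the event array, it adds negligible overhead to a training loop.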
arXiv Detail & Related papers (2021-06-07T11:53:14Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras report brightness changes as a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation [53.850686395708905]
Event-based cameras record an asynchronous stream of per-pixel brightness changes.
In this paper, we focus on single-layer architectures for representation learning from event data.
We show improvements of up to 9% in recognition accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:40:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.