EventDrop: data augmentation for event-based learning
- URL: http://arxiv.org/abs/2106.05836v1
- Date: Mon, 7 Jun 2021 11:53:14 GMT
- Title: EventDrop: data augmentation for event-based learning
- Authors: Fuqiang Gu, Weicong Sng, Xuke Hu, Fangwen Yu
- Abstract summary: EventDrop is a new method for augmenting asynchronous event data to improve the generalization of deep models.
From a practical perspective, EventDrop is simple to implement and computationally low-cost.
- Score: 0.3670422696827526
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advantages of event-sensing over conventional sensors (e.g., higher
dynamic range, lower time latency, and lower power consumption) have spurred
research into machine learning for event data. Unsurprisingly, deep learning
has emerged as a competitive methodology for learning with event sensors; in
typical setups, discrete and asynchronous events are first converted into
frame-like tensors on which standard deep networks can be applied. However,
over-fitting remains a challenge, particularly since event datasets remain
small relative to conventional datasets (e.g., ImageNet). In this paper, we
introduce EventDrop, a new method for augmenting asynchronous event data to
improve the generalization of deep models. By dropping events selected with
various strategies, we are able to increase the diversity of training data
(e.g., to simulate various levels of occlusion). From a practical perspective,
EventDrop is simple to implement and computationally low-cost. Experiments on
two event datasets (N-Caltech101 and N-Cars) demonstrate that EventDrop can
significantly improve the generalization performance across a variety of deep
networks.
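Since the abstract describes the overall pipeline (events are dropped with some strategy, then converted into frame-like tensors for a standard deep network), below is a minimal NumPy sketch of what such an augmentation might look like. It assumes events stored as (x, y, t, p) rows; the function names, drop ratios, sensor size, and the two-channel count-frame encoding are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np


def random_drop(events: np.ndarray, drop_ratio: float = 0.2) -> np.ndarray:
    """Drop a random fraction of events from an (N, 4) array of (x, y, t, p) rows."""
    keep = np.random.rand(len(events)) >= drop_ratio  # Bernoulli keep mask
    return events[keep]


def drop_by_area(events: np.ndarray, sensor_size=(240, 180), area_ratio: float = 0.25) -> np.ndarray:
    """Remove all events inside a random rectangle to mimic partial occlusion."""
    w, h = sensor_size
    box_w, box_h = max(int(w * area_ratio), 1), max(int(h * area_ratio), 1)
    x0 = np.random.randint(0, w - box_w + 1)
    y0 = np.random.randint(0, h - box_h + 1)
    x, y = events[:, 0], events[:, 1]
    inside = (x >= x0) & (x < x0 + box_w) & (y >= y0) & (y < y0 + box_h)
    return events[~inside]


def event_drop(events: np.ndarray) -> np.ndarray:
    """Apply one randomly chosen augmentation (or none) to a single training sample."""
    choice = np.random.randint(3)
    if choice == 0:
        return events  # identity: keep the raw stream
    if choice == 1:
        return random_drop(events, drop_ratio=np.random.uniform(0.1, 0.5))
    return drop_by_area(events, area_ratio=np.random.uniform(0.1, 0.3))


def events_to_frame(events: np.ndarray, sensor_size=(240, 180)) -> np.ndarray:
    """Accumulate events into a 2-channel (per-polarity) count frame, one common frame-like encoding."""
    w, h = sensor_size
    frame = np.zeros((2, h, w), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    p = events[:, 3].astype(int)
    np.add.at(frame, (p, y, x), 1.0)  # unbuffered scatter-add of event counts
    return frame


if __name__ == "__main__":
    # Synthetic stream of 10k events on a 240x180 sensor: columns are x, y, t, p.
    ev = np.column_stack([
        np.random.randint(0, 240, 10000),  # x coordinate
        np.random.randint(0, 180, 10000),  # y coordinate
        np.sort(np.random.rand(10000)),    # normalized timestamps
        np.random.randint(0, 2, 10000),    # polarity (0 or 1)
    ])
    aug = event_drop(ev)
    print(ev.shape, "->", aug.shape, "frame:", events_to_frame(aug).shape)
```

Because events are discarded before the tensor conversion, an augmentation of this kind adds little overhead per sample, which is consistent with the abstract's claim that EventDrop is simple to implement and computationally low-cost.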
Related papers
- Scalable Event-by-event Processing of Neuromorphic Sensory Signals With Deep State-Space Models [2.551844666707809]
Event-based sensors are well suited for real-time processing.
Current methods either collapse events into frames or cannot scale up when processing the event data directly event-by-event.
arXiv Detail & Related papers (2024-04-29T08:50:27Z)
- Improving Event Definition Following For Zero-Shot Event Detection [66.27883872707523]
Existing approaches on zero-shot event detection usually train models on datasets annotated with known event types.
We aim to improve zero-shot event detection by training models to better follow event definitions.
arXiv Detail & Related papers (2024-03-05T01:46:50Z)
- Event Stream-based Visual Object Tracking: A High-Resolution Benchmark Dataset and A Novel Baseline [38.42400442371156]
Existing works either utilize aligned RGB and event data for accurate tracking or directly learn an event-based tracker.
We propose a novel hierarchical knowledge distillation framework that can fully utilize multi-modal / multi-view information during training to facilitate knowledge transfer.
We propose the first large-scale high-resolution ($1280 \times 720$) dataset named EventVOT. It contains 1141 videos and covers a wide range of categories such as pedestrians, vehicles, UAVs, ping pong, etc.
arXiv Detail & Related papers (2023-09-26T01:42:26Z)
- EventBind: Learning a Unified Representation to Bind Them All for Event-based Open-world Understanding [7.797154022794006]
EventBind is a novel framework that unleashes the potential of vision-language models (VLMs) for event-based recognition.
We first introduce a novel event encoder that subtly models the temporal information from events.
We then design a text encoder that generates content prompts and utilizes hybrid text prompts to enhance EventBind's generalization ability.
arXiv Detail & Related papers (2023-08-06T15:05:42Z)
- EventMix: An Efficient Augmentation Strategy for Event-Based Data [4.8416725611508244]
Event cameras can provide high dynamic range and low-energy event stream data.
However, event stream data is smaller in scale and more difficult to obtain than traditional frame-based data.
This paper proposes an efficient data augmentation strategy for event stream data: EventMix.
arXiv Detail & Related papers (2022-05-24T13:07:33Z)
- PILED: An Identify-and-Localize Framework for Few-Shot Event Detection [79.66042333016478]
In our study, we employ cloze prompts to elicit event-related knowledge from pretrained language models.
We minimize the number of type-specific parameters, enabling our model to quickly adapt to event detection tasks for new types.
arXiv Detail & Related papers (2022-02-15T18:01:39Z)
- Adversarial Attack for Asynchronous Event-based Data [0.19580473532948398]
We generate adversarial examples and then train robust models for event-based data for the first time.
Our algorithm achieves an attack success rate of 97.95% on the N-Caltech101 dataset.
arXiv Detail & Related papers (2021-12-27T06:23:43Z)
- Robust Event Classification Using Imperfect Real-world PMU Data [58.26737360525643]
We study robust event classification using imperfect real-world phasor measurement unit (PMU) data.
We develop a novel machine learning framework for training robust event classifiers.
arXiv Detail & Related papers (2021-10-19T17:41:43Z)
- Learning Constraints and Descriptive Segmentation for Subevent Detection [74.48201657623218]
We propose an approach to learning and enforcing constraints that capture dependencies between subevent detection and EventSeg prediction.
We adopt Rectifier Networks for constraint learning and then convert the learned constraints to a regularization term in the loss function of the neural model.
arXiv Detail & Related papers (2021-09-13T20:50:37Z)
- Event-Related Bias Removal for Real-time Disaster Events [67.2965372987723]
Social media has become an important tool to share information about crisis events such as natural disasters and mass attacks.
Detecting actionable posts that contain useful information requires rapid analysis of huge volumes of data in real time.
We train an adversarial neural model to remove latent event-specific biases and improve the performance on tweet importance classification.
arXiv Detail & Related papers (2020-11-02T02:03:07Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.