Ev-TTA: Test-Time Adaptation for Event-Based Object Recognition
- URL: http://arxiv.org/abs/2203.12247v1
- Date: Wed, 23 Mar 2022 07:43:44 GMT
- Title: Ev-TTA: Test-Time Adaptation for Event-Based Object Recognition
- Authors: Junho Kim, Inwoo Hwang, and Young Min Kim
- Abstract summary: Ev-TTA is a simple, effective test-time adaptation algorithm for event-based object recognition.
Our formulation can be applied regardless of the input representation and extended to regression tasks.
- Score: 7.814941658661939
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Ev-TTA, a simple, effective test-time adaptation algorithm for
event-based object recognition. While event cameras are designed to provide
measurements of scenes with fast motion or drastic illumination changes, many
existing event-based recognition algorithms suffer from performance
deterioration under extreme conditions due to significant domain shifts. Ev-TTA
mitigates the severe domain gaps by fine-tuning the pre-trained classifiers
during the test phase using loss functions inspired by the spatio-temporal
characteristics of events. Since the event data is a temporal stream of
measurements, our loss function enforces similar predictions for adjacent
events to quickly adapt to the changed environment online. Also, we utilize the
spatial correlations between two polarities of events to handle noise under
extreme illumination, where different polarities of events exhibit distinctive
noise distributions. Ev-TTA demonstrates large performance gains on a wide
range of event-based object recognition tasks without extensive additional
training. Our formulation can be applied regardless of the input representation
and further extended to regression tasks. We expect Ev-TTA to provide a key
technique for deploying event-based vision algorithms in challenging real-world
applications where significant domain shift is inevitable.
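As a concrete illustration of the temporal consistency idea described above, the minimal sketch below (PyTorch) pulls together the predictions on two adjacent slices of the same event stream with a symmetric KL term and updates the pre-trained classifier online. The slice construction, the optimizer, and the exact loss form are assumptions for illustration; the paper's actual objectives, including its polarity-based noise handling, may differ in detail.

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(logits_a, logits_b):
    """Symmetric KL between predictions on two temporally adjacent
    event slices: both slices should yield similar class posteriors.
    Illustrative stand-in for the paper's loss, not its exact form."""
    log_p_a = F.log_softmax(logits_a, dim=1)
    log_p_b = F.log_softmax(logits_b, dim=1)
    kl_ab = F.kl_div(log_p_b, log_p_a.exp(), reduction="batchmean")
    kl_ba = F.kl_div(log_p_a, log_p_b.exp(), reduction="batchmean")
    return kl_ab + kl_ba

def tta_step(model, optimizer, slice_a, slice_b):
    """One online adaptation step on a single test stream: split the
    event stream into two adjacent slices, enforce prediction
    similarity, and update the classifier in place at test time."""
    model.train()  # let normalization statistics adapt online
    optimizer.zero_grad()
    loss = temporal_consistency_loss(model(slice_a), model(slice_b))
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, restricting updates to a small parameter subset (e.g., normalization layers) is a common way to keep such online adaptation stable, though the paper's exact adaptation recipe may differ.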
Related papers
- Path-adaptive Spatio-Temporal State Space Model for Event-based Recognition with Arbitrary Duration [9.547947845734992]
Event cameras are bio-inspired sensors that capture intensity changes asynchronously and output event streams.
We present a novel framework, dubbed PAST-Act, exhibiting superior capacity in recognizing events with arbitrary duration.
We also build a minute-level event-based recognition dataset, named ArDVS100, with arbitrary duration for the benefit of the community.
arXiv Detail & Related papers (2024-09-25T14:08:37Z)
- EventZoom: A Progressive Approach to Event-Based Data Augmentation for Enhanced Neuromorphic Vision [9.447299017563841]
Dynamic Vision Sensors (DVS) capture event data with high temporal resolution and low power consumption.
Event data augmentation serves as an essential method for overcoming the limitations of scale and diversity in event datasets; a generic augmentation sketch follows this entry.
arXiv Detail & Related papers (2024-05-29T08:39:31Z)
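To give a concrete feel for what event-data augmentation can involve, here is a minimal generic sketch: it randomly flips event coordinates horizontally and jitters timestamps on raw (x, y, t, p) tuples. The array layout, the `jitter_us` parameter, and the operations themselves are illustrative assumptions, not EventZoom's progressive method.

```python
import numpy as np

def augment_events(events, width, jitter_us=100.0, rng=None):
    """Generic event augmentation on an (N, 4) array of
    (x, y, t, p) tuples: horizontal flip plus temporal jitter.
    An illustrative baseline, not EventZoom itself."""
    rng = rng or np.random.default_rng()
    out = events.astype(np.float64).copy()
    if rng.random() < 0.5:                       # random horizontal flip
        out[:, 0] = (width - 1) - out[:, 0]
    out[:, 2] += rng.normal(0.0, jitter_us, size=len(out))  # jitter t
    return out[np.argsort(out[:, 2])]            # restore temporal order
```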
- Fast Window-Based Event Denoising with Spatiotemporal Correlation Enhancement [85.66867277156089]
We propose window-based event denoising, which simultaneously deals with a stack of events.
In the spatial domain, we use maximum a posteriori (MAP) estimation to discriminate real-world events from noise.
Our algorithm removes event noise effectively and efficiently, improving the performance of downstream tasks; a toy MAP-style sketch follows this entry.
arXiv Detail & Related papers (2024-02-14T15:56:42Z)
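As a rough illustration of MAP-flavored event denoising (not the paper's learned model), the sketch below scores each event by the number of spatiotemporally neighboring events and keeps it when enough neighbors support it; the window radii and threshold are assumed values standing in for a likelihood-ratio test.

```python
import numpy as np

def map_denoise(events, r_xy=1, r_t=2000.0, min_neighbors=1):
    """Toy denoising on an (N, 4) array of (x, y, t, p): keep an
    event when enough neighbors fall inside a small spatiotemporal
    window. The neighbor count stands in for a MAP likelihood test;
    this is illustrative, not the paper's method."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    keep = np.zeros(len(events), dtype=bool)
    for i in range(len(events)):
        near = (np.abs(x - x[i]) <= r_xy) & \
               (np.abs(y - y[i]) <= r_xy) & \
               (np.abs(t - t[i]) <= r_t)
        keep[i] = near.sum() - 1 >= min_neighbors  # exclude self
    return events[keep]
```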
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- V2CE: Video to Continuous Events Simulator [1.1009908861287052]
We present a novel method for video-to-event stream conversion from multiple perspectives, considering the specific characteristics of the Dynamic Vision Sensor (DVS).
A series of carefully designed timestamp losses helps enhance the quality of generated event voxels significantly.
We also propose a novel local dynamic-aware inference strategy to accurately recover event timestamps from event voxels in a continuous fashion.
arXiv Detail & Related papers (2023-09-16T06:06:53Z)
- Abnormal Event Detection via Hypergraph Contrastive Learning [54.80429341415227]
Abnormal event detection plays an important role in many real applications.
In this paper, we study the unsupervised abnormal event detection problem in an Attributed Heterogeneous Information Network.
A novel hypergraph contrastive learning method, named AEHCL, is proposed to fully capture abnormal event patterns.
arXiv Detail & Related papers (2023-04-02T08:23:20Z)
- Temporal Up-Sampling for Asynchronous Events [0.0]
In low-brightness or slow-moving scenes, events are often sparse and accompanied by noise.
We propose an event temporal up-sampling algorithm to generate more effective and reliable events.
Experimental results show that up-sampled events provide more effective information and improve the performance of downstream tasks; a generic interpolation sketch follows this entry.
arXiv Detail & Related papers (2022-08-18T09:12:08Z)
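As a generic illustration of temporal up-sampling (not the paper's algorithm), the sketch below inserts a midpoint event between consecutive same-pixel, same-polarity events that fall within a short time window; the `max_gap_us` window and the midpoint rule are assumptions.

```python
import numpy as np

def upsample_events(events, max_gap_us=5000.0):
    """Toy temporal up-sampling on an (N, 4) array of (x, y, t, p)
    sorted by t: insert a midpoint event between consecutive events
    at the same pixel with the same polarity. Illustrative only."""
    inserted = []
    last = {}  # (x, y, p) -> most recent timestamp at that key
    for x, y, t, p in events:
        key = (x, y, p)
        if key in last and t - last[key] <= max_gap_us:
            inserted.append((x, y, 0.5 * (t + last[key]), p))
        last[key] = t
    if not inserted:
        return events
    dense = np.vstack([events, np.array(inserted)])
    return dense[np.argsort(dense[:, 2])]
```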
- Learning Constraints and Descriptive Segmentation for Subevent Detection [74.48201657623218]
We propose an approach to learning and enforcing constraints that capture dependencies between subevent detection and EventSeg prediction.
We adopt Rectifier Networks for constraint learning and then convert the learned constraints into a regularization term in the loss function of the neural model; a hedged sketch of this pattern follows this entry.
arXiv Detail & Related papers (2021-09-13T20:50:37Z)
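To make the constraint-as-regularizer idea concrete, here is a hedged sketch: a small learned rectifier layer scores joint (subevent, segmentation) prediction probabilities, and its violation score is added to the task loss as a penalty. The network shape, `lambda_c`, and the ReLU-based violation form are assumptions in the general spirit of Rectifier Networks, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class ConstraintPenalty(nn.Module):
    """Learned-constraint regularizer: a linear 'rectifier' layer maps
    joint prediction probabilities to constraint scores; positive
    scores are treated as violations. Shapes are illustrative."""
    def __init__(self, n_subevent, n_segment, n_constraints=8):
        super().__init__()
        self.rectifier = nn.Linear(n_subevent + n_segment, n_constraints)

    def forward(self, subevent_probs, segment_probs):
        joint = torch.cat([subevent_probs, segment_probs], dim=-1)
        violation = torch.relu(self.rectifier(joint))  # 0 when satisfied
        return violation.sum(dim=-1).mean()

def total_loss(task_loss, penalty, lambda_c=0.1):
    # regularized objective: task loss plus weighted constraint penalty
    return task_loss + lambda_c * penalty
```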
- Unsupervised Domain Adaptation for Spatio-Temporal Action Localization [69.12982544509427]
Spatio-temporal action localization is an important problem in computer vision.
We propose an end-to-end unsupervised domain adaptation algorithm.
We show that significant performance gains can be achieved when spatial and temporal features are adapted separately or jointly.
arXiv Detail & Related papers (2020-10-19T04:25:10Z)
- A Background-Agnostic Framework with Adversarial Training for Abnormal Event Detection in Video [120.18562044084678]
Abnormal event detection in video is a complex computer vision problem that has attracted significant attention in recent years.
We propose a background-agnostic framework that learns from training videos containing only normal events.
arXiv Detail & Related papers (2020-08-27T18:39:24Z)