Bridging the Gap between Events and Frames through Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2109.02618v1
- Date: Mon, 6 Sep 2021 17:31:37 GMT
- Title: Bridging the Gap between Events and Frames through Unsupervised Domain
Adaptation
- Authors: Nico Messikommer, Daniel Gehrig, Mathias Gehrig, Davide Scaramuzza
- Abstract summary: We propose a task transfer method that allows models to be trained directly with labeled images and unlabeled event data.
We leverage the generative event model to split event features into content and motion features.
Our approach unlocks the vast amount of existing image datasets for the training of event-based neural networks.
- Score: 57.22705137545853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras are novel sensors with outstanding properties such as high
temporal resolution and high dynamic range. Despite these characteristics,
event-based vision has been held back by the shortage of labeled datasets due
to the novelty of event cameras. To overcome this drawback, we propose a task
transfer method that allows models to be trained directly with labeled images
and unlabeled event data. Compared to previous approaches, (i) our method
transfers from single images to events instead of high frame rate videos, and
(ii) does not rely on paired sensor data. To achieve this, we leverage the
generative event model to split event features into content and motion
features. This feature split enables to efficiently match the latent space for
events and images, which is crucial for a successful task transfer. Thus, our
approach unlocks the vast amount of existing image datasets for the training of
event-based neural networks. Our task transfer method consistently outperforms
methods applicable in the Unsupervised Domain Adaptation setting for object
detection by 0.26 mAP (a 93% increase) and in classification by 2.7% accuracy.
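As a rough illustration of the feature-split idea described in the abstract, the sketch below shows an event encoder whose output channels are divided into a content part (to be aligned with image features) and a motion part. All module names, shapes, and the alignment loss are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a content/motion feature split for event-to-image task
# transfer (hypothetical; not the authors' code).
import torch
import torch.nn as nn

class SplitEventEncoder(nn.Module):
    def __init__(self, in_ch=5, content_ch=64, motion_ch=64):
        super().__init__()
        # Small convolutional backbone; the real encoder is an assumption here.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, content_ch + motion_ch, 3, stride=2, padding=1),
        )
        self.content_ch = content_ch

    def forward(self, event_tensor):
        feat = self.backbone(event_tensor)
        # Split channels: content is shared with images, motion is event-only.
        content = feat[:, :self.content_ch]
        motion = feat[:, self.content_ch:]
        return content, motion

def alignment_loss(img_feat, event_content):
    # Placeholder latent-matching loss: align global feature statistics.
    return torch.mean((img_feat.mean(dim=(2, 3)) - event_content.mean(dim=(2, 3))) ** 2)

# Usage with dummy tensors (a 5-channel voxel grid is an assumed event input).
enc = SplitEventEncoder()
content, motion = enc(torch.rand(2, 5, 128, 128))
```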
Related papers
- Event Camera Data Dense Pre-training [10.918407820258246]
This paper introduces a self-supervised learning framework designed for pre-training neural networks tailored to dense prediction tasks using event camera data.
For training our framework, we curate a synthetic event camera dataset featuring diverse scene and motion patterns.
arXiv Detail & Related papers (2023-11-20T04:36:19Z)
- Dual Memory Aggregation Network for Event-Based Object Detection with Learnable Representation [79.02808071245634]
Event-based cameras are bio-inspired sensors that capture the brightness changes of every pixel in an asynchronous manner.
Event streams are divided into grids in x-y-t coordinates for both positive and negative polarities, producing a set of pillars as a 3D tensor representation.
Long memory is encoded in the hidden state of adaptive convLSTMs, while short memory is modeled by computing the spatio-temporal correlation between event pillars.
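The x-y-t grid construction lends itself to a short sketch. The following is a minimal, assumed discretization of a raw event stream into per-polarity pillars; the bin count and shapes are illustrative, not the paper's exact representation.

```python
# Assumed discretization of events into x-y-t pillars with separate
# positive/negative polarity channels (shapes and bins are illustrative).
import numpy as np

def events_to_pillars(x, y, t, p, H, W, num_bins=10):
    """x, y: int pixel coordinates; t: timestamps; p: polarity in {-1, +1}."""
    grid = np.zeros((2, num_bins, H, W), dtype=np.float32)
    # Map timestamps to temporal bin indices in [0, num_bins - 1].
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    b = np.clip((t_norm * num_bins).astype(int), 0, num_bins - 1)
    ch = (p > 0).astype(int)  # channel 0: negative events, 1: positive events
    np.add.at(grid, (ch, b, y, x), 1.0)  # accumulate event counts
    return grid  # shape (2, num_bins, H, W): pillars per polarity

# Example: 1000 random events on a 64x64 sensor.
n = 1000
pillars = events_to_pillars(
    np.random.randint(0, 64, n), np.random.randint(0, 64, n),
    np.sort(np.random.rand(n)), np.random.choice([-1, 1], n), H=64, W=64)
```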
arXiv Detail & Related papers (2023-03-17T12:12:41Z)
- Masked Event Modeling: Self-Supervised Pretraining for Event Cameras [41.263606382601886]
Masked Event Modeling (MEM) is a self-supervised framework for events.
MEM pretrains a neural network on unlabeled events, which can originate from any event camera recording.
Our method reaches state-of-the-art classification accuracy across three datasets.
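A masked-reconstruction pretraining recipe of this kind can be sketched as below; the patch masking, stand-in network, and loss are assumptions about the general approach rather than MEM's actual architecture.

```python
# Assumed masked-reconstruction pretraining step on event histograms
# (stand-in network and loss; not MEM's actual design).
import torch
import torch.nn as nn

def random_patch_mask(x, patch=16, ratio=0.75):
    # x: (B, C, H, W) event histogram; hide a random subset of patches.
    B, _, H, W = x.shape
    keep = torch.rand(B, 1, H // patch, W // patch, device=x.device) > ratio
    keep = keep.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    return x * keep, keep

# Stand-in encoder/decoder; the real network is an assumption here.
net = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 3, padding=1),
)

events = torch.rand(4, 2, 128, 128)  # dummy batch: 2 polarity channels
masked, keep = random_patch_mask(events)
recon = net(masked)
# Reconstruction loss computed only on the hidden (masked-out) regions.
loss = ((recon - events) ** 2 * (~keep)).mean()
loss.backward()
```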
arXiv Detail & Related papers (2022-12-20T15:49:56Z)
- Passive Non-line-of-sight Imaging for Moving Targets with an Event Camera [0.0]
Non-line-of-sight (NLOS) imaging is an emerging technique for detecting objects behind obstacles or around corners.
Recent studies on passive NLOS mainly focus on steady-state measurement and reconstruction methods.
We propose a novel event-based passive NLOS imaging method.
arXiv Detail & Related papers (2022-09-27T10:56:14Z)
- ESS: Learning Event-based Semantic Segmentation from Still Images [48.37422967330683]
Event-based semantic segmentation is still in its infancy due to the novelty of the sensor and the lack of high-quality, labeled datasets.
We introduce ESS, which transfers the semantic segmentation task from existing labeled image datasets to unlabeled events via unsupervised domain adaptation (UDA).
To spur further research in event-based semantic segmentation, we introduce DSEC-Semantic, the first large-scale event-based dataset with fine-grained labels.
arXiv Detail & Related papers (2022-03-18T15:30:01Z)
- MEFNet: Multi-scale Event Fusion Network for Motion Deblurring [62.60878284671317]
Traditional frame-based cameras inevitably suffer from motion blur due to long exposure times.
As a kind of bio-inspired camera, the event camera records the intensity changes in an asynchronous way with high temporal resolution.
In this paper, we rethink the event-based image deblurring problem and unfold it into an end-to-end two-stage image restoration network.
arXiv Detail & Related papers (2021-11-30T23:18:35Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
- Learning to Detect Objects with a 1 Megapixel Event Camera [14.949946376335305]
Event cameras encode visual information with high temporal precision, low data-rate, and high-dynamic range.
Due to the novelty of the field, the performance of event-based systems on many vision tasks still lags behind that of conventional frame-based solutions.
arXiv Detail & Related papers (2020-09-28T16:03:59Z)
- Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation [53.850686395708905]
Event-based cameras record an asynchronous stream of per-pixel brightness changes.
In this paper, we focus on single-layer architectures for representation learning from event data.
We show improvements of up to 9% in recognition accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:40:03Z)