Un-EvMoSeg: Unsupervised Event-based Independent Motion Segmentation
- URL: http://arxiv.org/abs/2312.00114v1
- Date: Thu, 30 Nov 2023 18:59:32 GMT
- Title: Un-EvMoSeg: Unsupervised Event-based Independent Motion Segmentation
- Authors: Ziyun Wang, Jinyuan Guo, Kostas Daniilidis
- Abstract summary: Event cameras are a novel type of biologically inspired vision sensor known for their high temporal resolution, high dynamic range, and low power consumption.
We propose the first event framework that generates IMO pseudo-labels using geometric constraints.
Due to its unsupervised nature, our method can handle an arbitrary, not predetermined number of objects and is easily scalable to datasets where expensive IMO labels are not readily available.
- Score: 33.21922177483246
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Event cameras are a novel type of biologically inspired vision sensor known
for their high temporal resolution, high dynamic range, and low power
consumption. Because of these properties, they are well-suited for processing
fast motions that require rapid reactions. Although event cameras have recently
shown competitive performance in unsupervised optical flow estimation,
performance in detecting independently moving objects (IMOs) lags behind,
even though event-based methods are well suited for this task given their low
latency and HDR properties. Previous approaches to event-based IMO segmentation
have been heavily dependent on labeled data. However, biological vision systems
have developed the ability to avoid moving objects through daily tasks without
being given explicit labels. In this work, we propose the first event framework
that generates IMO pseudo-labels using geometric constraints. Due to its
unsupervised nature, our method can handle an arbitrary, not predetermined
number of objects and is easily scalable to datasets where expensive IMO
labels are not readily available. We evaluate our approach on the EVIMO dataset
and show that it performs competitively with supervised methods, both
quantitatively and qualitatively.
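As a rough illustration of how geometric constraints can yield IMO pseudo-labels without manual annotation, the sketch below thresholds the residual between the observed per-pixel optical flow and the flow explained by camera ego-motion; the function name, inputs, and threshold value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def imo_pseudo_labels(observed_flow, ego_flow, threshold=1.0):
    """Minimal sketch: flag pixels whose observed flow deviates from the flow
    explained by camera ego-motion as independently moving (IMO).

    observed_flow, ego_flow: (H, W, 2) arrays of per-pixel flow in pixels.
    threshold: residual magnitude (pixels) above which a pixel is labeled IMO.
    Inputs and the fixed threshold are assumptions for illustration only.
    """
    residual = np.linalg.norm(observed_flow - ego_flow, axis=-1)  # (H, W)
    return (residual > threshold).astype(np.uint8)                # binary pseudo-label mask
```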
Related papers
- Motion Segmentation for Neuromorphic Aerial Surveillance [42.04157319642197]
Event cameras offer superior temporal resolution, superior dynamic range, and minimal power requirements.
Unlike traditional frame-based sensors that capture redundant information at fixed intervals, event cameras asynchronously record pixel-level brightness changes.
We introduce a novel motion segmentation method that leverages self-supervised vision transformers on both event data and optical flow information.
arXiv Detail & Related papers (2024-05-24T04:36:13Z)
- Tracking-Assisted Object Detection with Event Cameras [16.408606403997005]
Event-based object detection has recently garnered attention in the computer vision community.
However, feature asynchronism and sparsity cause objects to become invisible when they have no relative motion to the camera.
In this paper, we consider those invisible objects as pseudo-occluded objects.
We exploit tracking strategies for pseudo-occluded objects to maintain their permanence and retain their bounding boxes.
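Under loose assumptions, the idea of retaining boxes for pseudo-occluded objects can be sketched with a tiny tracker that keeps the last bounding box of tracks that stop receiving detections for a fixed grace period; the greedy IoU matching and parameter values below are placeholders, not the paper's actual tracking pipeline.

```python
from dataclasses import dataclass

@dataclass
class Track:
    box: tuple          # (x1, y1, x2, y2), last confirmed bounding box
    missed: int = 0     # consecutive frames without a matching detection

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def update_tracks(tracks, detections, iou_thr=0.3, max_missed=10):
    """Greedy IoU matching; unmatched tracks are treated as pseudo-occluded
    and their last boxes are retained until max_missed frames elapse."""
    unmatched = list(detections)
    for t in tracks:
        best = max(unmatched, key=lambda d: iou(t.box, d), default=None)
        if best is not None and iou(t.box, best) >= iou_thr:
            t.box, t.missed = best, 0
            unmatched.remove(best)
        else:
            t.missed += 1                      # pseudo-occluded: keep the old box
    tracks = [t for t in tracks if t.missed <= max_missed]
    tracks += [Track(box=d) for d in unmatched]  # start new tracks
    return tracks
```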
arXiv Detail & Related papers (2024-03-27T08:11:25Z)
- SpikeMOT: Event-based Multi-Object Tracking with Sparse Motion Features [52.213656737672935]
SpikeMOT is an event-based multi-object tracker that uses spiking neural networks to extract sparse spatiotemporal features from event streams associated with objects.
arXiv Detail & Related papers (2023-09-29T05:13:43Z)
- FEDORA: Flying Event Dataset fOr Reactive behAvior [9.470870778715689]
Event-based sensors have emerged as low-latency, low-energy alternatives to standard frame-based cameras for capturing high-speed motion.
We present the Flying Event Dataset fOr Reactive behAvior (FEDORA), a fully synthetic dataset for perception tasks.
arXiv Detail & Related papers (2023-05-22T22:59:05Z)
- Event-Free Moving Object Segmentation from Moving Ego Vehicle [88.33470650615162]
Moving object segmentation (MOS) in dynamic scenes is an important, challenging, but under-explored research topic for autonomous driving.
Most segmentation methods leverage motion cues obtained from optical flow maps.
We propose to exploit event cameras, which provide rich motion cues without relying on optical flow, for better video understanding.
arXiv Detail & Related papers (2023-04-28T23:43:10Z)
- Event-based Simultaneous Localization and Mapping: A Comprehensive Survey [52.73728442921428]
The survey reviews event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
It categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep-learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z)
- ESS: Learning Event-based Semantic Segmentation from Still Images [48.37422967330683]
Event-based semantic segmentation is still in its infancy due to the novelty of the sensor and the lack of high-quality, labeled datasets.
We introduce ESS, which transfers the semantic segmentation task from existing labeled image datasets to unlabeled events via unsupervised domain adaptation (UDA).
To spur further research in event-based semantic segmentation, we introduce DSEC-Semantic, the first large-scale event-based dataset with fine-grained labels.
arXiv Detail & Related papers (2022-03-18T15:30:01Z)
- Moving Object Detection for Event-based vision using Graph Spectral Clustering [6.354824287948164]
Moving object detection has been a central topic of discussion in computer vision for its wide range of applications.
We present an unsupervised Graph Spectral Clustering technique for Moving Object Detection in Event-based data.
We additionally show how the optimum number of moving objects can be automatically determined.
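A minimal sketch of this idea, assuming events are given as normalized (x, y, t) points: build a k-NN affinity over the spatio-temporal point cloud, pick the number of clusters with the eigengap of the graph Laplacian, and run spectral clustering. The affinity choice, parameter values, and the sklearn/scipy-based pipeline are assumptions for illustration, not the cited paper's exact algorithm.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import SpectralClustering
from sklearn.neighbors import kneighbors_graph

def cluster_events(events, max_k=6, n_neighbors=10):
    """events: (N, 3) array of normalized (x, y, t) event coordinates.
    Returns the estimated number of clusters and per-event labels."""
    # Symmetric k-NN affinity over the spatio-temporal point cloud.
    A = kneighbors_graph(events, n_neighbors=n_neighbors, mode="connectivity")
    A = 0.5 * (A + A.T)

    # Eigengap heuristic on the normalized graph Laplacian to pick the number
    # of clusters; dense eigendecomposition is acceptable only for small sketches.
    L = laplacian(A, normed=True)
    eigvals = np.sort(np.linalg.eigvalsh(L.toarray()))[: max_k + 1]
    k = max(2, int(np.argmax(np.diff(eigvals))) + 1)  # at least objects + background

    labels = SpectralClustering(
        n_clusters=k, affinity="precomputed", assign_labels="discretize"
    ).fit_predict(A)
    return k, labels
```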
arXiv Detail & Related papers (2021-09-30T10:19:22Z)
- Bridging the Gap between Events and Frames through Unsupervised Domain Adaptation [57.22705137545853]
We propose a task transfer method that allows models to be trained directly with labeled images and unlabeled event data.
We leverage the generative event model to split event features into content and motion features.
Our approach unlocks the vast amount of existing image datasets for the training of event-based neural networks.
arXiv Detail & Related papers (2021-09-06T17:31:37Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)