Moving Object Detection for Event-based vision using Graph Spectral
Clustering
- URL: http://arxiv.org/abs/2109.14979v1
- Date: Thu, 30 Sep 2021 10:19:22 GMT
- Title: Moving Object Detection for Event-based vision using Graph Spectral
Clustering
- Authors: Anindya Mondal, Shashant R, Jhony H. Giraldo, Thierry Bouwmans, Ananda
S. Chowdhury
- Abstract summary: Moving object detection has been a central topic of discussion in computer vision for its wide range of applications.
We present an unsupervised Graph Spectral Clustering technique for Moving Object Detection in Event-based data.
We additionally show how the optimum number of moving objects can be automatically determined.
- Score: 6.354824287948164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Moving object detection has been a central topic of discussion in computer
vision for its wide range of applications, such as self-driving cars, video
surveillance, security, and enforcement. Neuromorphic Vision Sensors (NVS) are
bio-inspired sensors that mimic the working of the human eye. Unlike
conventional frame-based cameras, these sensors capture a stream of
asynchronous 'events' that offer multiple advantages over the former, like high
dynamic range, low latency, low power consumption, and reduced motion blur.
However, these advantages come at a high cost, as the event camera data
typically contains more noise and has low resolution. Moreover, as event-based
cameras can only capture the relative changes in brightness of a scene, event
data do not contain usual visual information (like texture and color) as
available in video data from normal cameras. So, moving object detection in
event-based cameras becomes an extremely challenging task. In this paper, we
present an unsupervised Graph Spectral Clustering technique for Moving Object
Detection in Event-based data (GSCEventMOD). We additionally show how the
optimum number of moving objects can be automatically determined. Experimental
comparisons on publicly available datasets show that the proposed GSCEventMOD
algorithm outperforms a number of state-of-the-art techniques by a maximum
margin of 30%.
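The abstract does not spell out GSCEventMOD's graph construction or model-selection rule, so the following is only a minimal illustrative sketch of the general approach: treat events as (x, y, t) points, build a k-nearest-neighbor similarity graph, pick the number of clusters with the standard eigengap heuristic on the graph Laplacian, and run spectral clustering. All function names and parameters here are assumptions, not the paper's implementation.

```python
# Hedged sketch of spectral clustering on event data (NOT the paper's exact
# GSCEventMOD pipeline). Events are (x, y, t) points; a k-NN connectivity
# graph stands in for the paper's unspecified graph construction.
import numpy as np
from numpy.linalg import eigvalsh
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import SpectralClustering
from sklearn.neighbors import kneighbors_graph


def estimate_num_clusters(events, k=10, max_clusters=6):
    """Eigengap heuristic: take the number of clusters as the position of
    the largest gap among the smallest normalized-Laplacian eigenvalues."""
    A = kneighbors_graph(events, n_neighbors=k, mode="connectivity")
    A = 0.5 * (A + A.T)                       # symmetrize the k-NN graph
    L = laplacian(A, normed=True).toarray()
    eigs = np.sort(eigvalsh(L))[: max_clusters + 1]
    return int(np.argmax(np.diff(eigs))) + 1


def cluster_events(events, n_clusters):
    """Assign each event to a moving-object cluster via spectral clustering."""
    sc = SpectralClustering(
        n_clusters=n_clusters,
        affinity="nearest_neighbors",
        n_neighbors=10,
        random_state=0,
    )
    return sc.fit_predict(events)
```

As a usage sketch: feeding in two well-separated bursts of events, `estimate_num_clusters` returns 2, and `cluster_events` assigns each burst its own label.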
Related papers
- Distractor-aware Event-based Tracking [45.07711356111249]
We propose a distractor-aware event-based tracker (named DANet) that introduces transformer modules into a Siamese network architecture.
Our model is mainly composed of a motion-aware network and a target-aware network, which simultaneously exploits both motion cues and object contours from event data.
Our DANet can be trained in an end-to-end manner without any post-processing and can run at over 80 FPS on a single V100.
arXiv Detail & Related papers (2023-10-22T05:50:20Z)
- SpikeMOT: Event-based Multi-Object Tracking with Sparse Motion Features [52.213656737672935]
SpikeMOT is an event-based multi-object tracker.
SpikeMOT uses spiking neural networks to extract sparse spatiotemporal features from event streams associated with objects.
arXiv Detail & Related papers (2023-09-29T05:13:43Z)
- Event-Free Moving Object Segmentation from Moving Ego Vehicle [88.33470650615162]
Moving object segmentation (MOS) in dynamic scenes is an important, challenging, but under-explored research topic for autonomous driving.
Most segmentation methods leverage motion cues obtained from optical flow maps.
We propose to exploit event cameras for better video understanding, which provide rich motion cues without relying on optical flow.
arXiv Detail & Related papers (2023-04-28T23:43:10Z)
- Dual Memory Aggregation Network for Event-Based Object Detection with Learnable Representation [79.02808071245634]
Event-based cameras are bio-inspired sensors that capture brightness change of every pixel in an asynchronous manner.
Event streams are divided into grids in the x-y-t coordinates for both positive and negative polarity, producing a set of pillars as 3D tensor representation.
Long memory is encoded in the hidden state of adaptive convLSTMs while short memory is modeled by computing spatial-temporal correlation between event pillars.
arXiv Detail & Related papers (2023-03-17T12:12:41Z)
- PL-EVIO: Robust Monocular Event-based Visual Inertial Odometry with Point and Line Features [3.6355269783970394]
Event cameras are motion-activated sensors that capture pixel-level illumination changes instead of the intensity image with a fixed frame rate.
We propose a robust, highly accurate, and real-time optimization-based monocular event-based visual-inertial odometry (VIO) method.
arXiv Detail & Related papers (2022-09-25T06:14:12Z)
- Moving Object Detection for Event-based Vision using k-means Clustering [0.0]
Moving object detection is a crucial task in computer vision.
Event-based cameras are bio-inspired cameras that work by mimicking the working of the human eye.
In this paper, we investigate the application of the k-means clustering technique in detecting moving objects in event-based data.
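The abstract gives no preprocessing or feature details, so here is only a minimal sketch of the general idea: run k-means directly on event (x, y) coordinates so each cluster center stands in for a detected moving object. The function name and parameters are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: k-means on raw event pixel coordinates as a simple
# stand-in for the paper's (unspecified) clustering pipeline.
import numpy as np
from sklearn.cluster import KMeans


def detect_objects_kmeans(events_xy, k):
    """Cluster event (x, y) coordinates into k groups; each cluster
    center approximates one moving object's location."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(events_xy)
    return km.labels_, km.cluster_centers_
```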
arXiv Detail & Related papers (2021-09-04T14:43:14Z)
- TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset [50.8779574716494]
Event cameras are bio-inspired vision sensors which measure per pixel brightness changes.
They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic range, high temporal resolution and low power consumption.
To foster the development of 3D perception and navigation algorithms with event cameras, we present the TUM-VIE dataset.
arXiv Detail & Related papers (2021-08-16T19:53:56Z)
- VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows [93.54888104118822]
We propose a large-scale Visible-Event benchmark (termed VisEvent) due to the lack of a realistic and scaled dataset for this task.
Our dataset consists of 820 video pairs captured under low illumination, high speed, and background clutter scenarios.
Based on VisEvent, we transform the event flows into event images and construct more than 30 baseline methods.
arXiv Detail & Related papers (2021-08-11T03:55:12Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects in data acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
- Learning to Detect Objects with a 1 Megapixel Event Camera [14.949946376335305]
Event cameras encode visual information with high temporal precision, low data-rate, and high-dynamic range.
Due to the novelty of the field, the performance of event-based systems on many vision tasks is still lower compared to conventional frame-based solutions.
arXiv Detail & Related papers (2020-09-28T16:03:59Z)
- End-to-end Learning of Object Motion Estimation from Retinal Events for Event-based Object Tracking [35.95703377642108]
We propose a novel deep neural network to learn and regress a parametric object-level motion/transform model for event-based object tracking.
To achieve this goal, we propose a synchronous Time-Surface with Linear Time Decay representation.
We feed the sequence of TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform end-to-end 5-DoF object motion regression.
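A Time-Surface with Linear Time Decay can be sketched roughly as below. The abstract does not give the exact encoding (per-polarity channels, window length, normalization), so this single-channel linear-decay surface is an illustrative assumption, not the paper's TSLTD definition.

```python
# Hedged sketch of a linear-decay time surface: each pixel holds a value
# that decays linearly with the age of its most recent event, reaching 0
# once the event is older than the decay window.
import numpy as np


def tsltd_frame(events, width, height, t_ref, window):
    """Build one time-surface frame from (x, y, t) events observed up to
    time t_ref, with a linear decay over `window` seconds."""
    surf = np.zeros((height, width))
    for x, y, t in events:
        if t <= t_ref:
            decay = max(0.0, 1.0 - (t_ref - t) / window)
            # The most recent event at a pixel dominates older ones.
            surf[int(y), int(x)] = max(surf[int(y), int(x)], decay)
    return surf
```

For example, with a 0.1 s window, an event 0.05 s old contributes 0.5 at its pixel, while an event exactly 0.1 s old has fully decayed to 0.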
arXiv Detail & Related papers (2020-02-14T08:19:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.