Event-based Motion Segmentation with Spatio-Temporal Graph Cuts
- URL: http://arxiv.org/abs/2012.08730v2
- Date: Sat, 27 Mar 2021 00:08:17 GMT
- Title: Event-based Motion Segmentation with Spatio-Temporal Graph Cuts
- Authors: Yi Zhou, Guillermo Gallego, Xiuyuan Lu, Siqi Liu, and Shaojie Shen
- Abstract summary: We have developed a method to identify independently moving objects acquired with an event-based camera.
The method performs on par with or better than the state of the art without having to predetermine the number of expected moving objects.
- Score: 51.17064599766138
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Identifying independently moving objects is an essential task for dynamic
scene understanding. However, traditional cameras used in dynamic scenes may
suffer from motion blur or exposure artifacts due to their sampling principle.
By contrast, event-based cameras are novel bio-inspired sensors that offer
advantages to overcome such limitations. They report pixel-wise intensity
changes asynchronously, which enables them to acquire visual information at
exactly the same rate as the scene dynamics. We have developed a method to
identify independently moving objects acquired with an event-based camera,
i.e., to solve the event-based motion segmentation problem. This paper
describes how to formulate the problem as a weakly-constrained multi-model
fitting one via energy minimization, and how to jointly solve its two
subproblems -- event-cluster assignment (labeling) and motion model fitting --
in an iterative manner, by exploiting the spatio-temporal structure of input
events in the form of a space-time graph. Experiments on available datasets
demonstrate the versatility of the method in scenes with different motion
patterns and numbers of moving objects. The evaluation shows that the method
performs on par with or better than the state of the art without having to
predetermine the number of expected moving objects.
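
To make the energy-minimization formulation concrete, the following is the standard shape of such weakly-constrained multi-model fitting objectives (a sketch consistent with the abstract, not a formula quoted from the paper): a per-event data term measuring how well a motion model explains an event, plus a smoothness term over the edges of the space-time graph.

```latex
E(L) = \sum_{i} D\!\left(e_i, M_{\ell_i}\right)
     + \lambda \sum_{(i,j)\in\mathcal{N}} V\!\left(\ell_i, \ell_j\right)
```

Here $e_i$ is an event, $\ell_i$ its cluster label, $M_{\ell_i}$ the motion model of that cluster, and $\mathcal{N}$ the edge set of the space-time graph; $D$, $V$, and $\lambda$ are the data cost, the pairwise label penalty, and the smoothness weight. The paper's exact terms may differ (e.g., an additional penalty on the number of models).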
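As a hedged illustration of the alternation between the two subproblems, here is a minimal Python sketch. It is not the authors' implementation: the motion model is simplified to a constant image-plane velocity per cluster, the graph-cut labeling step is replaced by a greedy ICM-style update with a k-nearest-neighbour smoothness vote, and the initialization is random for brevity. All function names (fit_models, data_cost, segment) and all thresholds are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_models(xy, t, labels, models):
    """Re-fit each cluster's constant image-plane velocity by regressing
    event coordinates on time; keep the old model if a cluster is small."""
    models = models.copy()
    for m in range(len(models)):
        sel = labels == m
        if sel.sum() < 10:                    # too few events to fit reliably
            continue
        A = np.stack([np.ones(sel.sum()), t[sel]], axis=1)
        cx, *_ = np.linalg.lstsq(A, xy[sel, 0], rcond=None)
        cy, *_ = np.linalg.lstsq(A, xy[sel, 1], rcond=None)
        models[m] = [cx[1], cy[1]]            # keep the slope (velocity) terms
    return models

def data_cost(xy, t, models, t_ref=0.0, k=8):
    """Cost of assigning each event to each model: events that follow a
    model align after warping to t_ref, so their mean distance to the k
    nearest warped events is small (a crude density/contrast proxy)."""
    cost = np.zeros((len(t), len(models)))
    for m, v in enumerate(models):
        warped = xy - np.outer(t - t_ref, v)  # undo motion v for all events
        d, _ = cKDTree(warped).query(warped, k=k + 1)
        cost[:, m] = d[:, 1:].mean(axis=1)    # drop the zero self-distance
    return cost

def segment(xy, t, n_models=3, n_iters=10, smooth=0.5, k_nn=8, v_scale=100.0):
    """Alternate model fitting and labeling. The paper solves the labeling
    with graph cuts on a space-time graph; a greedy ICM-style update with
    a k-NN neighbour vote stands in for that step here."""
    rng = np.random.default_rng(0)
    models = rng.normal(scale=v_scale, size=(n_models, 2))  # random init
    labels = np.argmin(data_cost(xy, t, models), axis=1)
    st = np.column_stack([xy, 50.0 * t])      # scaled space-time coordinates
    _, nbrs = cKDTree(st).query(st, k=k_nn + 1)
    for _ in range(n_iters):
        models = fit_models(xy, t, labels, models)
        cost = data_cost(xy, t, models)
        onehot = np.eye(n_models)[labels]
        agree = onehot[nbrs[:, 1:]].mean(axis=1)   # neighbour label vote
        labels = np.argmin(cost + smooth * (1.0 - agree), axis=1)
    return labels, models
```

With events given as arrays xy of shape (N, 2) and t of shape (N,), `labels, models = segment(xy, t)` returns a per-event cluster label and a per-cluster velocity. The 50.0 time scaling that balances space against time in the neighbour graph is an arbitrary illustrative value; the paper's graph-cut solver optimizes the labeling jointly rather than greedily.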
Related papers
- Motion Segmentation for Neuromorphic Aerial Surveillance [42.04157319642197]
Event cameras offer superior temporal resolution, superior dynamic range, and minimal power requirements.
Unlike traditional frame-based sensors that capture redundant information at fixed intervals, event cameras asynchronously record pixel-level brightness changes.
We introduce a novel motion segmentation method that leverages self-supervised vision transformers on both event data and optical flow information.
arXiv Detail & Related papers (2024-05-24T04:36:13Z) - Motion Segmentation from a Moving Monocular Camera [3.115818438802931]
We take advantage of two popular branches of monocular motion segmentation approaches: point trajectory based and optical flow based methods.
We are able to model various complex object motions in different scene structures at once.
Our method shows state-of-the-art performance on the KT3DMoSeg dataset.
arXiv Detail & Related papers (2023-09-24T22:59:05Z) - InstMove: Instance Motion for Object-centric Video Segmentation [70.16915119724757]
In this work, we study the instance-level motion and present InstMove, which stands for Instance Motion for Object-centric Video.
In comparison to pixel-wise motion, InstMove mainly relies on instance-level motion information that is free from image feature embeddings.
With only a few lines of code, InstMove can be integrated into current SOTA methods for three different video segmentation tasks.
arXiv Detail & Related papers (2023-03-14T17:58:44Z) - Event-based Motion Segmentation by Cascaded Two-Level Multi-Model
Fitting [44.97191206895915]
We present a cascaded two-level multi-model fitting method for identifying independently moving objects with a monocular event camera.
Experiments demonstrate the effectiveness and versatility of our method in real-world scenes with different motion patterns and an unknown number of moving objects.
arXiv Detail & Related papers (2021-11-05T12:59:41Z) - NeuralDiff: Segmenting 3D objects that move in egocentric videos [92.95176458079047]
We study the problem of decomposing the observed 3D scene into a static background and a dynamic foreground.
This task is reminiscent of the classic background subtraction problem, but is significantly harder because all parts of the scene, static and dynamic, generate a large apparent motion.
In particular, we consider egocentric videos and further separate the dynamic component into objects and the actor that observes and moves them.
arXiv Detail & Related papers (2021-10-19T12:51:35Z) - Attentive and Contrastive Learning for Joint Depth and Motion Field
Estimation [76.58256020932312]
Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task.
We present a self-supervised learning framework for 3D object motion field estimation from monocular videos.
arXiv Detail & Related papers (2021-10-13T16:45:01Z) - Exposure Trajectory Recovery from Motion Blur [90.75092808213371]
Motion blur in dynamic scenes is an important yet challenging research topic.
In this paper, we define exposure trajectories, which represent the motion information contained in a blurry image.
A novel motion offset estimation framework is proposed to model pixel-wise displacements of the latent sharp image.
arXiv Detail & Related papers (2020-10-06T05:23:33Z) - 0-MMS: Zero-Shot Multi-Motion Segmentation With A Monocular Event Camera [13.39518293550118]
We present an approach for monocular multi-motion segmentation, which combines bottom-up feature tracking and top-down motion compensation into a unified pipeline.
Using the events within a time interval, our method segments the scene into multiple motions by splitting and merging.
The approach was successfully evaluated on both challenging real-world and synthetic scenarios from the EV-IMO, EED, and MOD datasets.
arXiv Detail & Related papers (2020-06-11T02:34:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.