Self-supervised Sparse to Dense Motion Segmentation
- URL: http://arxiv.org/abs/2008.07872v1
- Date: Tue, 18 Aug 2020 11:40:18 GMT
- Title: Self-supervised Sparse to Dense Motion Segmentation
- Authors: Amirhossein Kardoost, Kalun Ho, Peter Ochs, Margret Keuper
- Abstract summary: We propose a self-supervised method to learn the densification of sparse motion segmentations from single video frames.
We evaluate our method on the well-known motion segmentation datasets FBMS59 and DAVIS16.
- Score: 13.888344214818737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Observable motion in videos can give rise to the definition of objects moving
with respect to the scene. The task of segmenting such moving objects is
referred to as motion segmentation and is usually tackled either by aggregating
motion information in long, sparse point trajectories, or by directly producing
per frame dense segmentations relying on large amounts of training data. In
this paper, we propose a self-supervised method to learn the densification of
sparse motion segmentations from single video frames. While previous approaches
to motion segmentation build upon pre-training on large surrogate datasets
and use dense motion information as an essential cue for the pixelwise
segmentation, our model requires no pre-training and operates at test time
on single frames. It can be trained in a sequence-specific way to produce
high-quality dense segmentations from sparse and noisy input. We evaluate our method
on the well-known motion segmentation datasets FBMS59 and DAVIS16.
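The abstract describes the approach only at a high level. As a rough illustration of the sparse-to-dense idea, the sketch below trains a small per-frame network in a sequence-specific way, supervised only at sparsely labeled trajectory points via a partial cross-entropy loss; the architecture, data shapes, and loss choice are illustrative assumptions, not the authors' actual model.

```python
# Hypothetical sketch of sequence-specific sparse-to-dense training.
# The network and data below are stand-ins, not the paper's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DensifierNet(nn.Module):
    """Tiny per-frame segmentation net (illustrative only)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, frame):            # frame: (B, 3, H, W)
        return self.net(frame)           # logits: (B, C, H, W)

# Dummy stand-in for one video: RGB frames plus labels that are -1
# (unlabeled) everywhere except at sparse trajectory points.
frames = torch.rand(8, 3, 64, 64)
labels = torch.full((8, 64, 64), -1, dtype=torch.long)
labels[:, ::8, ::8] = torch.randint(0, 2, (8, 8, 8))

model = DensifierNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(10):                      # sequence-specific fitting
    opt.zero_grad()
    # Cross-entropy evaluated only at the sparsely labeled pixels;
    # ignore_index skips the unlabeled (-1) positions.
    loss = F.cross_entropy(model(frames), labels, ignore_index=-1)
    loss.backward()
    opt.step()

# Test time: densify a single frame directly, with no motion input.
dense_mask = model(frames[:1]).argmax(dim=1)   # (1, 64, 64)
```

The property mirrored here is the one the abstract emphasizes: supervision is sparse and per-sequence, while inference needs only a single frame.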
Related papers
- Appearance-Based Refinement for Object-Centric Motion Segmentation [85.2426540999329]
We introduce an appearance-based refinement method that leverages temporal consistency in video streams to correct inaccurate flow-based proposals.
Our approach involves a sequence-level selection mechanism that identifies accurate flow-predicted masks as exemplars.
Its performance is evaluated on multiple video segmentation benchmarks, including DAVIS, YouTube, SegTrackv2, and FBMS-59.
arXiv Detail & Related papers (2023-12-18T18:59:51Z)
- A Simple Video Segmenter by Tracking Objects Along Axial Trajectories [30.272535124699164]
Video segmentation requires consistently segmenting and tracking objects over time.
Due to the quadratic dependency on input size, directly applying self-attention to video segmentation with high-resolution input features poses significant challenges.
We present Axial-VS, a framework that enhances video segmenters by tracking objects along axial trajectories; a sketch of the underlying axial-attention idea follows this list.
arXiv Detail & Related papers (2023-11-30T13:20:09Z)
- Event-Free Moving Object Segmentation from Moving Ego Vehicle [88.33470650615162]
Moving object segmentation (MOS) in dynamic scenes is an important, challenging, but under-explored research topic for autonomous driving.
Most segmentation methods leverage motion cues obtained from optical flow maps.
We propose to exploit event cameras, which provide rich motion cues without relying on optical flow, for better video understanding.
arXiv Detail & Related papers (2023-04-28T23:43:10Z)
- InstMove: Instance Motion for Object-centric Video Segmentation [70.16915119724757]
In this work, we study instance-level motion and present InstMove, which stands for Instance Motion for Object-centric Video Segmentation.
In comparison to pixel-wise motion, InstMove mainly relies on instance-level motion information that is free from image feature embeddings.
With only a few lines of code, InstMove can be integrated into current SOTA methods for three different video segmentation tasks.
arXiv Detail & Related papers (2023-03-14T17:58:44Z)
- The Emergence of Objectness: Learning Zero-Shot Segmentation from Videos [59.12750806239545]
We show that a video provides different views of the same scene related by moving components, and that the right region segmentation and region flow would allow mutual view synthesis.
Our model starts with two separate pathways: an appearance pathway that outputs feature-based region segmentation for a single image, and a motion pathway that outputs motion features for a pair of images.
By training the model to minimize view synthesis errors based on segment flow, our appearance and motion pathways learn region segmentation and flow estimation automatically without building them up from low-level edges or optical flows respectively.
arXiv Detail & Related papers (2021-11-11T18:59:11Z)
- Learning to Segment Rigid Motions from Two Frames [72.14906744113125]
We propose a modular network, motivated by a geometric analysis of what independent object motions can be recovered from an egomotion field.
It takes two consecutive frames as input and predicts segmentation masks for the background and multiple rigidly moving objects, which are then parameterized by 3D rigid transformations.
Our method achieves state-of-the-art performance for rigid motion segmentation on KITTI and Sintel.
arXiv Detail & Related papers (2021-01-11T04:20:30Z)
- DyStaB: Unsupervised Object Segmentation via Dynamic-Static Bootstrapping [72.84991726271024]
We describe an unsupervised method to detect and segment portions of images of live scenes that are seen moving as a coherent whole.
Our method first partitions the motion field by minimizing the mutual information between segments.
It uses the segments to learn object models that can be used for detection in a static image.
arXiv Detail & Related papers (2020-08-16T22:05:13Z)
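On the quadratic-cost point in the Axial-VS summary above: axial attention is the standard workaround, attending along rows and then along columns so the cost drops from O((HW)^2) to O(HW(H+W)). The minimal PyTorch sketch below illustrates that general idea only; the module names and shapes are assumptions, not Axial-VS's implementation.

```python
# Generic axial self-attention sketch (illustrative; not Axial-VS code).
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (B, H, W, C) feature map.
        b, h, w, c = x.shape
        # Attend along each row: B*H sequences of length W.
        rows = x.reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        # Attend along each column: B*W sequences of length H.
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 2, 1, 3)

feats = torch.rand(1, 32, 32, 64)        # (B, H, W, C) dummy features
out = AxialAttention(dim=64)(feats)      # -> (1, 32, 32, 64)
```

Each attention call only ever sees sequences of length H or W, never H*W, which is what makes high-resolution video features tractable.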