Self-supervised Sparse to Dense Motion Segmentation
- URL: http://arxiv.org/abs/2008.07872v1
- Date: Tue, 18 Aug 2020 11:40:18 GMT
- Title: Self-supervised Sparse to Dense Motion Segmentation
- Authors: Amirhossein Kardoost, Kalun Ho, Peter Ochs, Margret Keuper
- Abstract summary: We propose a self-supervised method to learn the densification of sparse motion segmentations from single video frames.
We evaluate our method on the well-known motion segmentation datasets FBMS59 and DAVIS16.
- Score: 13.888344214818737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Observable motion in videos can give rise to the definition of objects moving
with respect to the scene. The task of segmenting such moving objects is
referred to as motion segmentation and is usually tackled either by aggregating
motion information in long, sparse point trajectories, or by directly producing
per frame dense segmentations relying on large amounts of training data. In
this paper, we propose a self-supervised method to learn the densification of
sparse motion segmentations from single video frames. While previous approaches
towards motion segmentation build upon pre-training on large surrogate datasets
and use dense motion information as an essential cue for the pixelwise
segmentation, our model does not require pre-training and operates at test time
on single frames. It can be trained in a sequence-specific way to produce high
quality dense segmentations from sparse and noisy input. We evaluate our method
on the well-known motion segmentation datasets FBMS59 and DAVIS16.
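As a rough illustration (our own sketch, not the authors' code), the sequence-specific idea can be written as an ordinary segmentation training loop in which the loss is evaluated only at the pixels covered by sparse trajectory labels; the tiny network, toy data, and hyperparameters below are placeholders for the paper's actual architecture and inputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Hypothetical stand-in for the segmentation network (the paper's model
    is more elaborate; this small conv stack is just for illustration)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, x):
        return self.net(x)  # (B, num_classes, H, W) logits

def sparse_ce_loss(logits, sparse_labels):
    """Cross-entropy evaluated only at pixels carrying a sparse trajectory
    label; -1 marks unlabeled pixels, which contribute no gradient."""
    valid = sparse_labels >= 0
    return F.cross_entropy(
        logits.permute(0, 2, 3, 1)[valid],  # (N, num_classes) at labeled pixels
        sparse_labels[valid],               # (N,) labels
    )

# Toy sequence-specific training data: one frame with a handful of labeled
# trajectory points; everything else is unlabeled (-1).
frame = torch.randn(1, 3, 64, 64)
sparse = torch.full((1, 64, 64), -1, dtype=torch.long)
sparse[0, 10:14, 20:24] = 1   # points on a moving object
sparse[0, 40:44, 50:54] = 0   # points on the background

model = TinySegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):  # fit the network to this sequence
    loss = sparse_ce_loss(model(frame), sparse)
    opt.zero_grad()
    loss.backward()
    opt.step()

dense_mask = model(frame).argmax(dim=1)  # dense segmentation of the frame
```

At test time, the trained network densifies a single frame on its own, which is why no dense motion information is needed once training is done.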
Related papers
- Instance-Level Moving Object Segmentation from a Single Image with Events [84.12761042512452]
Moving object segmentation plays a crucial role in understanding dynamic scenes involving multiple moving objects.
Previous methods encounter difficulties in distinguishing whether pixel displacements of an object are caused by camera motion or object motion.
Recent advances exploit the motion sensitivity of event cameras to compensate for the limited motion modeling capability of conventional images.
We propose the first instance-level moving object segmentation framework that integrates complementary texture and motion cues.
arXiv Detail & Related papers (2025-02-18T15:56:46Z)
- Multi-Granularity Video Object Segmentation [36.06127939037613]
We propose a large-scale, densely annotated multi-granularity video object segmentation (MUG-VOS) dataset.
We automatically collected a training set that assists in tracking both salient and non-salient objects, and we also curated a human-annotated test set for reliable evaluation.
In addition, we present memory-based mask propagation model (MMPM), trained and evaluated on MUG-VOS dataset.
arXiv Detail & Related papers (2024-12-02T13:17:41Z)
- Appearance-Based Refinement for Object-Centric Motion Segmentation [85.2426540999329]
We introduce an appearance-based refinement method that leverages temporal consistency in video streams to correct inaccurate flow-based proposals.
Our approach involves a sequence-level selection mechanism that identifies accurate flow-predicted masks as exemplars.
Its performance is evaluated on multiple video segmentation benchmarks, including DAVIS, YouTube, SegTrackv2, and FBMS-59.
arXiv Detail & Related papers (2023-12-18T18:59:51Z)
- A Simple Video Segmenter by Tracking Objects Along Axial Trajectories [30.272535124699164]
Video segmentation requires consistently segmenting and tracking objects over time.
Due to the quadratic dependency on input size, directly applying self-attention to video segmentation with high-resolution input features poses significant challenges.
We present Axial-VS, a framework that enhances video segmenters by tracking objects along axial trajectories (a minimal sketch of this axial factorization appears after this list).
arXiv Detail & Related papers (2023-11-30T13:20:09Z)
- Event-Free Moving Object Segmentation from Moving Ego Vehicle [88.33470650615162]
Moving object segmentation (MOS) in dynamic scenes is an important, challenging, but under-explored research topic for autonomous driving.
Most segmentation methods leverage motion cues obtained from optical flow maps.
We propose to exploit event cameras, which provide rich motion cues without relying on optical flow, for better video understanding.
arXiv Detail & Related papers (2023-04-28T23:43:10Z)
- InstMove: Instance Motion for Object-centric Video Segmentation [70.16915119724757]
In this work, we study the instance-level motion and present InstMove, which stands for Instance Motion for Object-centric Video.
In comparison to pixel-wise motion, InstMove mainly relies on instance-level motion information that is free from image feature embeddings.
With only a few lines of code, InstMove can be integrated into current SOTA methods for three different video segmentation tasks.
arXiv Detail & Related papers (2023-03-14T17:58:44Z)
- Learning to Segment Rigid Motions from Two Frames [72.14906744113125]
We propose a modular network, motivated by a geometric analysis of what independent object motions can be recovered from an egomotion field.
It takes two consecutive frames as input and predicts segmentation masks for the background and multiple rigidly moving objects, which are then parameterized by 3D rigid transformations.
Our method achieves state-of-the-art performance for rigid motion segmentation on KITTI and Sintel.
arXiv Detail & Related papers (2021-01-11T04:20:30Z)
- DyStaB: Unsupervised Object Segmentation via Dynamic-Static Bootstrapping [72.84991726271024]
We describe an unsupervised method to detect and segment portions of images of live scenes that are seen moving as a coherent whole.
Our method first partitions the motion field by minimizing the mutual information between segments.
It uses the segments to learn object models that can be used for detection in a static image.
arXiv Detail & Related papers (2020-08-16T22:05:13Z)
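The Axial-VS entry above notes that self-attention is quadratic in the number of input positions, which is what makes high-resolution video features expensive. As a minimal sketch under our own assumptions (not the paper's actual architecture), attending along rows and columns separately brings the cost from O((HW)^2) down to roughly O(HW * (H + W)):

```python
import torch
import torch.nn as nn

class AxialAttention2d(nn.Module):
    """Illustrative axial attention: full 2D attention over an H*W map is
    factorized into attention along the width axis, then the height axis."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):  # x: (B, H, W, C)
        B, H, W, C = x.shape
        # Width-axis attention: each row is treated as an independent sequence.
        rows = x.reshape(B * H, W, C)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(B, H, W, C)
        # Height-axis attention: each column is treated as an independent sequence.
        cols = x.permute(0, 2, 1, 3).reshape(B * W, H, C)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(B, W, H, C).permute(0, 2, 1, 3)

feat = torch.randn(2, 32, 48, 64)        # (B, H, W, C) feature map
out = AxialAttention2d(dim=64)(feat)
print(out.shape)                          # torch.Size([2, 32, 48, 64])
```

Each axial pass attends over only H or W positions at a time, which is what makes this tractable for high-resolution inputs; how Axial-VS applies this across frames to track objects is described in the paper itself.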