Unsupervised Region-Growing Network for Object Segmentation in
Atmospheric Turbulence
- URL: http://arxiv.org/abs/2311.03572v1
- Date: Mon, 6 Nov 2023 22:17:18 GMT
- Authors: Dehao Qin, Ripon Saha, Suren Jayasuriya, Jinwei Ye and Nianyi Li
- Abstract summary: We present a two-stage unsupervised object segmentation network tailored for dynamic scenes affected by atmospheric turbulence.
In the first stage, we utilize averaged optical flow from turbulence-distorted image sequences to craft preliminary masks for each moving object.
We release the first moving object segmentation dataset of turbulence-affected videos, complete with manually annotated ground truth masks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a two-stage unsupervised foreground object
segmentation network tailored for dynamic scenes affected by atmospheric
turbulence. In the first stage, we utilize averaged optical flow from
turbulence-distorted image sequences to feed a novel region-growing algorithm,
crafting preliminary masks for each moving object in the video. In the second
stage, we employ a U-Net architecture with consistency and grouping losses to
further refine these masks, optimizing their spatio-temporal alignment. Our
approach does not require labeled training data and works across varied
turbulence strengths for long-range video. Furthermore, we release the first
moving object segmentation dataset of turbulence-affected videos, complete with
manually annotated ground truth masks. Our method, evaluated on this new
dataset, demonstrates superior segmentation accuracy and robustness as compared
to current state-of-the-art unsupervised methods.
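The paper's code is not reproduced here; the following is a minimal NumPy sketch of the stage-one idea only — averaging per-frame optical flow over time to suppress turbulence jitter, then growing object masks from high-motion seed pixels. The thresholds, the 4-connected growth rule, and the flow-similarity criterion are illustrative assumptions, not the authors' published algorithm.

```python
import numpy as np
from collections import deque

def average_flow(flows):
    """Average per-frame optical flow (T, H, W, 2) over time.

    Turbulence-induced jitter is roughly zero-mean, so temporal averaging
    attenuates it while consistent object motion survives.
    """
    return np.mean(flows, axis=0)

def region_grow(flow, mag_thresh=1.0, sim_thresh=0.5):
    """Grow a boolean mask from high-magnitude seed pixels.

    Seeds are pixels whose averaged-flow magnitude exceeds mag_thresh;
    4-connected neighbours are absorbed when their flow vector is within
    sim_thresh of the pixel they are grown from.
    """
    h, w, _ = flow.shape
    mag = np.linalg.norm(flow, axis=-1)
    mask = mag > mag_thresh                # seed every strongly moving pixel
    queue = deque(map(tuple, np.argwhere(mask)))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if np.linalg.norm(flow[ny, nx] - flow[y, x]) < sim_thresh:
                    mask[ny, nx] = True    # neighbour moves like the region
                    queue.append((ny, nx))
    return mask

# Toy sequence: a 4x4 block translating with flow (2, 0) on a static background.
flows = np.zeros((4, 8, 8, 2))
flows[:, 2:6, 2:6, 0] = 2.0
mask = region_grow(average_flow(flows))
```

In the real method these per-object preliminary masks are then refined by the stage-two U-Net; the sketch stops at the raw region-growing output.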
Related papers
- Segment Any Motion in Videos [80.72424676419755]
We propose a novel approach for moving object segmentation that combines long-range trajectory motion cues with DINO-based semantic features.
Our model employs Spatio-Temporal Trajectory Attention and Motion-Semantic Decoupled Embedding to prioritize motion while integrating semantic support.
arXiv Detail & Related papers (2025-03-28T09:34:11Z) - Instance-Level Moving Object Segmentation from a Single Image with Events [84.12761042512452]
Moving object segmentation plays a crucial role in understanding dynamic scenes involving multiple moving objects.
Previous methods encounter difficulties in distinguishing whether pixel displacements of an object are caused by camera motion or object motion.
Recent advances exploit the motion sensitivity of novel event cameras to counter conventional images' inadequate motion modeling capabilities.
We propose the first instance-level moving object segmentation framework that integrates complementary texture and motion cues.
arXiv Detail & Related papers (2025-02-18T15:56:46Z) - PEMF-VTO: Point-Enhanced Video Virtual Try-on via Mask-free Paradigm [21.1235226974745]
Video Virtual Try-on aims to seamlessly transfer a reference garment onto a target person in a video.
Existing methods typically rely on inpainting masks to define the try-on area.
We propose PEMF-VTO, a novel Point-Enhanced Mask-Free Video Virtual Try-On framework.
arXiv Detail & Related papers (2024-12-04T04:24:15Z) - Lester: rotoscope animation through video object segmentation and
tracking [0.0]
Lester is a novel method to automatically synthesise retro-style 2D animations from videos.
Video frames are processed with the Segment Anything Model (SAM) and the resulting masks are tracked through subsequent frames with DeAOT.
Results show that the method exhibits excellent temporal consistency and can correctly process videos with different poses and appearances.
arXiv Detail & Related papers (2024-02-15T11:15:54Z) - InstMove: Instance Motion for Object-centric Video Segmentation [70.16915119724757]
In this work, we study the instance-level motion and present InstMove, which stands for Instance Motion for Object-centric Video.
In comparison to pixel-wise motion, InstMove mainly relies on instance-level motion information that is free from image feature embeddings.
With only a few lines of code, InstMove can be integrated into current SOTA methods for three different video segmentation tasks.
arXiv Detail & Related papers (2023-03-14T17:58:44Z) - Towards Robust Video Object Segmentation with Adaptive Object
Calibration [18.094698623128146]
Video object segmentation (VOS) aims at segmenting objects in all target frames of a video, given annotated object masks of reference frames.
We propose a new deep network, which can adaptively construct object representations and calibrate object masks to achieve stronger robustness.
Our model achieves the state-of-the-art performance among existing published works, and also exhibits superior robustness against perturbations.
arXiv Detail & Related papers (2022-07-02T17:51:29Z) - Implicit Motion Handling for Video Camouflaged Object Detection [60.98467179649398]
We propose a new video camouflaged object detection (VCOD) framework.
It can exploit both short-term and long-term temporal consistency to detect camouflaged objects from video frames.
arXiv Detail & Related papers (2022-03-14T17:55:41Z) - Object Propagation via Inter-Frame Attentions for Temporally Stable
Video Instance Segmentation [51.68840525174265]
Video instance segmentation aims to detect, segment, and track objects in a video.
Current approaches extend image-level segmentation algorithms to the temporal domain.
We propose a video instance segmentation method that alleviates the problem due to missing detections.
arXiv Detail & Related papers (2021-11-15T04:15:57Z) - Occlusion-Aware Video Object Inpainting [72.38919601150175]
This paper presents occlusion-aware video object inpainting, which recovers both the complete shape and appearance for occluded objects in videos.
Our technical contribution VOIN jointly performs video object shape completion and occluded texture generation.
For more realistic results, VOIN is optimized using both T-PatchGAN and a new spatio-temporal attention-based multi-class discriminator.
arXiv Detail & Related papers (2021-08-15T15:46:57Z) - Generating Masks from Boxes by Mining Spatio-Temporal Consistencies in
Videos [159.02703673838639]
We introduce a method for generating segmentation masks from per-frame bounding box annotations in videos.
We use our resulting accurate masks for weakly supervised training of video object segmentation (VOS) networks.
The additional data provides substantially better generalization performance leading to state-of-the-art results in both the VOS and more challenging tracking domain.
arXiv Detail & Related papers (2021-01-06T18:56:24Z) - Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.