Semantic Flow-guided Motion Removal Method for Robust Mapping
- URL: http://arxiv.org/abs/2010.06876v1
- Date: Wed, 14 Oct 2020 08:40:16 GMT
- Title: Semantic Flow-guided Motion Removal Method for Robust Mapping
- Authors: Xudong Lv, Boya Wang, Dong Ye, and Shuo Wang
- Abstract summary: We propose a novel motion removal method, leveraging semantic information and optical flow to extract motion regions.
ORB-SLAM2 integrated with the proposed motion removal method achieves the best performance in both indoor and outdoor dynamic environments.
- Score: 7.801798747561309
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Moving objects in scenes remain a severe challenge for SLAM
systems. Many efforts have tried to remove motion regions from images by
detecting moving objects, so that keypoints belonging to those regions are
ignored in later calculations. In this paper, we propose a novel motion
removal method that leverages semantic information and optical flow to
extract motion regions. Unlike previous works, we do not predict moving
objects or motion regions directly from image sequences. Instead, we compute
the rigid optical flow synthesized from depth and pose, and compare it
against the estimated optical flow to obtain initial motion regions. We then
use K-means to refine the motion region masks with instance segmentation
masks. ORB-SLAM2 integrated with the proposed motion removal method achieves
the best performance in both indoor and outdoor dynamic environments.
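The core idea of the abstract (synthesize the flow induced purely by camera motion from depth and relative pose, then flag pixels whose estimated optical flow disagrees with it) can be sketched as below. This is a minimal illustration, not the authors' implementation: the intrinsics, the residual threshold, and the function names are assumptions, and the K-means/instance-mask refinement step from the paper is omitted.

```python
import numpy as np

def rigid_flow(depth, K, T_rel):
    """Synthesize the optical flow induced purely by camera motion.

    depth : (H, W) depth map of frame t
    K     : (3, 3) camera intrinsics (illustrative values)
    T_rel : (4, 4) relative camera pose from frame t to frame t+1
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates, shape 3 x N.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project to 3D using depth, transform by the relative pose,
    # and re-project into the next frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    proj = K @ (T_rel @ pts_h)[:3]
    proj = proj[:2] / np.clip(proj[2:], 1e-6, None)
    # Rigid flow = reprojected position minus original position.
    return (proj - pix[:2]).T.reshape(H, W, 2)

def motion_mask(est_flow, rig_flow, thresh=3.0):
    """Pixels whose estimated flow deviates from the rigid flow by more
    than `thresh` pixels are treated as candidate motion regions.
    (The threshold is an illustrative assumption.)"""
    residual = np.linalg.norm(est_flow - rig_flow, axis=-1)
    return residual > thresh
```

With a static scene and an identity relative pose, the rigid flow is zero everywhere, so any nonzero estimated flow above the threshold is attributed to independently moving objects; the paper then refines this initial mask against instance segmentation masks.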
Related papers
- Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring [71.60457491155451]
Eliminating image blur produced by various kinds of motion has been a challenging problem.
We propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative Filter.
Our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-04-19T19:44:24Z) - DEMOS: Dynamic Environment Motion Synthesis in 3D Scenes via Local Spherical-BEV Perception [54.02566476357383]
We propose the first Dynamic Environment MOtion Synthesis framework (DEMOS) to predict future motion instantly according to the current scene.
We then use it to dynamically update the latent motion for final motion synthesis.
The results show that our method significantly outperforms previous works and handles dynamic environments well.
arXiv Detail & Related papers (2024-03-04T05:38:16Z) - We never go out of Style: Motion Disentanglement by Subspace Decomposition of Latent Space [38.54517335215281]
We propose a novel method to decompose motion in videos by using a pretrained image GAN model.
We discover disentangled motion subspaces in the latent space of widely used style-based GAN models.
We evaluate the disentanglement properties of motion subspaces on face and car datasets.
arXiv Detail & Related papers (2023-06-01T11:18:57Z) - Event-Free Moving Object Segmentation from Moving Ego Vehicle [90.66285408745453]
Moving object segmentation (MOS) in dynamic scenes is challenging for autonomous driving.
Most state-of-the-art methods leverage motion cues obtained from optical flow maps.
We propose to exploit event cameras for better video understanding, which provide rich motion cues without relying on optical flow.
arXiv Detail & Related papers (2023-04-28T23:43:10Z) - MovingParts: Motion-based 3D Part Discovery in Dynamic Radiance Field [42.236015785792965]
We present MovingParts, a NeRF-based method for dynamic scene reconstruction and part discovery.
Under the Lagrangian view, we parameterize the scene motion by tracking the trajectory of particles on objects.
The Lagrangian view makes it convenient to discover parts by factorizing the scene motion as a composition of part-level rigid motions.
arXiv Detail & Related papers (2023-03-10T05:06:30Z) - Self-supervised Motion Learning from Static Images [36.85209332144106]
Motion from Static Images (MoSI) learns to encode motion information.
MoSI can discover regions with large motion even without fine-tuning on the downstream datasets.
arXiv Detail & Related papers (2021-04-01T03:55:50Z) - Learning to Segment Rigid Motions from Two Frames [72.14906744113125]
We propose a modular network, motivated by a geometric analysis of what independent object motions can be recovered from an egomotion field.
It takes two consecutive frames as input and predicts segmentation masks for the background and multiple rigidly moving objects, which are then parameterized by 3D rigid transformations.
Our method achieves state-of-the-art performance for rigid motion segmentation on KITTI and Sintel.
arXiv Detail & Related papers (2021-01-11T04:20:30Z) - Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects in scenes acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z) - Exposure Trajectory Recovery from Motion Blur [90.75092808213371]
Motion blur in dynamic scenes is an important yet challenging research topic.
In this paper, we define exposure trajectories, which represent the motion information contained in a blurry image.
A novel motion offset estimation framework is proposed to model pixel-wise displacements of the latent sharp image.
arXiv Detail & Related papers (2020-10-06T05:23:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.