Semantic Flow-guided Motion Removal Method for Robust Mapping
- URL: http://arxiv.org/abs/2010.06876v1
- Date: Wed, 14 Oct 2020 08:40:16 GMT
- Title: Semantic Flow-guided Motion Removal Method for Robust Mapping
- Authors: Xudong Lv, Boya Wang, Dong Ye, and Shuo Wang
- Abstract summary: We propose a novel motion removal method, leveraging semantic information and optical flow to extract motion regions.
ORB-SLAM2 integrated with the proposed motion removal method achieves the best performance in both indoor and outdoor dynamic environments.
- Score: 7.801798747561309
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Moving objects in a scene remain a severe challenge for SLAM
systems. Many efforts have tried to remove motion regions from images by
detecting moving objects, so that keypoints belonging to those regions are
ignored in later calculations. In this paper, we propose a novel motion
removal method that leverages semantic information and optical flow to
extract motion regions. Unlike previous works, we do not predict moving
objects or motion regions directly from image sequences. Instead, we compute
a rigid optical flow, synthesized from the depth and camera pose, and compare
it against the estimated optical flow to obtain initial motion regions. We
then use K-means, together with instance segmentation masks, to fine-tune the
motion region masks. ORB-SLAM2 integrated with the proposed motion removal
method achieved the best performance in both indoor and outdoor dynamic
environments.
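As a concrete illustration of the pipeline the abstract outlines, here is a minimal NumPy sketch: synthesize the camera-induced (rigid) flow from depth and relative pose, threshold the discrepancy with the estimated flow via a small k=2 K-means, and refine the mask so that whole instances are kept or discarded together. All function names are illustrative, the exact K-means usage is an assumption, and the estimated flow and instance masks are taken as given inputs; this is a sketch of the idea, not the authors' implementation.

```python
import numpy as np

def synthesize_rigid_flow(depth, K, T_rel):
    """Flow induced by camera motion alone: back-project each pixel using
    its depth, move it by the relative pose T_rel, and re-project."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T.astype(float)
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)       # 3 x N points
    pts = T_rel[:3, :3] @ pts + T_rel[:3, 3:4]                # rigid motion
    proj = K @ pts
    proj = proj[:2] / np.clip(proj[2:], 1e-6, None)           # re-project
    return (proj - pix[:2]).T.reshape(H, W, 2)

def kmeans_threshold(res, iters=10):
    """k=2 K-means on the flow residual magnitudes; the midpoint of the two
    cluster centres acts as an adaptive static/moving threshold."""
    c = np.array([res.min(), res.max()], dtype=float)
    for _ in range(iters):
        assign = np.abs(res[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):
                c[k] = res[assign == k].mean()
    return c.mean()

def motion_mask(flow_est, depth, K, T_rel, instance_masks):
    """Initial motion regions from the rigid/estimated flow discrepancy,
    refined so that each instance is kept or discarded as a whole."""
    rigid = synthesize_rigid_flow(depth, K, T_rel)
    res = np.linalg.norm(flow_est - rigid, axis=-1)
    thresh = kmeans_threshold(res.ravel())
    mask = np.zeros(res.shape, dtype=bool)
    for m in instance_masks:          # m: (H, W) boolean instance mask
        if res[m].mean() > thresh:    # the instance moves as a whole
            mask |= m
    return mask
```

In practice, `flow_est` could come from any dense optical-flow estimator and `instance_masks` from any instance segmentation model; ORB-SLAM2 would then simply discard keypoints falling inside the returned mask.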
Related papers
- Investigation of moving objects through atmospheric turbulence from a non-stationary platform [0.5735035463793008]
In this work, we extract the optical flow field corresponding to moving objects from an image sequence captured from a moving camera.
Our procedure first computes the optical flow field and creates a motion model to compensate for the flow field induced by camera motion.
All of the sequences and code used in this work are open source and are available by contacting the authors.
arXiv Detail & Related papers (2024-10-29T00:54:28Z)
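The camera-motion compensation step described in the entry above can be illustrated with a hedged sketch: fit a global parametric motion model to the dense flow field and treat large residuals as independently moving objects. The affine model and the function names are assumptions for illustration; the paper's actual motion model may differ.

```python
import numpy as np

def fit_camera_flow(flow):
    """Least-squares fit of a global affine motion model to a dense flow
    field (an illustrative stand-in for the paper's motion model)."""
    H, W = flow.shape[:2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    A = np.stack([u.ravel(), v.ravel(), np.ones(H * W)], axis=1)   # N x 3
    P, *_ = np.linalg.lstsq(A, flow.reshape(-1, 2), rcond=None)    # 3 x 2
    return (A @ P).reshape(H, W, 2)

def moving_object_mask(flow, thresh=1.0):
    """Subtract the fitted camera-induced flow; large residuals flag
    independently moving objects."""
    residual = np.linalg.norm(flow - fit_camera_flow(flow), axis=-1)
    return residual > thresh
```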
- Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring [71.60457491155451]
Eliminating image blur produced by various kinds of motion has been a challenging problem.
We propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative Filter.
Our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-04-19T19:44:24Z)
- DEMOS: Dynamic Environment Motion Synthesis in 3D Scenes via Local Spherical-BEV Perception [54.02566476357383]
We propose the first Dynamic Environment MOtion Synthesis framework (DEMOS) to predict future motion instantly according to the current scene.
We then use it to dynamically update the latent motion for final motion synthesis.
The results show our method outperforms previous works significantly and has great performance in handling dynamic environments.
arXiv Detail & Related papers (2024-03-04T05:38:16Z) - InstMove: Instance Motion for Object-centric Video Segmentation [70.16915119724757]
In this work, we study the instance-level motion and present InstMove, which stands for Instance Motion for Object-centric Video.
In comparison to pixel-wise motion, InstMove mainly relies on instance-level motion information that is free from image feature embeddings.
With only a few lines of code, InstMove can be integrated into current SOTA methods for three different video segmentation tasks.
arXiv Detail & Related papers (2023-03-14T17:58:44Z) - MovingParts: Motion-based 3D Part Discovery in Dynamic Radiance Field [42.236015785792965]
We present MovingParts, a NeRF-based method for dynamic scene reconstruction and part discovery.
Under the Lagrangian view, we parameterize the scene motion by tracking the trajectory of particles on objects.
The Lagrangian view makes it convenient to discover parts by factorizing the scene motion as a composition of part-level rigid motions.
arXiv Detail & Related papers (2023-03-10T05:06:30Z) - Self-supervised Motion Learning from Static Images [36.85209332144106]
- Self-supervised Motion Learning from Static Images [36.85209332144106]
Motion from Static Images (MoSI) learns to encode motion information.
We demonstrate that MoSI can discover regions with large motion even without fine-tuning on the downstream datasets.
arXiv Detail & Related papers (2021-04-01T03:55:50Z)
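The MoSI entry above does not spell out the mechanism; to my understanding of the paper, it fabricates pseudo-motion by sliding a crop window across a static image, yielding clips with known motion labels for self-supervision. A hedged sketch of that construction (`pseudo_motion_clip` is an illustrative name, not from the paper's code):

```python
import numpy as np

def pseudo_motion_clip(image, n_frames=8, step=(4, 0)):
    """Fabricate a clip with known motion from one static image by sliding
    a fixed-size crop window; `step` = (dy, dx) pixels per frame is the
    self-supervision label."""
    H, W = image.shape[:2]
    ch = H - n_frames * abs(step[0])     # crop height leaving room to slide
    cw = W - n_frames * abs(step[1])     # crop width leaving room to slide
    y = 0 if step[0] >= 0 else H - ch
    x = 0 if step[1] >= 0 else W - cw
    frames = []
    for _ in range(n_frames):
        frames.append(image[y:y + ch, x:x + cw].copy())
        y += step[0]
        x += step[1]
    return np.stack(frames)              # (n_frames, ch, cw, ...) clip
```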
- Learning to Segment Rigid Motions from Two Frames [72.14906744113125]
We propose a modular network, motivated by a geometric analysis of what independent object motions can be recovered from an egomotion field.
It takes two consecutive frames as input and predicts segmentation masks for the background and multiple rigidly moving objects, which are then parameterized by 3D rigid transformations.
Our method achieves state-of-the-art performance for rigid motion segmentation on KITTI and Sintel.
arXiv Detail & Related papers (2021-01-11T04:20:30Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We develop a method to identify independently moving objects in data acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
- Exposure Trajectory Recovery from Motion Blur [90.75092808213371]
Motion blur in dynamic scenes is an important yet challenging research topic.
In this paper, we define exposure trajectories, which represent the motion information contained in a blurry image.
A novel motion offset estimation framework is proposed to model pixel-wise displacements of the latent sharp image.
arXiv Detail & Related papers (2020-10-06T05:23:33Z)
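The exposure-trajectory idea above builds on the classic blur formation model: a blurry frame is the temporal average of the latent sharp image displaced along the exposure trajectory. A minimal sketch with a single global integer-pixel trajectory (the paper recovers pixel-wise, sub-pixel trajectories, which are omitted here):

```python
import numpy as np

def blur_along_trajectory(sharp, offsets):
    """Classic blur formation: average the latent sharp image displaced
    along the exposure trajectory. `offsets` is a list of integer (dy, dx)
    displacements sampled over the exposure time."""
    acc = np.zeros_like(sharp, dtype=float)
    for dy, dx in offsets:
        acc += np.roll(sharp, shift=(dy, dx), axis=(0, 1))
    return acc / len(offsets)
```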