TwistSLAM: Constrained SLAM in Dynamic Environment
- URL: http://arxiv.org/abs/2202.12384v1
- Date: Thu, 24 Feb 2022 22:08:45 GMT
- Title: TwistSLAM: Constrained SLAM in Dynamic Environment
- Authors: Mathieu Gonzalez, Eric Marchand, Amine Kacete, Jérôme Royan
- Abstract summary: We present TwistSLAM, a semantic, dynamic, stereo SLAM system that can track dynamic objects in the scene.
Our algorithm creates clusters of points according to their semantic class.
It uses the static parts of the environment to robustly localize the camera and tracks the remaining objects.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Moving objects are present in most scenes of our life. However, they can be
very problematic for classical SLAM algorithms, which assume the scene to be
rigid. This assumption limits the applicability of those algorithms, as they are
unable to accurately estimate the camera pose and world structure in many
scenarios. Some SLAM systems have been proposed to detect and mask out dynamic
objects, making the static-scene assumption valid. However, this information can
also be used to track objects within the scene while tracking the camera, which
can be crucial for some applications. In this paper we present TwistSLAM, a
semantic, dynamic, stereo SLAM system that can track dynamic objects in the
scene. Our algorithm creates clusters of points according to their semantic
class. It uses the static parts of the environment to robustly localize the
camera and tracks the remaining objects. We propose a new formulation for the
tracking and the bundle adjustment that takes into account the characteristics
of mechanical joints between clusters to constrain and improve their pose
estimation. We evaluate our approach on several sequences from a public dataset
and show that we improve camera and object tracking compared to the state of
the art.
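As a rough illustration of the pipeline the abstract describes (semantic clustering of map points, camera localization from the static clusters only, and per-object tracking of the remaining clusters), here is a minimal sketch. It is not the authors' code: the static class list, the `observations` layout and the `solve_pose` solver interface are assumptions made only to make the idea concrete, and the paper's mechanical-joint constraints in tracking and bundle adjustment are not reproduced.

```python
# Minimal sketch, not the TwistSLAM implementation.
# Assumptions: one semantic label per 3D map point, a hand-picked set of classes
# treated as static, and an external reprojection-error pose solver (`solve_pose`).
import numpy as np

STATIC_CLASSES = {"road", "building", "vegetation", "traffic sign"}  # assumed label set

def build_semantic_clusters(points_3d, labels):
    """Group map points into clusters according to their semantic class."""
    clusters = {}
    for point, label in zip(points_3d, labels):
        clusters.setdefault(label, []).append(point)
    return {label: np.asarray(pts) for label, pts in clusters.items()}

def track_camera_and_objects(clusters, observations, solve_pose):
    """Localize the camera from static clusters only, then estimate a pose for
    each remaining (potentially dynamic) cluster independently."""
    static_pts = np.concatenate(
        [pts for label, pts in clusters.items() if label in STATIC_CLASSES], axis=0)
    camera_pose = solve_pose(static_pts, observations["static"])

    object_poses = {}
    for label, pts in clusters.items():
        if label in STATIC_CLASSES:
            continue
        # In the paper, the motion of dynamic clusters is additionally constrained
        # by mechanical-joint models between clusters; this sketch omits that step.
        object_poses[label] = solve_pose(pts, observations[label])
    return camera_pose, object_poses
```

A call such as `track_camera_and_objects(build_semantic_clusters(pts, labels), obs, ransac_pnp)` with any PnP/RANSAC-style solver reproduces only the coarse structure; the paper's contribution lies in how the object poses are constrained during tracking and bundle adjustment.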
Related papers
- DORT: Modeling Dynamic Objects in Recurrent for Multi-Camera 3D Object Detection and Tracking [67.34803048690428]
We propose to model Dynamic Objects in RecurrenT (DORT) to tackle this problem.
DORT extracts object-wise local volumes for motion estimation, which also alleviates the heavy computational burden.
It is flexible and practical, and can be plugged into most camera-based 3D object detectors.
arXiv Detail & Related papers (2023-03-29T12:33:55Z)
- NEWTON: Neural View-Centric Mapping for On-the-Fly Large-Scale SLAM [51.21564182169607]
Newton is a view-centric mapping method that dynamically constructs neural fields based on run-time observation.
Our method enables camera pose updates using loop closures and scene boundary updates by representing the scene with multiple neural fields.
The experimental results demonstrate the superior performance of our method over existing world-centric neural field-based SLAM systems.
arXiv Detail & Related papers (2023-03-23T20:22:01Z)
- Semantic Attention Flow Fields for Monocular Dynamic Scene Decomposition [51.67493993845143]
We reconstruct a neural volume that captures time-varying color, density, scene flow, semantics, and attention information.
The semantics and attention let us identify salient foreground objects separately from the background across spacetime.
We show that this method can decompose dynamic scenes in an unsupervised way, with performance competitive with a supervised method.
arXiv Detail & Related papers (2023-03-02T19:00:05Z)
- DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
arXiv Detail & Related papers (2022-11-20T20:57:02Z)
- D-InLoc++: Indoor Localization in Dynamic Environments [2.9398911304923447]
We show that movable objects introduce non-negligible localization error and present a new method to predict the six-degree-of-freedom (6DoF) pose more robustly.
The masks of dynamic objects are employed in the relative pose estimation step and in the final sorting of camera pose proposals.
arXiv Detail & Related papers (2022-09-21T08:35:32Z)
- TwistSLAM++: Fusing multiple modalities for accurate dynamic semantic SLAM [0.0]
TwistSLAM++ is a semantic, dynamic SLAM system that fuses stereo images and LiDAR information.
We show on classical benchmarks that this fusion approach based on multimodal information improves the accuracy of object tracking.
arXiv Detail & Related papers (2022-09-16T12:28:21Z)
- Visual-Inertial Multi-Instance Dynamic SLAM with Object-level Relocalisation [14.302118093865849]
We present a tightly-coupled visual-inertial object-level multi-instance dynamic SLAM system.
It can robustly optimise for the camera pose, velocity and IMU biases, and build a dense object-level 3D reconstruction of the environment.
arXiv Detail & Related papers (2022-08-08T17:13:24Z)
- ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on the MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z)
- DynaSLAM II: Tightly-Coupled Multi-Object Tracking and SLAM [2.9822184411723645]
DynaSLAM II is a visual SLAM system for stereo and RGB-D configurations that tightly integrates the multi-object tracking capability.
We demonstrate that tracking dynamic objects not only provides rich clues for scene understanding but is also beneficial for camera tracking.
arXiv Detail & Related papers (2020-10-15T15:25:30Z)
- DOT: Dynamic Object Tracking for Visual SLAM [83.69544718120167]
DOT combines instance segmentation and multi-view geometry to generate masks for dynamic objects.
To determine which objects are actually moving, DOT first segments instances of potentially dynamic objects and then, using the estimated camera motion, tracks them by minimizing the photometric reprojection error (a generic sketch of this residual is given after this list).
Our results show that our approach significantly improves the accuracy and robustness of ORB-SLAM 2, especially in highly dynamic scenes.
arXiv Detail & Related papers (2020-09-30T18:36:28Z)
- Removing Dynamic Objects for Static Scene Reconstruction using Light Fields [2.286041284499166]
Dynamic environments pose challenges to visual simultaneous localization and mapping (SLAM) algorithms.
Light Fields capture a bundle of light rays emerging from a single point in space, allowing us to see through dynamic objects by refocusing past them.
We present a method to synthesize a refocused image of the static background in the presence of dynamic objects.
arXiv Detail & Related papers (2020-03-24T19:05:17Z)
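As referenced in the DOT entry above, the following is a minimal, generic sketch of a photometric reprojection residual over the pixels of an object mask, assuming a pinhole camera with known intrinsics, per-pixel depth in the reference frame, and a candidate relative motion. It is not DOT's implementation; the function name, sampling scheme and input layout are illustrative assumptions.

```python
# Generic photometric reprojection error over a masked region (illustrative only).
import numpy as np

def photometric_error(img_a, img_b, depth_a, mask_a, K, R, t):
    """Mean absolute intensity difference between masked pixels of frame A and
    their reprojections into frame B under the candidate motion (R, t)."""
    K_inv = np.linalg.inv(K)
    vs, us = np.nonzero(mask_a)                       # masked pixel coordinates in A
    z = depth_a[vs, us]
    pix_h = np.stack([us, vs, np.ones_like(us)], axis=0).astype(np.float64)
    pts_a = (K_inv @ pix_h) * z                       # back-project to 3D in frame A
    pts_b = R @ pts_a + t[:, None]                    # transform into frame B
    proj = K @ pts_b
    u_b, v_b = proj[0] / proj[2], proj[1] / proj[2]

    h, w = img_b.shape[:2]
    valid = (pts_b[2] > 0) & (u_b >= 0) & (u_b < w - 1) & (v_b >= 0) & (v_b < h - 1)
    if not np.any(valid):
        return np.inf
    # Nearest-neighbour sampling keeps the sketch short; bilinear interpolation
    # would normally be used for a smoother residual.
    sampled = img_b[np.round(v_b[valid]).astype(int), np.round(u_b[valid]).astype(int)]
    ref = img_a[vs, us][valid]
    return float(np.mean(np.abs(ref.astype(np.float64) - sampled.astype(np.float64))))
```

Evaluating this residual for a candidate object (or camera) motion and minimizing it over the motion parameters is the standard direct-alignment step that the DOT summary refers to.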
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.