3DS-SLAM: A 3D Object Detection based Semantic SLAM towards Dynamic
Indoor Environments
- URL: http://arxiv.org/abs/2310.06385v1
- Date: Tue, 10 Oct 2023 07:48:40 GMT
- Title: 3DS-SLAM: A 3D Object Detection based Semantic SLAM towards Dynamic
Indoor Environments
- Authors: Ghanta Sai Krishna, Kundrapu Supriya, Sabur Baidya
- Abstract summary: We introduce 3DS-SLAM, a 3D Semantic SLAM tailored for dynamic scenes with visual 3D object detection.
3DS-SLAM is a tightly coupled algorithm that resolves semantic and geometric constraints sequentially.
It exhibits an average improvement of 98.01% across the dynamic sequences of the TUM RGB-D dataset.
- Score: 1.4901625182926226
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The existence of variable factors within the environment can cause a decline
in camera localization accuracy, as it violates the fundamental assumption of a
static environment in Simultaneous Localization and Mapping (SLAM) algorithms.
Recent semantic SLAM systems targeting dynamic environments either rely solely
on 2D semantic information, rely solely on geometric information, or combine
the two in a loosely integrated manner. In this research paper, we introduce
3DS-SLAM, a 3D Semantic SLAM tailored for dynamic scenes with visual 3D object
detection. 3DS-SLAM is a tightly coupled algorithm that resolves semantic and
geometric constraints sequentially. We designed a 3D part-aware hybrid
transformer for point cloud-based object detection to identify dynamic objects.
Subsequently, we propose a dynamic feature filter based on HDBSCAN clustering
to extract objects with significant absolute depth differences. When compared
against ORB-SLAM2, 3DS-SLAM exhibits an average improvement of 98.01% across
the dynamic sequences of the TUM RGB-D dataset. Furthermore, it surpasses the
performance of the other four leading SLAM systems designed for dynamic
environments.
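The dynamic feature filter described in the abstract can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' implementation: the cluster labels stand in for HDBSCAN output, and the depth-difference threshold and all variable names are assumptions.

```python
import numpy as np

def filter_dynamic_features(depth_measured, depth_predicted, labels,
                            depth_diff_thresh=0.15):
    """Return a boolean mask of features considered static.

    depth_measured  : (N,) depth of each feature in the current frame
    depth_predicted : (N,) depth reprojected from the previous frame
                      under the estimated camera motion (static assumption)
    labels          : (N,) cluster id per feature (-1 = noise / no cluster)
    """
    keep = np.ones(len(labels), dtype=bool)
    abs_diff = np.abs(depth_measured - depth_predicted)
    for cid in np.unique(labels):
        if cid == -1:          # unclustered points are kept
            continue
        in_cluster = labels == cid
        # A cluster with a large mean absolute depth difference is treated
        # as a moving object; all of its features are filtered out.
        if abs_diff[in_cluster].mean() > depth_diff_thresh:
            keep[in_cluster] = False
    return keep

# Toy usage: cluster 0 moved (0.5 m depth change), cluster 1 stayed still.
measured  = np.array([2.5, 2.6, 1.0, 1.0, 3.0])
predicted = np.array([2.0, 2.1, 1.0, 1.0, 3.0])
labels    = np.array([0,   0,   1,   1,   -1])
mask = filter_dynamic_features(measured, predicted, labels)
print(mask)  # [False False  True  True  True]
```

In a full pipeline, `labels` would come from running HDBSCAN on the point cloud of a detected object, and only features passing the mask would be used for pose estimation.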
Related papers
- V3D-SLAM: Robust RGB-D SLAM in Dynamic Environments with 3D Semantic Geometry Voting [1.3493547928462395]
Simultaneous localization and mapping (SLAM) in highly dynamic environments is challenging due to the correlation between moving objects and the camera pose.
We propose a robust method, V3D-SLAM, to remove moving objects via two lightweight re-evaluation stages.
Our experiment on the TUM RGB-D benchmark on dynamic sequences with ground-truth camera trajectories showed that our methods outperform the most recent state-of-the-art SLAM methods.
arXiv Detail & Related papers (2024-10-15T21:08:08Z)
- Dynamic Scene Understanding through Object-Centric Voxelization and Neural Rendering [57.895846642868904]
We present a 3D generative model named DynaVol-S for dynamic scenes that enables object-centric learning.
Object-centric voxelization infers per-object occupancy probabilities at individual spatial locations.
Our approach integrates 2D semantic features to create 3D semantic grids, representing the scene through multiple disentangled voxel grids.
arXiv Detail & Related papers (2024-07-30T15:33:58Z)
- HUGS: Holistic Urban 3D Scene Understanding via Gaussian Splatting [53.6394928681237]
Holistic understanding of urban scenes based on RGB images is a challenging yet important problem.
Our main idea involves the joint optimization of geometry, appearance, semantics, and motion using a combination of static and dynamic 3D Gaussians.
Our approach offers the ability to render new viewpoints in real-time, yielding 2D and 3D semantic information with high accuracy.
arXiv Detail & Related papers (2024-03-19T13:39:05Z)
- DDN-SLAM: Real-time Dense Dynamic Neural Implicit SLAM [5.267859554944985]
We introduce DDN-SLAM, the first real-time dense dynamic neural implicit SLAM system integrating semantic features.
Compared to existing neural implicit SLAM systems, the tracking results on dynamic datasets indicate an average 90% improvement in Absolute Trajectory Error (ATE) accuracy.
arXiv Detail & Related papers (2024-01-03T05:42:17Z)
- NID-SLAM: Neural Implicit Representation-based RGB-D SLAM in dynamic environments [9.706447888754614]
We present NID-SLAM, which significantly improves the performance of neural SLAM in dynamic environments.
We propose a new approach to enhance inaccurate regions in semantic masks, particularly in marginal areas.
We also introduce a selection strategy for dynamic scenes, which enhances camera tracking robustness against large-scale objects.
arXiv Detail & Related papers (2024-01-02T12:35:03Z)
- NeuSE: Neural SE(3)-Equivariant Embedding for Consistent Spatial Understanding with Objects [53.111397800478294]
We present NeuSE, a novel Neural SE(3)-Equivariant Embedding for objects.
NeuSE serves as a compact point cloud surrogate for complete object models.
Our proposed SLAM paradigm, using NeuSE for object shape and pose characterization, can operate independently or in conjunction with typical SLAM systems.
arXiv Detail & Related papers (2023-03-13T17:30:43Z)
- Using Detection, Tracking and Prediction in Visual SLAM to Achieve Real-time Semantic Mapping of Dynamic Scenarios [70.70421502784598]
RDS-SLAM can build semantic maps at object level for dynamic scenarios in real time using only one commonly used Intel Core i7 CPU.
We evaluate RDS-SLAM in TUM RGB-D dataset, and experimental results show that RDS-SLAM can run with 30.3 ms per frame in dynamic scenarios.
arXiv Detail & Related papers (2022-10-10T11:03:32Z)
- MOTSLAM: MOT-assisted monocular dynamic SLAM using single-view depth estimation [5.33931801679129]
MOTSLAM is a dynamic visual SLAM system with the monocular configuration that tracks both poses and bounding boxes of dynamic objects.
Our experiments on the KITTI dataset demonstrate that our system has reached best performance on both camera ego-motion and object tracking on monocular dynamic SLAM.
arXiv Detail & Related papers (2022-10-05T06:07:10Z)
- Visual-Inertial Multi-Instance Dynamic SLAM with Object-level Relocalisation [14.302118093865849]
We present a tightly-coupled visual-inertial object-level multi-instance dynamic SLAM system.
It can robustly optimise the camera pose, velocity, and IMU biases while building a dense, object-level 3D reconstruction of the environment.
arXiv Detail & Related papers (2022-08-08T17:13:24Z)
- DOT: Dynamic Object Tracking for Visual SLAM [83.69544718120167]
DOT combines instance segmentation and multi-view geometry to generate masks for dynamic objects.
To determine which objects are actually moving, DOT first segments instances of potentially dynamic objects and then, using the estimated camera motion, tracks them by minimizing the photometric reprojection error.
Our results show that our approach significantly improves the accuracy and robustness of ORB-SLAM 2, especially in highly dynamic scenes.
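The photometric reprojection error that DOT minimizes can be illustrated with a minimal warp-and-compare sketch. All names, intrinsics, and values below are illustrative assumptions, not DOT's actual implementation: a pixel in frame 1 is back-projected with its depth, moved by the relative camera pose, projected into frame 2, and the residual is the intensity difference.

```python
import numpy as np

def photometric_error(I1, I2, depth1, K, R, t, u, v):
    """Photometric residual for pixel (u, v) of frame 1 warped into frame 2."""
    K_inv = np.linalg.inv(K)
    p1 = depth1[v, u] * (K_inv @ np.array([u, v, 1.0]))  # back-project to 3D
    p2 = R @ p1 + t                                      # apply camera motion
    uv2 = K @ p2                                         # project into frame 2
    u2, v2 = int(round(uv2[0] / uv2[2])), int(round(uv2[1] / uv2[2]))
    return float(I1[v, u] - I2[v2, u2])                  # intensity residual

# Toy frames: identity motion, so a static pixel lands on itself.
K = np.array([[100.0, 0, 32], [0, 100.0, 24], [0, 0, 1]])
I1 = np.random.default_rng(0).random((48, 64))
I2 = I1.copy()
depth = np.full((48, 64), 2.0)
err = photometric_error(I1, I2, depth, K, np.eye(3), np.zeros(3), u=10, v=20)
print(err)  # 0.0 for a static scene with identical frames
```

Summing this residual over an object's pixels and minimizing it over the object's motion is the core of the tracking step; nonzero residuals under the camera-only motion flag the object as dynamic.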
arXiv Detail & Related papers (2020-09-30T18:36:28Z)
- Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of drawing effective samples in the 3D space is relatively small.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3D parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
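The step-wise refinement idea above (change one 3D box parameter per step) can be sketched with a greedy stand-in for the paper's learned policy. The reward is replaced here by distance to a known reference box purely to keep the example self-contained; the parameterization, step size, and names are assumptions.

```python
import numpy as np

def refine_box(box, target, step=0.1, n_iters=50):
    """Greedily refine a 7-DoF box (x, y, z, w, h, l, yaw), one axis per step."""
    box = box.astype(float).copy()
    for i in range(n_iters):
        axis = i % len(box)                  # exactly one parameter per step
        best, best_err = box.copy(), np.abs(box - target).sum()
        for delta in (-step, step):
            cand = box.copy()
            cand[axis] += delta
            err = np.abs(cand - target).sum()
            if err < best_err:               # keep the move only if it helps
                best, best_err = cand, err
        box = best
    return box

init   = np.array([0.0, 0.0, 10.0, 1.5, 1.5, 4.0, 0.0])
target = np.array([0.3, -0.2, 10.4, 1.6, 1.4, 3.9, 0.1])
refined = refine_box(init, target)
print(np.abs(refined - target).max() <= 0.1)  # True: every axis converged
```

In the paper, the per-step decision comes from a reinforcement-learned policy rewarded after several steps rather than from direct access to the ground truth as in this toy loop.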
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.