DDN-SLAM: Real-time Dense Dynamic Neural Implicit SLAM
- URL: http://arxiv.org/abs/2401.01545v2
- Date: Sat, 9 Mar 2024 04:47:17 GMT
- Title: DDN-SLAM: Real-time Dense Dynamic Neural Implicit SLAM
- Authors: Mingrui Li, Yiming Zhou, Guangan Jiang, Tianchen Deng, Yangyang Wang,
Hongyu Wang
- Abstract summary: We introduce DDN-SLAM, the first real-time dense dynamic neural implicit SLAM system integrating semantic features.
Compared to existing neural implicit SLAM systems, the tracking results on dynamic datasets indicate an average 90% improvement in Absolute Trajectory Error (ATE) accuracy.
- Score: 5.267859554944985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: SLAM systems based on NeRF have demonstrated superior performance in
rendering quality and scene reconstruction for static environments compared to
traditional dense SLAM. However, they encounter tracking drift and mapping
errors in real-world scenarios with dynamic interferences. To address these
issues, we introduce DDN-SLAM, the first real-time dense dynamic neural
implicit SLAM system integrating semantic features. To address dynamic tracking
interferences, we propose a feature point segmentation method that combines
semantic features with a mixed Gaussian distribution model. To avoid incorrect
background removal, we propose a mapping strategy based on sparse point cloud
sampling and background restoration. We propose a dynamic semantic loss to
eliminate dynamic occlusions. Experimental results demonstrate that DDN-SLAM is
capable of robustly tracking and producing high-quality reconstructions in
dynamic environments, while appropriately preserving potential dynamic objects.
Compared to existing neural implicit SLAM systems, the tracking results on
dynamic datasets indicate an average 90% improvement in Absolute Trajectory
Error (ATE) accuracy.
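The ATE figure above is conventionally reported as the RMSE of position errors after rigidly aligning the estimated trajectory to the ground truth, as in the TUM RGB-D benchmark tooling. A minimal sketch, assuming time-synchronized (N, 3) position arrays and omitting the scale correction monocular systems may need:

```python
import numpy as np

def ate_rmse(gt, est):
    """RMSE of the absolute trajectory error after rigid (rotation +
    translation) alignment of the estimate to the ground truth.

    gt, est: (N, 3) arrays of time-synchronized camera positions.
    """
    # Center both trajectories (removes the translation offset).
    gt_c = gt - gt.mean(axis=0)
    est_c = est - est.mean(axis=0)
    # Kabsch alignment: find the rotation R minimizing ||R @ est_c - gt_c||.
    H = est_c.T @ gt_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    err = np.linalg.norm(est_c @ R.T - gt_c, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```

Under this convention, an average 90% improvement corresponds to the new system's RMSE being roughly one tenth of the baseline's on the same sequences.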
Related papers
- Learn to Memorize and to Forget: A Continual Learning Perspective of Dynamic SLAM [17.661231232206028]
Simultaneous localization and mapping (SLAM) with implicit neural representations has received extensive attention.
We propose a novel SLAM framework for dynamic environments.
arXiv Detail & Related papers (2024-07-18T09:35:48Z)
- KFD-NeRF: Rethinking Dynamic NeRF with Kalman Filter [49.85369344101118]
We introduce KFD-NeRF, a novel dynamic neural radiance field integrated with an efficient and high-quality motion reconstruction framework based on Kalman filtering.
Our key idea is to model the dynamic radiance field as a dynamic system whose temporally varying states are estimated based on two sources of knowledge: observations and predictions.
Our KFD-NeRF achieves comparable or superior reconstruction quality within similar computational time, and state-of-the-art view synthesis performance with thorough training.
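KFD-NeRF's core idea of estimating a state from both predictions and observations is the classic Kalman predict/update cycle. A scalar linear sketch for illustration only (the paper's state and motion models are learned and far richer than this):

```python
import numpy as np

def kalman_step(x, P, z, F=1.0, Q=1e-3, H=1.0, R=1e-1):
    """One predict/update cycle of a scalar linear Kalman filter.

    x, P: prior state estimate and its variance
    z:    new observation
    F, Q: motion model and process noise; H, R: observation model and noise.
    """
    # Predict: propagate the state through the motion model.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend prediction and observation via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

Feeding repeated observations of a constant quantity drives the estimate toward it while the variance shrinks, which is the "observations correct predictions" behavior the abstract describes.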
arXiv Detail & Related papers (2024-07-18T05:48:24Z)
- NID-SLAM: Neural Implicit Representation-based RGB-D SLAM in dynamic environments [9.706447888754614]
We present NID-SLAM, which significantly improves the performance of neural SLAM in dynamic environments.
We propose a new approach to enhance inaccurate regions in semantic masks, particularly in marginal areas.
We also introduce a selection strategy for dynamic scenes, which enhances camera tracking robustness against large-scale objects.
arXiv Detail & Related papers (2024-01-02T12:35:03Z)
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- 3DS-SLAM: A 3D Object Detection based Semantic SLAM towards Dynamic Indoor Environments [1.4901625182926226]
We introduce 3DS-SLAM, a 3D semantic SLAM system tailored for dynamic scenes with visual 3D object detection.
3DS-SLAM is a tightly coupled algorithm that resolves semantic and geometric constraints sequentially.
It exhibits an average improvement of 98.01% across the dynamic sequences of the TUM RGB-D dataset.
arXiv Detail & Related papers (2023-10-10T07:48:40Z)
- Alignment-free HDR Deghosting with Semantics Consistent Transformer [76.91669741684173]
High dynamic range imaging aims to retrieve information from multiple low-dynamic range inputs to generate realistic output.
Existing methods often focus on the spatial misalignment across input frames caused by the foreground and/or camera motion.
We propose a novel alignment-free network with a Semantics Consistent Transformer (SCTNet) with both spatial and channel attention modules.
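Channel attention of the kind SCTNet employs is often built squeeze-and-excitation style: globally pool each channel, pass through a small bottleneck, and gate the channels with sigmoid weights. A generic NumPy sketch (the weights `w1`, `w2` and the layout are hypothetical, not SCTNet's actual module):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention (generic sketch).

    feat: (C, H, W) feature map; w1: (hidden, C); w2: (C, hidden).
    """
    squeeze = feat.mean(axis=(1, 2))              # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeeze)        # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gates in (0, 1)
    # Re-weight each channel by its learned importance.
    return feat * gate[:, None, None]
```

Because the gates lie in (0, 1), the module can only attenuate channels, letting the network emphasize semantically consistent ones.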
arXiv Detail & Related papers (2023-05-29T15:03:23Z)
- Point-SLAM: Dense Neural Point Cloud-based SLAM [61.96492935210654]
We propose a dense neural simultaneous localization and mapping (SLAM) approach for monocular RGBD input.
We demonstrate that both tracking and mapping can be performed with the same point-based neural scene representation.
arXiv Detail & Related papers (2023-04-09T16:48:26Z)
- Using Detection, Tracking and Prediction in Visual SLAM to Achieve Real-time Semantic Mapping of Dynamic Scenarios [70.70421502784598]
RDS-SLAM can build semantic maps at the object level for dynamic scenarios in real time using only a commonly available Intel Core i7 CPU.
We evaluate RDS-SLAM on the TUM RGB-D dataset, and experimental results show that RDS-SLAM runs at 30.3 ms per frame in dynamic scenarios.
arXiv Detail & Related papers (2022-10-10T11:03:32Z)
- Det-SLAM: A semantic visual SLAM for highly dynamic scenes using Detectron2 [0.0]
This research combines the visual SLAM system ORB-SLAM3 with the Detectron2 segmentation framework to present the Det-SLAM system.
Det-SLAM is more resilient than previous dynamic SLAM systems and can lower the estimated camera pose error in dynamic indoor scenarios.
arXiv Detail & Related papers (2022-10-01T13:25:11Z)
- DOT: Dynamic Object Tracking for Visual SLAM [83.69544718120167]
DOT combines instance segmentation and multi-view geometry to generate masks for dynamic objects.
To determine which objects are actually moving, DOT first segments instances of potentially dynamic objects and then, using the estimated camera motion, tracks them by minimizing the photometric reprojection error.
Our results show that our approach significantly improves the accuracy and robustness of ORB-SLAM 2, especially in highly dynamic scenes.
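The photometric reprojection error such tracking minimizes can be sketched for a pinhole camera as follows. This is illustrative only: real systems interpolate intensities and operate over dense patches, and the variable names here are hypothetical.

```python
import numpy as np

def photometric_error(img_ref, img_cur, pts, depths, K, R, t):
    """Mean absolute photometric reprojection error (a sketch).

    img_ref, img_cur: grayscale images as 2-D arrays
    pts:    (N, 2) integer pixel coordinates in the reference image
    depths: (N,) depths of those pixels
    K:      3x3 camera intrinsics; R, t: relative motion ref -> cur.
    Points that reproject outside the current image are ignored.
    """
    K_inv = np.linalg.inv(K)
    h, w = img_cur.shape
    errors = []
    for (u, v), d in zip(pts, depths):
        # Back-project to 3D, apply the relative motion, reproject.
        p3d = d * (K_inv @ np.array([u, v, 1.0]))
        p_cur = K @ (R @ p3d + t)
        u2, v2 = p_cur[:2] / p_cur[2]
        ui, vi = int(round(u2)), int(round(v2))
        if 0 <= ui < w and 0 <= vi < h:
            errors.append(abs(float(img_ref[v, u]) - float(img_cur[vi, ui])))
    return float(np.mean(errors)) if errors else float("inf")
```

For a correctly estimated motion of a static point, the reprojected pixel lands on the same scene content and the error vanishes; residual error signals either a bad motion estimate or a moving object.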
arXiv Detail & Related papers (2020-09-30T18:36:28Z)
- FlowFusion: Dynamic Dense RGB-D SLAM Based on Optical Flow [17.040818114071833]
We present a novel dense RGB-D SLAM solution that simultaneously accomplishes the dynamic/static segmentation and camera ego-motion estimation.
Our novelty is using optical flow residuals to highlight the dynamic semantics in the RGB-D point clouds.
arXiv Detail & Related papers (2020-03-11T04:00:49Z)
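The optical-flow-residual idea above can be sketched as comparing the observed flow against the flow that the estimated camera ego-motion alone would induce; pixels with a large residual are flagged as dynamic. The threshold and field shapes below are hypothetical:

```python
import numpy as np

def dynamic_mask(observed_flow, egomotion_flow, threshold=1.0):
    """Flag pixels whose optical-flow residual exceeds a threshold.

    observed_flow, egomotion_flow: (H, W, 2) flow fields in pixels.
    The residual is the magnitude of their difference; pixels moving
    differently from the camera-induced background flow are dynamic.
    """
    residual = np.linalg.norm(observed_flow - egomotion_flow, axis=-1)
    return residual > threshold
```

The resulting mask can then exclude dynamic pixels from camera ego-motion estimation, as the segmentation-and-tracking pipelines above do with semantic masks.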
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.