PlaneSDF-based Change Detection for Long-term Dense Mapping
- URL: http://arxiv.org/abs/2207.08323v1
- Date: Mon, 18 Jul 2022 00:19:45 GMT
- Title: PlaneSDF-based Change Detection for Long-term Dense Mapping
- Authors: Jiahui Fu, Chengyuan Lin, Yuichi Taguchi, Andrea Cohen, Yifu Zhang,
Stephen Mylabathula, and John J. Leonard
- Abstract summary: We look into the problem of change detection based on a novel map representation, dubbed Plane Signed Distance Fields (PlaneSDF).
Given point clouds of the source and target scenes, we propose a three-step PlaneSDF-based change detection approach.
We evaluate our approach on both synthetic and real-world datasets and demonstrate its effectiveness via the task of changed object detection.
- Score: 10.159737713094119
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to process environment maps across multiple sessions is critical
for robots operating over extended periods of time. Specifically, it is
desirable for autonomous agents to detect changes amongst maps of different
sessions so as to gain a conflict-free understanding of the current
environment. In this paper, we look into the problem of change detection based
on a novel map representation, dubbed Plane Signed Distance Fields (PlaneSDF),
where dense maps are represented as a collection of planes and their associated
geometric components in SDF volumes. Given point clouds of the source and
target scenes, we propose a three-step PlaneSDF-based change detection
approach: (1) PlaneSDF volumes are instantiated within each scene and
registered across scenes using plane poses; 2D height maps and object maps are
extracted per volume via height projection and connected component analysis.
(2) Height maps are compared and intersected with the object map to produce a
2D change location mask for changed object candidates in the source scene. (3)
3D geometric validation is performed using SDF-derived features per object
candidate for change mask refinement. We evaluate our approach on both
synthetic and real-world datasets and demonstrate its effectiveness via the
task of changed object detection.
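The three-step pipeline above can be sketched in a highly simplified form. This is an illustrative reconstruction, not the authors' implementation: the grid size, cell size, height-difference threshold `tau`, and all function names are assumptions, and plane registration, the object-map intersection, and the SDF-based validation of step (3) are omitted.

```python
import numpy as np


def height_map(points, origin, cell=0.05, shape=(64, 64)):
    """Project an Nx3 point cloud onto a 2D grid, keeping the max height per cell."""
    hm = np.full(shape, -np.inf)
    ij = np.floor((points[:, :2] - origin) / cell).astype(int)
    keep = (ij >= 0).all(axis=1) & (ij[:, 0] < shape[0]) & (ij[:, 1] < shape[1])
    for (i, j), z in zip(ij[keep], points[keep, 2]):
        hm[i, j] = max(hm[i, j], z)
    return hm


def connected_components(mask):
    """4-connected component labeling via flood fill (pure Python, no SciPy)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for si in range(mask.shape[0]):
        for sj in range(mask.shape[1]):
            if mask[si, sj] and labels[si, sj] == 0:
                count += 1
                stack = [(si, sj)]
                labels[si, sj] = count
                while stack:
                    i, j = stack.pop()
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                                and mask[ni, nj] and labels[ni, nj] == 0):
                            labels[ni, nj] = count
                            stack.append((ni, nj))
    return labels, count


def change_mask(source_pts, target_pts, origin, cell=0.05, shape=(64, 64), tau=0.10):
    """Steps (1)-(2): per-scene height maps, then a 2D change-location mask."""
    hm_src = height_map(source_pts, origin, cell, shape)
    hm_tgt = height_map(target_pts, origin, cell, shape)
    observed = np.isfinite(hm_src) & np.isfinite(hm_tgt)  # compare only co-observed cells
    changed = observed & (np.abs(hm_src - hm_tgt) > tau)
    return connected_components(changed)
```

Each connected component of the resulting mask is a changed-object candidate; in the paper these candidates are then validated in 3D using SDF-derived features.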
Related papers
- Boosting Cross-Domain Point Classification via Distilling Relational Priors from 2D Transformers [59.0181939916084]
Traditional 3D networks mainly focus on local geometric details and ignore the topological structure between local geometries.
We propose a novel Relational Priors Distillation (RPD) method to extract priors from transformers well-trained on massive images.
Experiments on the PointDA-10 and the Sim-to-Real datasets verify that the proposed method consistently achieves the state-of-the-art performance of UDA for point cloud classification.
arXiv Detail & Related papers (2024-07-26T06:29:09Z)
- VFMM3D: Releasing the Potential of Image by Vision Foundation Model for Monocular 3D Object Detection [80.62052650370416]
Monocular 3D object detection holds significant importance across various applications, including autonomous driving and robotics.
In this paper, we present VFMM3D, an innovative framework that leverages the capabilities of Vision Foundation Models (VFMs) to accurately transform single-view images into LiDAR point cloud representations.
arXiv Detail & Related papers (2024-04-15T03:12:12Z)
- Towards Generalizable Multi-Camera 3D Object Detection via Perspective Debiasing [28.874014617259935]
Multi-Camera 3D Object Detection (MC3D-Det) has gained prominence with the advent of bird's-eye view (BEV) approaches.
We propose a novel method that aligns 3D detection with 2D camera plane results, ensuring consistent and accurate detections.
arXiv Detail & Related papers (2023-10-17T15:31:28Z)
- Constructing Metric-Semantic Maps using Floor Plan Priors for Long-Term Indoor Localization [29.404446814219202]
In this paper, we address the task of constructing a metric-semantic map for the purpose of long-term object-based localization.
We exploit 3D object detections from monocular RGB frames both for object-based map construction and for global localization in the constructed map.
We evaluate our map construction in an office building, and test our long-term localization approach on challenging sequences recorded in the same environment over nine months.
arXiv Detail & Related papers (2023-03-20T09:33:05Z)
- Object-level 3D Semantic Mapping using a Network of Smart Edge Sensors [25.393382192511716]
We extend a multi-view 3D semantic mapping system consisting of a network of distributed edge sensors with object-level information.
Our method is evaluated on the public Behave dataset, where it achieves pose estimation within a few centimeters, and in real-world experiments with the sensor network in a challenging lab environment.
arXiv Detail & Related papers (2022-11-21T11:13:08Z)
- CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds [55.44204039410225]
We present a novel two-stage fully sparse convolutional 3D object detection framework, named CAGroup3D.
Our proposed method first generates high-quality 3D proposals by leveraging a class-aware local grouping strategy on object surface voxels.
To recover the features of missed voxels due to incorrect voxel-wise segmentation, we build a fully sparse convolutional RoI pooling module.
arXiv Detail & Related papers (2022-10-09T13:38:48Z)
- Robust Change Detection Based on Neural Descriptor Fields [53.111397800478294]
We develop an object-level online change detection approach that is robust to partially overlapping observations and noisy localization results.
By associating objects via shape code similarity and comparing local object-neighbor spatial layout, our proposed approach demonstrates robustness to low observation overlap and localization noises.
arXiv Detail & Related papers (2022-08-01T17:45:36Z)
- TSDF++: A Multi-Object Formulation for Dynamic Object Tracking and Reconstruction [57.1209039399599]
We propose a map representation that allows maintaining a single volume for the entire scene and all the objects therein.
In a multiple dynamic object tracking and reconstruction scenario, our representation allows maintaining accurate reconstruction of surfaces even while they become temporarily occluded by other objects moving in their proximity.
We evaluate the proposed TSDF++ formulation on a public synthetic dataset and demonstrate its ability to preserve reconstructions of occluded surfaces when compared to the standard TSDF map representation.
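TSDF++ extends the standard truncated signed distance function (TSDF) volume, in which each voxel fuses depth observations as a weighted running average of truncated distances to the surface. A minimal single-voxel sketch of that standard update follows; parameter names and values are illustrative and not taken from the TSDF++ paper.

```python
def tsdf_update(d, w, sdf_obs, trunc=0.1, w_obs=1.0, w_max=100.0):
    """Fuse one observed signed distance into a voxel's (distance, weight) state."""
    sdf_obs = max(-trunc, min(trunc, sdf_obs))       # truncate to a band around the surface
    new_d = (d * w + sdf_obs * w_obs) / (w + w_obs)  # weighted running average of distances
    new_w = min(w + w_obs, w_max)                    # cap the weight so the map can adapt
    return new_d, new_w
```

Repeated consistent observations drive the voxel toward the observed distance, while the weight cap keeps the volume responsive to scene changes such as moving objects.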
arXiv Detail & Related papers (2021-05-16T16:15:05Z)
- DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes [54.239416488865565]
We propose a fast single-stage 3D object detection method for LIDAR data.
The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes.
We find that our proposed method outperforms the state of the art by 5% on object detection in ScanNet scenes and achieves top results on the Open dataset by a 3.4% margin.
arXiv Detail & Related papers (2020-04-02T17:48:50Z)
- Extending Maps with Semantic and Contextual Object Information for Robot Navigation: a Learning-Based Framework using Visual and Depth Cues [12.984393386954219]
This paper addresses the problem of building augmented metric representations of scenes with semantic information from RGB-D images.
We propose a complete framework to create an enhanced map representation of the environment with object-level information.
arXiv Detail & Related papers (2020-03-13T15:05:23Z)
- 3D Object Detection on Point Clouds using Local Ground-aware and Adaptive Representation of scenes' surface [1.9336815376402714]
A novel, adaptive ground-aware, and cost-effective 3D object detection pipeline is proposed. It achieves new state-of-the-art 3D object detection performance among two-stage LiDAR object detection pipelines.
arXiv Detail & Related papers (2020-02-02T05:42:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.