City-scale Scene Change Detection using Point Clouds
- URL: http://arxiv.org/abs/2103.14314v1
- Date: Fri, 26 Mar 2021 08:04:13 GMT
- Title: City-scale Scene Change Detection using Point Clouds
- Authors: Zi Jian Yew and Gim Hee Lee
- Abstract summary: We propose a method for detecting structural changes in a city using images captured from vehicle-mounted cameras at two different times.
A direct comparison of the two point clouds for change detection is not ideal due to inaccurate geo-location information.
To circumvent this problem, we propose a deep learning-based non-rigid registration on the point clouds.
Experiments show that our method is able to detect scene changes effectively, even in the presence of viewpoint and illumination differences.
- Score: 71.73273007900717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method for detecting structural changes in a city using images
captured from vehicle-mounted cameras over traversals at two different times.
We first generate 3D point clouds for each traversal from the images and
approximate GNSS/INS readings using Structure-from-Motion (SfM). A direct
comparison of the two point clouds for change detection is not ideal due to
inaccurate geo-location information and possible drifts in the SfM. To
circumvent this problem, we propose a deep learning-based non-rigid
registration on the point clouds which allows us to compare the point clouds
for structural change detection in the scene. Furthermore, we introduce a dual
thresholding check and post-processing step to enhance the robustness of our
method. We collect two datasets for the evaluation of our approach. Experiments
show that our method is able to detect scene changes effectively, even in the
presence of viewpoint and illumination differences.
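The abstract does not give implementation details of the comparison, so the following is only a minimal sketch of the kind of dual-threshold check it mentions, assuming the two clouds are already registered: nearest-neighbor distances to the reference cloud are split into confidently unchanged, confidently changed, and ambiguous bands. The k-d tree lookup, function name, and threshold values are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree

def dual_threshold_changes(cloud_t0, cloud_t1, tau_low=0.2, tau_high=1.0):
    """Label points of cloud_t1 against the registered reference cloud_t0.

    Points closer than tau_low to their nearest reference neighbor are
    treated as confidently unchanged, points farther than tau_high as
    confidently changed, and the band in between is left ambiguous for
    a later post-processing step. (Threshold values are illustrative.)
    """
    tree = cKDTree(cloud_t0)
    dist, _ = tree.query(cloud_t1, k=1)   # nearest-neighbor distance per point
    changed = dist > tau_high
    unchanged = dist < tau_low
    ambiguous = ~(changed | unchanged)
    return changed, ambiguous

# Toy usage: random points stand in for the registered SfM reconstructions.
t0 = np.random.rand(1000, 3) * 10.0
t1 = np.vstack([t0 + np.random.normal(0.0, 0.02, t0.shape),  # same structures, slight noise
                np.random.rand(50, 3) + 20.0])                # a "new" structure far away
changed, ambiguous = dual_threshold_changes(t0, t1)
print(changed.sum(), "changed,", ambiguous.sum(), "ambiguous")
```

Keeping an explicit ambiguous band, rather than a single cutoff, is what allows a post-processing step to resolve borderline points instead of committing to a noisy decision.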
Related papers
- P2P-Bridge: Diffusion Bridges for 3D Point Cloud Denoising [81.92854168911704]
We tackle the task of point cloud denoising through a novel framework that adapts Diffusion Schrödinger bridges to point clouds.
Experiments on object datasets show that P2P-Bridge achieves significant improvements over existing methods.
arXiv Detail & Related papers (2024-08-29T08:00:07Z) - C-NERF: Representing Scene Changes as Directional Consistency Difference-based NeRF [3.0023333254953406]
We aim to detect the changes caused by object variations in a scene represented by neural radiance fields (NeRFs).
Given an arbitrary view and two sets of scene images captured at different timestamps, we can predict the scene changes in that view.
Our approach surpasses state-of-the-art 2D change detection and NeRF-based methods by a significant margin.
arXiv Detail & Related papers (2023-12-05T13:27:12Z) - Point Cloud Denoising and Outlier Detection with Local Geometric Structure by Dynamic Graph CNN [0.0]
Point clouds are attracting attention as a data format for representing 3D space.
PointCleanNet is an effective method for point cloud denoising and outlier detection.
We propose two types of graph convolutional layers based on the Dynamic Graph CNN.
arXiv Detail & Related papers (2023-10-11T10:50:15Z) - Irregular Change Detection in Sparse Bi-Temporal Point Clouds using Learned Place Recognition Descriptors and Point-to-Voxel Comparison [0.0]
This article proposes an innovative approach for change detection in 3D point clouds.
It uses deep-learned place recognition descriptors and irregular object extraction based on point-to-voxel comparison (a toy sketch of this idea appears after this list).
The proposed method was successfully evaluated in real-world field experiments.
arXiv Detail & Related papers (2023-06-27T12:22:25Z) - (LC)$^2$: LiDAR-Camera Loop Constraints For Cross-Modal Place Recognition [0.9449650062296824]
We propose a novel cross-matching method, called (LC)$^2$, for achieving LiDAR localization without a prior point cloud map.
The network is trained to extract localization descriptors from disparity and range images.
We demonstrate that LiDAR-based navigation systems could be optimized from image databases and vice versa.
arXiv Detail & Related papers (2023-04-17T23:20:16Z) - Active Gaze Control for Foveal Scene Exploration [124.11737060344052]
We propose a methodology to emulate how humans and robots with foveal cameras would explore a scene.
The proposed method achieves an increase in detection F1-score of 2-3 percentage points for the same number of gaze shifts.
arXiv Detail & Related papers (2022-08-24T14:59:28Z) - Deep Point Cloud Reconstruction [74.694733918351]
Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have separately addressed densifying, denoising, and completing inaccurate point clouds.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) a transformer-based refinement that converts the discrete voxels into 3D points.
arXiv Detail & Related papers (2021-11-23T07:53:28Z) - SCTN: Sparse Convolution-Transformer Network for Scene Flow Estimation [71.2856098776959]
Estimating 3D motions for point clouds is challenging, since a point cloud is unordered and its density is significantly non-uniform.
We propose a novel architecture named Sparse Convolution-Transformer Network (SCTN) that equips the sparse convolution with the transformer.
We show that the learned relation-based contextual information is rich and helpful for matching corresponding points, benefiting scene flow estimation.
arXiv Detail & Related papers (2021-05-10T15:16:14Z) - DeepI2P: Image-to-Point Cloud Registration via Deep Classification [71.3121124994105]
DeepI2P is a novel approach for cross-modality registration between an image and a point cloud.
Our method estimates the relative rigid transformation between the coordinate frames of the camera and the LiDAR.
We circumvent the difficulty by converting the registration problem into a classification and inverse camera projection optimization problem.
arXiv Detail & Related papers (2021-04-08T04:27:32Z) - R-AGNO-RPN: A LIDAR-Camera Region Deep Network for Resolution-Agnostic Detection [3.4761212729163313]
R-AGNO-RPN, a region proposal network built on the fusion of 3D point clouds and RGB images, is proposed.
Our approach is also designed to work on low-resolution point clouds.
arXiv Detail & Related papers (2020-12-10T15:22:58Z)
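To make the point-to-voxel comparison referenced in the "Irregular Change Detection in Sparse Bi-Temporal Point Clouds" entry above concrete, here is a small sketch of the general idea under assumed parameters (voxel size, one-voxel tolerance, and all names are illustrative): one cloud is turned into a voxel occupancy set, and points of the other cloud that fall into empty regions are flagged as potential changes. It illustrates the comparison concept only and is not that paper's implementation.

```python
import numpy as np

def occupied_voxels(points, voxel_size):
    """Return the set of integer voxel indices occupied by a point cloud."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    return set(map(tuple, idx))

def point_to_voxel_changes(query, reference, voxel_size=0.5):
    """Flag query points whose voxel (and all 26 neighboring voxels) is empty
    in the reference cloud. The one-voxel tolerance makes the check more
    forgiving on sparse, unevenly sampled clouds."""
    ref_vox = occupied_voxels(reference, voxel_size)
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                            for dy in (-1, 0, 1)
                            for dz in (-1, 0, 1)]
    q_idx = np.floor(query / voxel_size).astype(np.int64)
    changed = np.ones(len(query), dtype=bool)
    for i, v in enumerate(map(tuple, q_idx)):
        if any((v[0] + o[0], v[1] + o[1], v[2] + o[2]) in ref_vox for o in offsets):
            changed[i] = False
    return changed

# Toy usage: a new cluster of points far from the reference is flagged as changed.
reference = np.random.rand(2000, 3) * 20.0
query = np.vstack([reference[:1500], np.random.rand(100, 3) + 50.0])
print(point_to_voxel_changes(query, reference).sum(), "points flagged as changed")
```

Hashing voxel indices into a set keeps each lookup cheap on average, which is one reason voxel-based comparison is attractive for sparse, large-scale clouds compared to brute-force point-to-point distance checks.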