Optimal Transport for Change Detection on LiDAR Point Clouds
- URL: http://arxiv.org/abs/2302.07025v5
- Date: Wed, 8 Nov 2023 11:10:01 GMT
- Title: Optimal Transport for Change Detection on LiDAR Point Clouds
- Authors: Marco Fiorucci, Peter Naylor, Makoto Yamada
- Abstract summary: Unsupervised change detection between airborne LiDAR data points is difficult due to mismatched spatial support and noise from the acquisition system.
We propose an unsupervised approach based on the computation of the transport of 3D LiDAR points over two temporal supports.
Our method allows for unsupervised multi-class classification and outperforms the previous state-of-the-art unsupervised approaches by a significant margin.
- Score: 16.552050876277242
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised change detection between airborne LiDAR data points, taken at
separate times over the same location, can be difficult due to mismatched
spatial support and noise from the acquisition system. Most current approaches
to detect changes in point clouds rely heavily on the computation of Digital
Elevation Models (DEM) images and supervised methods. Obtaining a DEM leads to
LiDAR informational loss due to pixelisation, and supervision requires large
amounts of labelled data often unavailable in real-world scenarios. We propose
an unsupervised approach based on the computation of the transport of 3D LiDAR
points over two temporal supports. The method is based on unbalanced optimal
transport and can be generalised to any change detection problem with LiDAR
data. We apply our approach to publicly available datasets for monitoring urban
sprawl in various noise and resolution configurations that mimic several
sensors used in practice. Our method allows for unsupervised multi-class
classification and outperforms the previous state-of-the-art unsupervised
approaches by a significant margin.
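The transport computation at the heart of the method can be illustrated with a minimal numpy sketch of entropy-regularised unbalanced optimal transport (the Sinkhorn-style scaling iterations also implemented in the POT library). The point clouds, parameter values, and change threshold below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def unbalanced_sinkhorn(a, b, M, reg=1.0, reg_m=1.0, n_iter=500):
    """Entropy-regularised unbalanced OT with KL marginal relaxation.

    a, b: mass vectors of the two point clouds; M: pairwise cost matrix.
    Returns the transport plan P (rows: epoch t0, columns: epoch t1).
    """
    K = np.exp(-M / reg)
    fi = reg_m / (reg_m + reg)        # damping exponent from the KL penalty
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u = (a / (K @ v)) ** fi
        v = (b / (K.T @ u)) ** fi
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
t0 = rng.uniform(0, 1, (20, 3))                       # epoch-1 points
t1 = np.vstack([t0 + rng.normal(0, 0.02, t0.shape),   # same scene, sensor noise
                rng.normal(5.0, 0.2, (5, 3))])        # a "new building" far away
M = ((t0[:, None, :] - t1[None, :, :]) ** 2).sum(-1)  # squared-distance costs
a = np.full(len(t0), 1.0 / len(t0))
b = np.full(len(t1), 1.0 / len(t1))
P = unbalanced_sinkhorn(a, b, M)
received = P.sum(axis=0)    # mass each t1 point receives from t0
# Unbalanced OT may leave marginals unsatisfied: t1 points that receive
# almost no mass have no counterpart in t0, which flags them as change.
new_points = received < 1e-6
```

Because the marginal constraints are only penalised, not enforced, mass is simply not transported to points without a plausible counterpart, which is what makes the unbalanced formulation suitable for detecting appearance and disappearance.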
Related papers
- OccNeRF: Advancing 3D Occupancy Prediction in LiDAR-Free Environments [77.0399450848749]
We propose an OccNeRF method for training occupancy networks without 3D supervision.
We parameterize the reconstructed occupancy fields and reorganize the sampling strategy to align with the cameras' infinite perceptive range.
For semantic occupancy prediction, we design several strategies to polish the prompts and filter the outputs of a pretrained open-vocabulary 2D segmentation model.
arXiv Detail & Related papers (2023-12-14T18:58:52Z)
- Robust 3D Object Detection from LiDAR-Radar Point Clouds via Cross-Modal Feature Augmentation [7.364627166256136]
This paper presents a novel framework for robust 3D object detection from point clouds via cross-modal hallucination.
We introduce multiple alignments on both spatial and feature levels to achieve simultaneous backbone refinement and hallucination generation.
Experiments on the View-of-Delft dataset show that our proposed method outperforms the state-of-the-art (SOTA) methods for both radar and LiDAR object detection.
arXiv Detail & Related papers (2023-09-29T15:46:59Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Implicit neural representation for change detection [15.741202788959075]
Most commonly used approaches to detecting changes in point clouds are based on supervised methods.
We propose an unsupervised approach that comprises two components: Implicit Neural Representation (INR) for continuous shape reconstruction and a Gaussian Mixture Model for categorising changes.
We apply our method to a benchmark dataset comprising simulated LiDAR point clouds for urban sprawl.
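The second component of that pipeline, categorising changes with a Gaussian Mixture Model, can be sketched on per-location height differences between the two reconstructed surfaces. The three-class setup and the synthetic height values below are assumptions for illustration, not the paper's data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical signed height differences between the two epochs' surfaces:
# near zero for unchanged ground, positive for new buildings, negative for
# demolished ones.
dh = np.concatenate([
    rng.normal(0.0, 0.2, 500),    # unchanged
    rng.normal(8.0, 0.5, 60),     # new construction
    rng.normal(-6.0, 0.5, 40),    # demolition
])[:, None]

gmm = GaussianMixture(n_components=3, random_state=0).fit(dh)
labels = gmm.predict(dh)
# Order components by mean height change to name the categories.
order = np.argsort(gmm.means_.ravel())
names = {order[0]: "demolition", order[1]: "unchanged",
         order[2]: "new construction"}
```

Ordering the components by their means gives the class semantics without any labels, which is what keeps the categorisation step unsupervised.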
arXiv Detail & Related papers (2023-07-28T09:26:00Z)
- Improving LiDAR 3D Object Detection via Range-based Point Cloud Density Optimization [13.727464375608765]
Existing 3D object detectors tend to perform better on point cloud regions closer to the LiDAR sensor than on regions farther away.
We observe that there is a learning bias in detection models towards the dense objects near the sensor and show that the detection performance can be improved by simply manipulating the input point cloud density at different distance ranges.
arXiv Detail & Related papers (2023-06-09T04:11:43Z)
- Deep Metric Learning for Unsupervised Remote Sensing Change Detection [60.89777029184023]
Remote Sensing Change Detection (RS-CD) aims to detect relevant changes from Multi-Temporal Remote Sensing Images (MT-RSIs).
The performance of existing RS-CD methods hinges on training on large annotated datasets.
This paper proposes an unsupervised CD method based on deep metric learning that can deal with both of these issues.
arXiv Detail & Related papers (2023-03-16T17:52:45Z)
- Unsupervised 4D LiDAR Moving Object Segmentation in Stationary Settings with Multivariate Occupancy Time Series [62.997667081978825]
We address the problem of unsupervised moving object segmentation (MOS) in 4D LiDAR data recorded from a stationary sensor, where no ground truth annotations are involved.
We propose a novel 4D LiDAR representation based on a time series that relaxes the problem of unsupervised MOS.
Experiments on stationary scenes from the Raw KITTI dataset show that our fully unsupervised approach achieves performance that is comparable to that of supervised state-of-the-art approaches.
arXiv Detail & Related papers (2022-12-30T14:48:14Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- Weighted Bayesian Gaussian Mixture Model for Roadside LiDAR Object Detection [0.5156484100374059]
Background modeling is widely used for intelligent surveillance systems to detect moving targets by subtracting the static background components.
Most roadside LiDAR object detection methods filter out foreground points by comparing new data points to pre-trained background references.
In this paper, we transform the raw LiDAR data into a structured representation based on the elevation and azimuth value of each LiDAR point.
The proposed method was compared against two state-of-the-art roadside LiDAR background models, a computer vision benchmark, and deep learning baselines, evaluated at the point, object, and path levels under heavy traffic and challenging weather.
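The elevation/azimuth structuring step can be sketched as binning each point's spherical angles into a 2D grid and keeping one range value per cell. The grid resolution and the nearest-return rule below are illustrative assumptions, not the paper's exact representation:

```python
import numpy as np

def to_azimuth_elevation_grid(points, n_az=360, n_el=64):
    """Bin raw LiDAR returns into an (elevation, azimuth) range image."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    el = np.arcsin(z / r)                      # elevation in [-pi/2, pi/2]
    ai = ((az + np.pi) / (2 * np.pi) * n_az).astype(int).clip(0, n_az - 1)
    ei = ((el + np.pi / 2) / np.pi * n_el).astype(int).clip(0, n_el - 1)
    grid = np.full((n_el, n_az), np.inf)
    # Keep the nearest return per cell; a background model can then flag
    # foreground points whose range falls well below the cell's reference.
    np.minimum.at(grid, (ei, ai), r)
    return grid

# Two returns: one 1 m straight ahead, one 2 m to the left of the sensor.
grid = to_azimuth_elevation_grid(np.array([[1.0, 0.0, 0.0],
                                           [0.0, 2.0, 0.0]]))
```

This structured representation makes per-cell comparison against a background reference a simple array operation, which is what enables real-time filtering at the roadside.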
arXiv Detail & Related papers (2022-04-20T22:48:05Z)
- Roadside Lidar Vehicle Detection and Tracking Using Range And Intensity Background Subtraction [0.0]
We present a solution for roadside LiDAR object detection using a combination of two unsupervised learning algorithms.
The method was validated against a commercial traffic data collection platform.
arXiv Detail & Related papers (2022-01-13T00:54:43Z)
- SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performances on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.