Effective Utilisation of Multiple Open-Source Datasets to Improve
Generalisation Performance of Point Cloud Segmentation Models
- URL: http://arxiv.org/abs/2211.15877v1
- Date: Tue, 29 Nov 2022 02:31:01 GMT
- Authors: Matthew Howe, Boris Repasky, Timothy Payne
- Abstract summary: Semantic segmentation of aerial point cloud data can be utilised to differentiate which points belong to classes such as ground, buildings, or vegetation.
Point clouds generated from aerial sensors mounted to drones or planes can utilise LIDAR sensors or cameras along with photogrammetry.
We show that a naive combination of datasets produces a model with improved generalisation performance as expected.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Semantic segmentation of aerial point cloud data can be utilised to
differentiate which points belong to classes such as ground, buildings, or
vegetation. Point clouds generated from aerial sensors mounted to drones or
planes can utilise LIDAR sensors or cameras along with photogrammetry. Each
method of data collection contains unique characteristics which can be learnt
independently with state-of-the-art point cloud segmentation models. Utilising
a single point cloud segmentation model can be desirable in situations where
point cloud sensors, quality, and structures can change. In these situations it
is desirable that the segmentation model can handle these variations with
predictable and consistent results. Although deep learning can segment point
clouds accurately, it often suffers in generalisation, adapting poorly to data
that differs from the training data. To address this issue, we propose to
utilise multiple available open source fully annotated datasets to train and
test models that are better able to generalise.
In this paper we discuss the combination of these datasets into a simple
training set and challenging test set. Combining datasets allows us to evaluate
generalisation performance on known variations in the point cloud data. We show
that a naive combination of datasets produces a model with improved
generalisation performance as expected. We go on to show that an improved
sampling strategy which decreases sampling variations increases the
generalisation performance substantially on top of this. Experiments to find
which sample variations give this performance boost found that consistent
densities are the most important.
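The abstract's central finding is that consistent point densities matter most when combining datasets. A common way to normalise density across heterogeneous point clouds is voxel-grid downsampling, sketched below with NumPy. This is an illustrative assumption, not the authors' actual pipeline; the dataset sizes and voxel size are made up for the example.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Reduce a point cloud to roughly one point per occupied voxel.

    points: (N, 3) float array of x, y, z coordinates.
    Returns the centroid of the points in each occupied voxel, which caps
    the local density at about one point per voxel_size cube.
    """
    # Assign every point to an integer voxel index.
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel and average them.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    sums = np.zeros((counts.shape[0], 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Combine two hypothetical open-source datasets at a shared density,
# e.g. dense photogrammetry points alongside sparser aerial LiDAR.
rng = np.random.default_rng(0)
dense_cloud = rng.uniform(0, 10, size=(50_000, 3))
sparse_cloud = rng.uniform(0, 10, size=(5_000, 3))
combined = np.vstack([voxel_downsample(c, voxel_size=0.5)
                      for c in (dense_cloud, sparse_cloud)])
```

After downsampling, both sources contribute at most one point per 0.5 m voxel, so a segmentation model trained on the combined set never sees the raw density gap between the two sensors.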
Related papers
- InvariantOODG: Learning Invariant Features of Point Clouds for Out-of-Distribution Generalization [17.96808017359983]
We propose InvariantOODG, which learns invariability between point clouds with different distributions.
We define a set of learnable anchor points that locate the most useful local regions and two types of transformations to augment the input point clouds.
The experimental results demonstrate the effectiveness of the proposed model on 3D domain generalization benchmarks.
arXiv Detail & Related papers (2024-01-08T09:41:22Z)
- Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif)
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z)
- Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud Analysis [74.00441177577295]
Point cloud analysis incurs substantial computational overhead, limiting its application on mobile or edge devices.
This paper explores feature distillation for lightweight point cloud models.
We propose bidirectional knowledge reconfiguration to distill informative contextual knowledge from the teacher to the student.
arXiv Detail & Related papers (2023-10-08T11:32:50Z)
- Test-Time Adaptation for Point Cloud Upsampling Using Meta-Learning [17.980649681325406]
We propose a test-time adaptation approach to enhance the generality of point cloud upsampling models.
The proposed approach leverages meta-learning to explicitly learn network parameters for test-time adaptation.
Our framework is generic and can be applied in a plug-and-play manner with existing backbone networks in point cloud upsampling.
arXiv Detail & Related papers (2023-08-31T06:44:59Z)
- Synthetic-to-Real Domain Generalized Semantic Segmentation for 3D Indoor Point Clouds [69.64240235315864]
This paper introduces the synthetic-to-real domain generalization setting to this task.
The domain gap between synthetic and real-world point cloud data mainly lies in the different layouts and point patterns.
Experiments on the synthetic-to-real benchmark demonstrate that both CINMix and multi-prototypes can narrow the distribution gap.
arXiv Detail & Related papers (2022-12-09T05:07:43Z)
- Point-Syn2Real: Semi-Supervised Synthetic-to-Real Cross-Domain Learning for Object Classification in 3D Point Clouds [14.056949618464394]
Object classification using LiDAR 3D point cloud data is critical for modern applications such as autonomous driving.
We propose a semi-supervised cross-domain learning approach that does not rely on manual annotations of point clouds.
We introduce Point-Syn2Real, a new benchmark dataset for cross-domain learning on point clouds.
arXiv Detail & Related papers (2022-10-31T01:53:51Z)
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- Unsupervised Representation Learning for 3D Point Cloud Data [66.92077180228634]
We propose a simple yet effective approach for unsupervised point cloud learning.
In particular, we identify a very useful transformation which generates a good contrastive version of an original point cloud.
We conduct experiments on three downstream tasks which are 3D object classification, shape part segmentation and scene segmentation.
arXiv Detail & Related papers (2021-10-13T10:52:45Z)
- Point Cloud Pre-training by Mixing and Disentangling [35.18101910728478]
Mixing and Disentangling (MD) is a self-supervised learning approach for point cloud pre-training.
We show that an encoder pre-trained with MD significantly surpasses the same encoder trained from scratch and converges quickly.
We hope this self-supervised learning attempt on point clouds can pave the way for reducing the deeply-learned model dependence on large-scale labeled data.
arXiv Detail & Related papers (2021-09-01T15:52:18Z)
- Dataset Cartography: Mapping and Diagnosing Datasets with Training Dynamics [118.75207687144817]
We introduce Data Maps, a model-based tool to characterize and diagnose datasets.
We leverage a largely ignored source of information: the behavior of the model on individual instances during training.
Our results indicate that a shift in focus from quantity to quality of data could lead to robust models and improved out-of-distribution generalization.
arXiv Detail & Related papers (2020-09-22T20:19:41Z)
- Airborne LiDAR Point Cloud Classification with Graph Attention Convolution Neural Network [5.69168146446103]
We present a graph attention convolution neural network (GACNN) that can be directly applied to the classification of unstructured 3D point clouds obtained by airborne LiDAR.
Based on the proposed graph attention convolution module, we further design an end-to-end encoder-decoder network, named GACNN, to capture multiscale features of the point clouds.
Experiments on the ISPRS 3D labeling dataset show that the proposed model achieves a new state-of-the-art performance in terms of average F1 score (71.5%) and a satisfying overall accuracy (83.2%)
arXiv Detail & Related papers (2020-04-20T05:12:31Z)
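The GACNN entry above rests on attention-weighted neighbourhood aggregation over unstructured points. As a rough illustration of that idea (not the paper's actual module), the sketch below scores each point's k nearest neighbours with a softmax over negative squared distances, a hand-crafted stand-in for the learned attention function, and returns the weighted feature mean.

```python
import numpy as np

def attention_aggregate(points, features, k=8):
    """One attention-weighted neighbourhood aggregation step.

    points: (N, 3) coordinates, features: (N, C) per-point features.
    Each output row is a convex combination of its k nearest
    neighbours' features (the point itself is its own nearest
    neighbour at distance zero).
    """
    # Pairwise squared distances; fine for small N, use a KD-tree at scale.
    diff = points[:, None, :] - points[None, :, :]
    d2 = (diff ** 2).sum(-1)
    # Indices of the k nearest neighbours per point.
    nbrs = np.argsort(d2, axis=1)[:, :k]
    # Closer neighbours get higher scores (proxy for learned attention).
    scores = -np.take_along_axis(d2, nbrs, axis=1)
    # Softmax over each neighbourhood, stabilised by the row max.
    scores -= scores.max(axis=1, keepdims=True)
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)
    # Attention-weighted mean of neighbour features: (N, C).
    return (w[..., None] * features[nbrs]).sum(axis=1)

rng = np.random.default_rng(1)
pts = rng.normal(size=(100, 3))
feats = rng.normal(size=(100, 16))
out = attention_aggregate(pts, feats, k=8)
```

In GACNN-style networks this distance heuristic would be replaced by a small learned MLP over relative positions and features, and the aggregation stacked inside an encoder-decoder to capture multiscale structure.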
This list is automatically generated from the titles and abstracts of the papers on this site.