Quantifying point cloud realism through adversarially learned latent
representations
- URL: http://arxiv.org/abs/2109.11775v1
- Date: Fri, 24 Sep 2021 07:17:27 GMT
- Title: Quantifying point cloud realism through adversarially learned latent
representations
- Authors: Larissa T. Triess, David Peter, Stefan A. Baur, J. Marius Zöllner
- Abstract summary: This paper presents a novel approach to quantify the realism of local regions in LiDAR point clouds.
The resulting metric can assign a quality score to samples without requiring any task-specific annotations.
As one important application, we demonstrate how the local realism score can be used for anomaly detection in point clouds.
- Score: 0.38233569758620056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Judging the quality of samples synthesized by generative models can be
tedious and time consuming, especially for complex data structures, such as
point clouds. This paper presents a novel approach to quantify the realism of
local regions in LiDAR point clouds. Relevant features are learned from
real-world and synthetic point clouds by training on a proxy classification
task. Inspired by fair networks, we use an adversarial technique to discourage
the encoding of dataset-specific information. The resulting metric can assign a
quality score to samples without requiring any task-specific annotations.
In a series of experiments, we confirm the soundness of our metric by
applying it in controllable task setups and on unseen data. Additional
experiments show reliable interpolation capabilities of the metric between data
with varying degrees of realism. As one important application, we demonstrate
how the local realism score can be used for anomaly detection in point clouds.
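The abstract describes the method only at a high level, so the following is a minimal, hypothetical PyTorch sketch of the stated idea: a point-cloud encoder is trained on a proxy real-vs-synthetic classification task while a gradient-reversed adversarial head discourages the latent code from carrying dataset-specific information, and the classifier's "real" probability for a local region is read out as its realism score. All module names, dimensions, and the use of a gradient-reversal layer are assumptions for illustration, not the authors' published implementation.

```python
# Hypothetical sketch: adversarially regularized proxy classifier whose
# "real" probability serves as a per-region realism score.
# Names, dimensions, and the gradient-reversal trick are assumptions,
# not the authors' published implementation.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


class RealismScorer(nn.Module):
    def __init__(self, latent_dim=128, num_datasets=3):
        super().__init__()
        # Shared point encoder (PointNet-style): per-point MLP + max pooling.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, latent_dim), nn.ReLU(),
        )
        # Proxy-task head: real vs. synthetic.
        self.realism_head = nn.Linear(latent_dim, 2)
        # Adversarial head: tries to identify the source dataset; the
        # gradient reversal discourages the latent from encoding it.
        self.dataset_head = nn.Linear(latent_dim, num_datasets)

    def forward(self, points):
        # points: (batch, num_points, 3) local regions of a LiDAR scan.
        z = self.point_mlp(points).max(dim=1).values  # (batch, latent_dim)
        realism_logits = self.realism_head(z)
        dataset_logits = self.dataset_head(GradReverse.apply(z))
        return realism_logits, dataset_logits

    @torch.no_grad()
    def realism_score(self, points):
        # Probability of the "real" class, usable as a local quality score.
        logits, _ = self.forward(points)
        return torch.softmax(logits, dim=-1)[:, 1]


# Example training step (all labels are placeholders):
model = RealismScorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

points = torch.randn(8, 256, 3)         # 8 local regions, 256 points each
is_real = torch.randint(0, 2, (8,))     # proxy label: real vs. synthetic
dataset_id = torch.randint(0, 3, (8,))  # which dataset the region came from

realism_logits, dataset_logits = model(points)
loss = ce(realism_logits, is_real) + ce(dataset_logits, dataset_id)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

For the anomaly-detection use case mentioned above, regions whose realism score falls below a chosen threshold would be flagged; the threshold and the exact definition of a local region are application choices not specified here.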
Related papers
- Learning Continuous Implicit Field with Local Distance Indicator for
Arbitrary-Scale Point Cloud Upsampling [55.05706827963042]
Point cloud upsampling aims to generate dense and uniformly distributed point sets from a sparse point cloud.
Previous methods typically split a sparse point cloud into several local patches, upsample patch points, and merge all upsampled patches.
We propose a novel approach that learns an unsigned distance field guided by local priors for point cloud upsampling.
arXiv Detail & Related papers (2023-12-23T01:52:14Z) - PU-Ray: Domain-Independent Point Cloud Upsampling via Ray Marching on Neural Implicit Surface [5.78575346449322]
We propose a new ray-based upsampling approach with an arbitrary rate, where a depth prediction is made for each query ray and its corresponding patch.
Our novel method simulates the sphere-tracing ray marching algorithm on the neural implicit surface defined with an unsigned distance function (UDF); a minimal sketch of this marching loop appears after this list.
The rule-based mid-point query sampling method generates more evenly distributed points without requiring an end-to-end model trained using a nearest-neighbor-based reconstruction loss function.
arXiv Detail & Related papers (2023-10-12T22:45:03Z) - Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud
Analysis [74.00441177577295]
Point cloud analysis incurs substantial computational overhead, limiting its application on mobile or edge devices.
This paper explores feature distillation for lightweight point cloud models.
We propose bidirectional knowledge reconfiguration to distill informative contextual knowledge from the teacher to the student.
arXiv Detail & Related papers (2023-10-08T11:32:50Z) - Test-Time Adaptation for Point Cloud Upsampling Using Meta-Learning [17.980649681325406]
We propose a test-time adaptation approach to enhance the generality of point cloud upsampling models.
The proposed approach leverages meta-learning to explicitly learn network parameters for test-time adaptation.
Our framework is generic and can be applied in a plug-and-play manner with existing backbone networks in point cloud upsampling.
arXiv Detail & Related papers (2023-08-31T06:44:59Z) - Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z) - A Realism Metric for Generated LiDAR Point Clouds [2.6205925938720833]
This paper presents a novel metric to quantify the realism of LiDAR point clouds.
Relevant features are learned from real-world and synthetic point clouds by training on a proxy classification task.
In a series of experiments, we demonstrate the application of our metric to determine the realism of generated LiDAR data and compare the realism estimation of our metric to the performance of a segmentation model.
arXiv Detail & Related papers (2022-08-31T16:37:57Z) - CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z) - Self-Supervised Arbitrary-Scale Point Clouds Upsampling via Implicit
Neural Representation [79.60988242843437]
We propose a novel approach that achieves self-supervised and magnification-flexible point clouds upsampling simultaneously.
Experimental results demonstrate that our self-supervised learning based scheme achieves competitive or even better performance than supervised learning based state-of-the-art methods.
arXiv Detail & Related papers (2022-04-18T07:18:25Z) - Quasi-Balanced Self-Training on Noise-Aware Synthesis of Object Point
Clouds for Closing Domain Gap [34.590531549797355]
We propose an integrated scheme consisting of physically realistic synthesis of object point clouds by rendering stereo images with speckle patterns projected onto CAD models.
Experiment results can verify the effectiveness of our method as well as both of its modules for unsupervised domain adaptation on point cloud classification.
arXiv Detail & Related papers (2022-03-08T03:44:49Z) - SPU-Net: Self-Supervised Point Cloud Upsampling by Coarse-to-Fine
Reconstruction with Self-Projection Optimization [52.20602782690776]
It is expensive and tedious to obtain large-scale paired sparse-dense point sets for training from real scanned sparse data.
We propose a self-supervised point cloud upsampling network, named SPU-Net, to capture the inherent upsampling patterns of points lying on the underlying object surface.
We conduct various experiments on both synthetic and real-scanned datasets, and the results demonstrate that we achieve comparable performance to the state-of-the-art supervised methods.
arXiv Detail & Related papers (2020-12-08T14:14:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.