Toward Unsupervised 3D Point Cloud Anomaly Detection using Variational
Autoencoder
- URL: http://arxiv.org/abs/2304.03420v1
- Date: Fri, 7 Apr 2023 00:02:37 GMT
- Title: Toward Unsupervised 3D Point Cloud Anomaly Detection using Variational
Autoencoder
- Authors: Mana Masuda, Ryo Hachiuma, Ryo Fujii, Hideo Saito, Yusuke Sekikawa
- Abstract summary: We present an end-to-end unsupervised anomaly detection framework for 3D point clouds.
We propose a deep variational autoencoder-based unsupervised anomaly detection network adapted to the 3D point cloud and an anomaly score specifically for 3D point clouds.
- Score: 10.097126085083827
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this paper, we present an end-to-end unsupervised anomaly detection
framework for 3D point clouds. To the best of our knowledge, this is the first
work to tackle the anomaly detection task on a general object represented by a
3D point cloud. We propose a deep variational autoencoder-based unsupervised
anomaly detection network adapted to the 3D point cloud and an anomaly score
specifically for 3D point clouds. To verify the effectiveness of the model, we
conducted extensive experiments on the ShapeNet dataset. Through quantitative
and qualitative evaluation, we demonstrate that the proposed method outperforms
the baseline method. Our code is available at
https://github.com/llien30/point_cloud_anomaly_detection.
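The abstract above only names the ingredients of the method (a variational autoencoder adapted to 3D point clouds plus a point-cloud-specific anomaly score), so the following is a minimal sketch of that general idea rather than the authors' implementation: the PointNet-style encoder, MLP decoder, symmetric Chamfer reconstruction term, and KL-weighted score are all illustrative assumptions; the exact architecture and score are defined in the paper and the linked repository.

```python
# Minimal sketch of a VAE-style anomaly score for point clouds (PyTorch).
# Encoder/decoder sizes, the Chamfer reconstruction term, and the KL weighting
# are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class PointCloudVAE(nn.Module):
    def __init__(self, num_points=1024, latent_dim=128):
        super().__init__()
        self.num_points = num_points
        # Per-point MLP followed by max pooling (PointNet-like global feature).
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        # Decoder maps a latent code back to N x 3 coordinates.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3),
        )

    def forward(self, x):                                 # x: (B, N, 3)
        feat = self.point_mlp(x).max(dim=1).values        # (B, 256) global feature
        mu, logvar = self.fc_mu(feat), self.fc_logvar(feat)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        recon = self.decoder(z).view(-1, self.num_points, 3)
        return recon, mu, logvar


def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a, b of shape (B, N, 3)."""
    d = torch.cdist(a, b)                                 # (B, N, N) pairwise distances
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)


def anomaly_score(model, x, kl_weight=1e-3):
    """Higher score = more anomalous (reconstruction error + weighted KL term)."""
    recon, mu, logvar = model(x)
    rec = chamfer_distance(x, recon)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
    return rec + kl_weight * kl


if __name__ == "__main__":
    model = PointCloudVAE()
    clouds = torch.rand(4, 1024, 3)            # dummy batch of point clouds
    print(anomaly_score(model, clouds))        # one score per sample
```

In such a setup the network would be trained on normal samples only, so both the reconstruction and KL terms stay small for normal shapes and grow for anomalous ones.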
Related papers
- Point Cloud Novelty Detection Based on Latent Representations of a General Feature Extractor [9.11903730548763]
We propose an effective unsupervised 3D point cloud novelty detection approach, leveraging a general point cloud feature extractor and a one-class classifier.
Compared to existing methods measuring the reconstruction error in 3D coordinate space, our approach utilizes latent representations where the shape information is condensed.
We confirm that our general feature extractor can extract shape features of unseen categories, eliminating the need for autoencoder re-training and reducing the computational burden (a minimal sketch of this latent-feature pipeline appears after this list).
arXiv Detail & Related papers (2024-10-13T14:42:43Z) - Towards Scalable 3D Anomaly Detection and Localization: A Benchmark via
3D Anomaly Synthesis and A Self-Supervised Learning Network [22.81108868492533]
We propose a 3D anomaly synthesis pipeline to adapt existing large-scale 3D models for 3D anomaly detection.
Anomaly-ShapeNet consists of 1,600 point cloud samples across 40 categories, providing a rich and varied collection of data.
We also propose a self-supervised method, i.e., Iterative Mask Reconstruction Network (IMRNet), to enable scalable representation learning for 3D anomaly localization.
arXiv Detail & Related papers (2023-11-25T01:45:09Z) - StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves performance comparable to the state of the art on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z) - 3D Cascade RCNN: High Quality Object Detection in Point Clouds [122.42455210196262]
We present 3D Cascade RCNN, which allocates multiple detectors based on the voxelized point clouds in a cascade paradigm.
We validate the superiority of our proposed 3D Cascade RCNN, when comparing to state-of-the-art 3D object detection techniques.
arXiv Detail & Related papers (2022-11-15T15:58:36Z) - 3DVerifier: Efficient Robustness Verification for 3D Point Cloud Models [17.487852393066458]
Existing verification methods for point cloud models are time-consuming and computationally unattainable on large networks.
We propose 3DVerifier to tackle both challenges by adopting a linear relaxation function to bound the multiplication layer and combining forward and backward propagation.
Our approach achieves an orders-of-magnitude improvement in verification efficiency for large networks, and the obtained certified bounds are also significantly tighter than those of state-of-the-art verifiers.
arXiv Detail & Related papers (2022-07-15T15:31:16Z) - Deep Point Cloud Reconstruction [74.694733918351]
Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have separately addressed densifying, denoising, and completing inaccurate point clouds.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) refinement via transformers that converts the discrete voxels into 3D points.
arXiv Detail & Related papers (2021-11-23T07:53:28Z) - Anchor-free 3D Single Stage Detector with Mask-Guided Attention for
Point Cloud [79.39041453836793]
We develop a novel single-stage 3D detector for point clouds in an anchor-free manner.
We overcome the sparse, irregular nature of point clouds by converting the voxel-based sparse 3D feature volumes into sparse 2D feature maps.
We propose an IoU-based detection confidence re-calibration scheme to improve the correlation between the detection confidence score and the accuracy of the bounding box regression.
arXiv Detail & Related papers (2021-08-08T13:42:13Z) - ST3D: Self-training for Unsupervised Domain Adaptation on 3D
Object Detection [78.71826145162092]
We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds.
Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on KITTI 3D object detection benchmark.
arXiv Detail & Related papers (2021-03-09T10:51:24Z) - Dynamic Edge Weights in Graph Neural Networks for 3D Object Detection [0.0]
We propose an attention based feature aggregation technique in graph neural network (GNN) for detecting objects in LiDAR scan.
In each layer of the GNN, apart from the linear transformation which maps the per node input features to the corresponding higher level features, a per node masked attention is also performed.
The experiments on KITTI dataset show that our method yields comparable results for 3D object detection.
arXiv Detail & Related papers (2020-09-17T12:56:17Z) - InfoFocus: 3D Object Detection for Autonomous Driving with Dynamic
Information Modeling [65.47126868838836]
We propose a novel 3D object detection framework with dynamic information modeling.
Coarse predictions are generated in the first stage via a voxel-based region proposal network.
Experiments are conducted on the large-scale nuScenes 3D detection benchmark.
arXiv Detail & Related papers (2020-07-16T18:27:08Z)
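The "Point Cloud Novelty Detection" entry above describes its pipeline only at a high level (a general point cloud feature extractor feeding a one-class classifier over latent representations), so the sketch below is a hypothetical assembly of those pieces: PretrainedPointEncoder is a stand-in for whatever frozen extractor is used, and the One-Class SVM is an assumed classifier choice, not necessarily the paper's.

```python
# Hedged sketch of latent-representation novelty detection: embed point clouds
# with a frozen, pretrained feature extractor and fit a one-class classifier on
# normal-class features only. The encoder below is a toy placeholder.
import numpy as np
from sklearn.svm import OneClassSVM


class PretrainedPointEncoder:
    """Placeholder for a frozen, general-purpose point cloud feature extractor."""
    def __call__(self, clouds: np.ndarray) -> np.ndarray:   # (B, N, 3) -> (B, D)
        # Toy embedding: per-cloud statistics stand in for learned shape features.
        return np.concatenate([clouds.mean(axis=1), clouds.std(axis=1)], axis=1)


encoder = PretrainedPointEncoder()
normal_train = np.random.rand(200, 1024, 3)   # normal-class training clouds
test_clouds = np.random.rand(10, 1024, 3)     # mixed normal/novel test clouds

# Fit the one-class classifier on latent features of normal data only.
clf = OneClassSVM(kernel="rbf", nu=0.1).fit(encoder(normal_train))

# decision_function: lower (more negative) values indicate likely novelty.
novelty_scores = -clf.decision_function(encoder(test_clouds))
print(novelty_scores)
```

Because the extractor stays frozen, scoring a new category only requires a forward pass and the classifier, which is what lets the approach avoid autoencoder re-training.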