Uni-3DAD: GAN-Inversion Aided Universal 3D Anomaly Detection on Model-free Products
- URL: http://arxiv.org/abs/2408.16201v2
- Date: Sun, 10 Nov 2024 01:04:58 GMT
- Title: Uni-3DAD: GAN-Inversion Aided Universal 3D Anomaly Detection on Model-free Products
- Authors: Jiayu Liu, Shancong Mou, Nathan Gaw, Yinan Wang
- Abstract summary: We propose a unified, unsupervised 3D anomaly detection framework capable of identifying all types of defects on model-free products.
Our method integrates two detection modules: a feature-based detection module and a reconstruction-based detection module.
Results demonstrate that our proposed method outperforms the state-of-the-art methods in identifying incomplete shapes.
- Score: 8.696255342823896
- Abstract: Anomaly detection is a long-standing challenge in manufacturing systems. Traditionally, anomaly detection has relied on human inspectors. However, 3D point clouds have gained attention due to their robustness to environmental factors and their ability to represent geometric data. Existing 3D anomaly detection methods generally fall into two categories. One compares scanned 3D point clouds with design files, assuming these files are always available. However, such assumptions are often violated in many real-world applications where model-free products exist, such as fresh produce (e.g., "Cookie", "Potato", etc.), dentures, bone, etc. The other category compares patches of scanned 3D point clouds with a library of normal patches named memory bank. However, those methods usually fail to detect incomplete shapes, which is a fairly common defect type (i.e., missing pieces of different products). The main challenge is that missing areas in 3D point clouds represent the absence of scanned points. This makes it infeasible to compare the missing region with existing point cloud patches in the memory bank. To address these two challenges, we propose a unified, unsupervised 3D anomaly detection framework capable of identifying all types of defects on model-free products. Our method integrates two detection modules: a feature-based detection module and a reconstruction-based detection module. Feature-based detection covers geometric defects, such as dents, holes, and cracks, while the reconstruction-based method detects missing regions. Additionally, we employ a One-class Support Vector Machine (OCSVM) to fuse the detection results from both modules. The results demonstrate that (1) our proposed method outperforms the state-of-the-art methods in identifying incomplete shapes and (2) it still maintains comparable performance with the SOTA methods in detecting all other types of anomalies.
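The OCSVM fusion step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each part yields one anomaly score per module (feature-based and reconstruction-based), uses scikit-learn's `OneClassSVM`, and the score values are invented for demonstration.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical per-part anomaly scores (feature-based, reconstruction-based)
# collected on defect-free training parts; real scores would come from the
# two detection modules.
train_scores = rng.normal(loc=0.1, scale=0.05, size=(200, 2))

# Fit a one-class SVM on the normal score pairs; its decision boundary then
# fuses the two modules into a single normal/anomalous verdict at test time.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(train_scores)

test_scores = np.array([[0.12, 0.09],   # both modules report low scores
                        [0.80, 0.75]])  # both modules report high scores
pred = ocsvm.predict(test_scores)       # +1 = normal, -1 = anomalous
```

A design note: fusing at the score level lets each module stay specialized (geometric defects vs. missing regions) while the OCSVM learns the joint distribution of normal scores, so a part is flagged if either module, or their combination, deviates from normality.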
Related papers
- 3CAD: A Large-Scale Real-World 3C Product Dataset for Unsupervised Anomaly Detection [22.150521360544744]
We propose a new large-scale anomaly detection dataset called 3CAD.
3CAD includes eight different types of manufactured parts, totaling 27,039 high-resolution images labeled with pixel-level anomalies.
This is the largest and first anomaly detection dataset dedicated to 3C product quality control.
arXiv Detail & Related papers (2025-02-09T03:37:54Z) - Towards Zero-shot 3D Anomaly Localization [58.62650061201283]
3DzAL is a novel patch-level contrastive learning framework for 3D anomaly detection and localization.
We show that 3DzAL outperforms the state-of-the-art anomaly detection and localization performance.
arXiv Detail & Related papers (2024-12-05T16:25:27Z) - R3D-AD: Reconstruction via Diffusion for 3D Anomaly Detection [12.207437451118036]
3D anomaly detection plays a crucial role in monitoring parts for localized inherent defects in precision manufacturing.
Embedding-based and reconstruction-based approaches are among the most popular and successful methods.
We propose R3D-AD, reconstructing anomalous point clouds by diffusion model for precise 3D anomaly detection.
arXiv Detail & Related papers (2024-07-15T16:10:58Z) - Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z) - Real3D-AD: A Dataset of Point Cloud Anomaly Detection [75.56719157477661]
We introduce Real3D-AD, a challenging high-precision point cloud anomaly detection dataset.
With 1,254 high-resolution 3D items, each containing forty thousand to millions of points, Real3D-AD is the largest dataset for high-precision 3D industrial anomaly detection.
We present a comprehensive benchmark for Real3D-AD, revealing the absence of baseline methods for high-precision point cloud anomaly detection.
arXiv Detail & Related papers (2023-09-23T00:43:38Z) - Toward Unsupervised 3D Point Cloud Anomaly Detection using Variational Autoencoder [10.097126085083827]
We present an end-to-end unsupervised anomaly detection framework for 3D point clouds.
We propose a deep variational autoencoder-based unsupervised anomaly detection network adapted to the 3D point cloud and an anomaly score specifically for 3D point clouds.
arXiv Detail & Related papers (2023-04-07T00:02:37Z) - The MVTec 3D-AD Dataset for Unsupervised 3D Anomaly Detection and Localization [17.437967037670813]
We introduce the first comprehensive 3D dataset for the task of unsupervised anomaly detection and localization.
It is inspired by real-world visual inspection scenarios in which a model has to detect various types of defects on manufactured products.
arXiv Detail & Related papers (2021-12-16T17:35:51Z) - Embracing Single Stride 3D Object Detector with Sparse Transformer [63.179720817019096]
In LiDAR-based 3D object detection for autonomous driving, the ratio of the object size to input scene size is significantly smaller compared to 2D detection cases.
Many 3D detectors directly follow the common practice of 2D detectors, which downsample the feature maps even after quantizing the point clouds.
We propose Single-stride Sparse Transformer (SST) to maintain the original resolution from the beginning to the end of the network.
arXiv Detail & Related papers (2021-12-13T02:12:02Z) - 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D Object Detection [111.32054128362427]
In safety-critical settings, robustness on out-of-distribution and long-tail samples is fundamental to circumvent dangerous issues.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share open source CrashD: a synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z) - Anchor-free 3D Single Stage Detector with Mask-Guided Attention for Point Cloud [79.39041453836793]
We develop a novel single-stage 3D detector for point clouds in an anchor-free manner.
We address the sparsity of voxel-based 3D feature volumes by converting them into sparse 2D feature maps.
We propose an IoU-based detection confidence re-calibration scheme to improve the correlation between the detection confidence score and the accuracy of the bounding box regression.
arXiv Detail & Related papers (2021-08-08T13:42:13Z) - Boundary-Aware Dense Feature Indicator for Single-Stage 3D Object Detection from Point Clouds [32.916690488130506]
We propose a universal module that helps 3D detectors focus on the densest region of the point clouds in a boundary-aware manner.
Experiments on KITTI dataset show that DENFI improves the performance of the baseline single-stage detector remarkably.
arXiv Detail & Related papers (2020-04-01T01:21:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.