DAS3D: Dual-modality Anomaly Synthesis for 3D Anomaly Detection
- URL: http://arxiv.org/abs/2410.09821v1
- Date: Sun, 13 Oct 2024 12:38:16 GMT
- Title: DAS3D: Dual-modality Anomaly Synthesis for 3D Anomaly Detection
- Authors: Kecen Li, Bingquan Dai, Jingjing Fu, Xinwen Hou
- Abstract summary: We propose a novel dual-modality augmentation method for 3D anomaly synthesis.
We introduce a reconstruction-based discriminative anomaly detection network.
Our method outperforms state-of-the-art methods in detection precision and achieves competitive segmentation performance.
- Score: 5.062312533373299
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthesizing anomaly samples has proven to be an effective strategy for self-supervised 2D industrial anomaly detection. However, this approach has rarely been explored in multi-modality anomaly detection, particularly in settings involving 3D and RGB images. In this paper, we propose a novel dual-modality augmentation method for 3D anomaly synthesis that is simple and capable of mimicking the characteristics of 3D defects. Building on our anomaly synthesis method, we introduce a reconstruction-based discriminative anomaly detection network in which a dual-modal discriminator fuses the original and reconstructed embeddings of the two modalities for anomaly detection. Additionally, we design an augmentation dropout mechanism to enhance the generalizability of the discriminator. Extensive experiments show that our method outperforms state-of-the-art methods in detection precision and achieves competitive segmentation performance on both the MVTec 3D-AD and Eyecandies datasets.
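The abstract describes a dual-modal discriminator that fuses the original and reconstructed embeddings of the RGB and 3D (depth) modalities, regularized by an augmentation dropout mechanism. Below is a minimal sketch of what such a fusion discriminator could look like; the concatenation-based fusion, the layer widths, and the exact dropout rule are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class DualModalDiscriminator(nn.Module):
    """Toy discriminator: fuses original and reconstructed RGB / depth
    feature maps and predicts a per-location anomaly logit map."""

    def __init__(self, dim: int = 64):
        super().__init__()
        # Four inputs of `dim` channels each: RGB, reconstructed RGB,
        # depth, reconstructed depth (all assumed spatially aligned).
        self.fuse = nn.Sequential(
            nn.Conv2d(4 * dim, 2 * dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(2 * dim, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, 1, 1),
        )

    def forward(self, rgb, rgb_rec, depth, depth_rec, p_drop: float = 0.3):
        # Augmentation dropout (assumed form): during training, randomly zero
        # one modality's reconstruction so the discriminator does not over-rely
        # on the reconstruction residual of a single modality.
        if self.training and torch.rand(()) < p_drop:
            if torch.rand(()) < 0.5:
                rgb_rec = torch.zeros_like(rgb_rec)
            else:
                depth_rec = torch.zeros_like(depth_rec)
        x = torch.cat([rgb, rgb_rec, depth, depth_rec], dim=1)
        return self.fuse(x)  # (B, 1, H, W) per-location anomaly logits
```

In training, the output map would be supervised with the masks of the synthesized dual-modality anomalies; that loop is omitted here.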
Related papers
- DualAnoDiff: Dual-Interrelated Diffusion Model for Few-Shot Anomaly Image Generation [40.257604426546216]
The performance of anomaly inspection in industrial manufacturing is constrained by the scarcity of anomaly data.
Existing anomaly generation methods suffer from limited diversity in the generated anomalies.
We propose DualAnoDiff, a novel diffusion-based few-shot anomaly image generation model.
arXiv Detail & Related papers (2024-08-24T08:09:32Z)
- R3D-AD: Reconstruction via Diffusion for 3D Anomaly Detection [12.207437451118036]
3D anomaly detection plays a crucial role in monitoring parts for localized inherent defects in precision manufacturing.
Embedding-based and reconstruction-based approaches are among the most popular and successful methods.
We propose R3D-AD, which reconstructs anomalous point clouds with a diffusion model for precise 3D anomaly detection.
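Reconstruction-based 3D anomaly detection generally scores a sample by how far the observed point cloud deviates from its (assumed anomaly-free) reconstruction. Below is a minimal sketch of that scoring step, with the diffusion-based reconstruction treated as a black box; the nearest-neighbour distance and the max-pooled object-level score are common choices, not necessarily R3D-AD's exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_anomaly_scores(points: np.ndarray, recon: np.ndarray) -> np.ndarray:
    """Per-point anomaly score: distance from each observed point to its
    nearest neighbour in the reconstructed (anomaly-free) point cloud."""
    tree = cKDTree(recon)          # recon: (M, 3) reconstructed points
    dists, _ = tree.query(points)  # points: (N, 3) observed points
    return dists                   # large distance -> likely anomalous

def object_anomaly_score(points: np.ndarray, recon: np.ndarray) -> float:
    # Object-level score, e.g. the maximum per-point deviation.
    return float(point_anomaly_scores(points, recon).max())
```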
arXiv Detail & Related papers (2024-07-15T16:10:58Z)
- M3DM-NR: RGB-3D Noisy-Resistant Industrial Anomaly Detection via Multimodal Denoising [63.39134873744748]
Existing industrial anomaly detection methods primarily concentrate on unsupervised learning with pristine RGB images.
This paper proposes a novel noise-resistant M3DM-NR framework that leverages the strong multi-modal discriminative capabilities of CLIP.
Extensive experiments show that M3DM-NR outperforms state-of-the-art methods in 3D-RGB multi-modal noisy anomaly detection.
arXiv Detail & Related papers (2024-06-04T12:33:02Z)
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- Generating and Reweighting Dense Contrastive Patterns for Unsupervised Anomaly Detection [59.34318192698142]
We introduce a prior-less anomaly generation paradigm and develop an innovative unsupervised anomaly detection framework named GRAD.
PatchDiff effectively exposes various types of anomaly patterns.
Experiments on both the MVTec AD and MVTec LOCO datasets support this observation.
arXiv Detail & Related papers (2023-12-26T07:08:06Z)
- Achieving state-of-the-art performance in the Medical Out-of-Distribution (MOOD) challenge using plausible synthetic anomalies [0.5677301320664404]
Unsupervised anomaly detection, or Out-of-Distribution detection, aims at identifying anomalous samples.
Our method builds upon a self-supervised strategy that trains a segmentation network to identify local synthetic anomalies.
Our contributions improve the synthetic anomaly generation process, making synthetic anomalies more heterogeneous.
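Self-supervised methods of this kind typically blend a synthetic irregularity into a normal image and train the segmentation network on the blending mask. The snippet below is a toy illustration of that general strategy (random-patch blending with a variable blending strength); the function and its parameters are hypothetical, not this paper's exact generation process.

```python
import numpy as np

def synth_local_anomaly(image: np.ndarray, source: np.ndarray, rng=np.random):
    """Blend a random patch of `source` into `image`; return the corrupted
    image and the binary mask used as the segmentation target.
    Assumes `source` has the same shape as `image`."""
    h, w = image.shape[:2]
    ph, pw = rng.randint(h // 8, h // 2), rng.randint(w // 8, w // 2)
    y, x = rng.randint(0, h - ph), rng.randint(0, w - pw)
    alpha = rng.uniform(0.2, 1.0)  # variable blending strength (heterogeneity)

    out = image.astype(np.float32).copy()
    patch = source[y:y + ph, x:x + pw].astype(np.float32)
    out[y:y + ph, x:x + pw] = (1 - alpha) * out[y:y + ph, x:x + pw] + alpha * patch

    mask = np.zeros((h, w), dtype=np.uint8)
    mask[y:y + ph, x:x + pw] = 1   # pixels the segmentation network should flag
    return out, mask
```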
arXiv Detail & Related papers (2023-08-02T20:16:13Z)
- Multimodal Industrial Anomaly Detection via Hybrid Fusion [59.16333340582885]
We propose a novel multimodal anomaly detection method with a hybrid fusion scheme.
Our model outperforms state-of-the-art (SOTA) methods in both detection and segmentation precision on the MVTec 3D-AD dataset.
arXiv Detail & Related papers (2023-03-01T15:48:27Z)
- DSR -- A dual subspace re-projection network for surface anomaly detection [9.807317669057175]
We propose DSR, an architecture based on a quantized feature-space representation with dual decoders that avoids the image-level anomaly synthesis requirement.
The experiments on the challenging real-world KSDD2 dataset show that DSR significantly outperforms other unsupervised surface anomaly detection methods.
arXiv Detail & Related papers (2022-08-02T15:15:29Z)
- Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation [70.32536356351706]
We introduce MRP-Net, which consists of a common deep network backbone with two output heads subscribing to two diverse configurations.
We derive suitable measures to quantify prediction uncertainty at both pose and joint level.
We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-03-29T07:14:58Z)
- The KFIoU Loss for Rotated Object Detection [115.334070064346]
In this paper, we argue that one effective alternative is to devise an approximate loss that can achieve trend-level alignment with the SkewIoU loss.
Specifically, we model the objects as Gaussian distributions and adopt a Kalman filter to inherently mimic the mechanism of SkewIoU.
The resulting loss, called KFIoU, is easier to implement and works better than the exact SkewIoU (a worked sketch of the Gaussian modelling follows after this list).
arXiv Detail & Related papers (2022-01-29T10:54:57Z)
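As referenced in the KFIoU entry above, the sketch below shows how a rotated box can be modelled as a 2D Gaussian and how a Kalman-filter-style product of two covariances yields an approximate overlap ratio. The normalisation and the full loss formulation in the paper differ; this only illustrates the underlying idea, with the center offset assumed to be handled by a separate center loss.

```python
import numpy as np

def box_to_gaussian(cx, cy, w, h, theta):
    """Represent a rotated box (center, size, angle) as a 2D Gaussian:
    mean = center, covariance = R diag(w^2/4, h^2/4) R^T."""
    mu = np.array([cx, cy])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    S = np.diag([w ** 2 / 4.0, h ** 2 / 4.0])
    return mu, R @ S @ R.T

def kf_overlap_ratio(box1, box2):
    """IoU-like ratio built from Gaussian 'areas' (4 * sqrt(det Sigma)),
    using a Kalman-style fusion of the two covariances."""
    _, s1 = box_to_gaussian(*box1)
    _, s2 = box_to_gaussian(*box2)
    sk = s1 @ np.linalg.inv(s1 + s2) @ s2             # fused (overlap) covariance
    area = lambda s: 4.0 * np.sqrt(np.linalg.det(s))  # equals w*h for a box
    inter = area(sk)
    return inter / (area(s1) + area(s2) - inter)
```

For two identical boxes this ratio peaks at about 1/3 rather than 1, consistent with KFIoU being a trend-level surrogate for SkewIoU (as the summary above puts it) rather than a numerically exact replacement.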