Looking 3D: Anomaly Detection with 2D-3D Alignment
- URL: http://arxiv.org/abs/2406.19393v1
- Date: Thu, 27 Jun 2024 17:59:46 GMT
- Title: Looking 3D: Anomaly Detection with 2D-3D Alignment
- Authors: Ankan Bhunia, Changjian Li, Hakan Bilen
- Abstract summary: This paper introduces a new conditional anomaly detection problem, which involves identifying anomalies in a query image by comparing it to a reference shape.
We have created a large dataset, BrokenChairs-180K, consisting of around 180K images, with diverse anomalies, geometries, and textures paired with 8,143 reference 3D shapes.
Our approach has been rigorously evaluated through comprehensive experiments, serving as a benchmark for future research in this domain.
- Score: 27.474201071615187
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic anomaly detection based on visual cues holds practical significance in various domains, such as manufacturing and product quality assessment. This paper introduces a new conditional anomaly detection problem, which involves identifying anomalies in a query image by comparing it to a reference shape. To address this challenge, we have created a large dataset, BrokenChairs-180K, consisting of around 180K images, with diverse anomalies, geometries, and textures paired with 8,143 reference 3D shapes. To tackle this task, we have proposed a novel transformer-based approach that explicitly learns the correspondence between the query image and reference 3D shape via feature alignment and leverages a customized attention mechanism for anomaly detection. Our approach has been rigorously evaluated through comprehensive experiments, serving as a benchmark for future research in this domain.
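The paper's exact transformer architecture is not reproduced here, but the core idea it describes (aligning query-image features to reference-shape features via attention and scoring the mismatch) can be sketched minimally. The following is a hypothetical NumPy illustration, not the authors' implementation: each image-patch feature cross-attends over reference-shape features, and a patch whose aligned reference feature disagrees with it receives a high anomaly score. All array shapes and the cosine-based scoring are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_anomaly_scores(img_feats, shape_feats):
    """Score each query-image patch by attending over reference-shape features.

    img_feats:   (N, D) features of N query-image patches (hypothetical).
    shape_feats: (M, D) features of M points/views of the reference 3D shape.
    Returns an (N,) score per patch: low similarity between a patch and its
    attention-aligned reference feature suggests a local mismatch (anomaly).
    """
    d = img_feats.shape[1]
    # scaled dot-product cross-attention: image patches attend to the shape
    attn = softmax(img_feats @ shape_feats.T / np.sqrt(d), axis=1)  # (N, M)
    aligned = attn @ shape_feats                                    # (N, D)
    # cosine similarity between each patch and its aligned reference feature
    sim = (img_feats * aligned).sum(axis=1) / (
        np.linalg.norm(img_feats, axis=1) * np.linalg.norm(aligned, axis=1) + 1e-8
    )
    return 1.0 - sim  # higher = more anomalous

rng = np.random.default_rng(0)
scores = cross_attention_anomaly_scores(rng.normal(size=(16, 32)),
                                        rng.normal(size=(64, 32)))
print(scores.shape)  # (16,)
```

In the paper's actual system the alignment is learned end-to-end with a customized attention mechanism rather than computed from raw features as above.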
Related papers
- R3D-AD: Reconstruction via Diffusion for 3D Anomaly Detection [12.207437451118036]
3D anomaly detection plays a crucial role in monitoring parts for localized inherent defects in precision manufacturing.
Embedding-based and reconstruction-based approaches are among the most popular and successful methods.
We propose R3D-AD, which reconstructs anomalous point clouds with a diffusion model for precise 3D anomaly detection.
arXiv Detail & Related papers (2024-07-15T16:10:58Z) - SplatPose & Detect: Pose-Agnostic 3D Anomaly Detection [18.796625355398252]
State-of-the-art algorithms are able to detect defects in increasingly difficult settings and data modalities.
We propose the novel 3D Gaussian splatting-based framework SplatPose which accurately estimates the pose of unseen views in a differentiable manner.
We achieve state-of-the-art results in training speed, inference speed, and detection performance, even when using less training data than competing methods.
arXiv Detail & Related papers (2024-04-10T08:48:09Z) - UniMODE: Unified Monocular 3D Object Detection [70.27631528933482]
We build a detector based on the bird's-eye-view (BEV) detection paradigm.
We propose an uneven BEV grid design to handle the convergence instability that arises in this unified setting.
A unified detector UniMODE is derived, which surpasses the previous state-of-the-art on the challenging Omni3D dataset.
arXiv Detail & Related papers (2024-02-28T18:59:31Z) - Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z) - 3DiffTection: 3D Object Detection with Geometry-Aware Diffusion Features [70.50665869806188]
3DiffTection is a state-of-the-art method for 3D object detection from single images.
We fine-tune a diffusion model to perform novel view synthesis conditioned on a single image.
We further train the model on target data with detection supervision.
arXiv Detail & Related papers (2023-11-07T23:46:41Z) - S$^3$-MonoDETR: Supervised Shape&Scale-perceptive Deformable Transformer for Monocular 3D Object Detection [22.424834025925076]
This paper proposes a novel "Supervised Shape&Scale-perceptive Deformable Attention" (S$^3$-DA) module for monocular 3D object detection.
arXiv Detail & Related papers (2023-09-02T12:36:38Z) - DETR4D: Direct Multi-View 3D Object Detection with Sparse Attention [50.11672196146829]
3D object detection with surround-view images is an essential task for autonomous driving.
We propose DETR4D, a Transformer-based framework that explores sparse attention and direct feature query for 3D object detection in multi-view images.
arXiv Detail & Related papers (2022-12-15T14:18:47Z) - Homography Loss for Monocular 3D Object Detection [54.04870007473932]
A differentiable loss function, termed as Homography Loss, is proposed to achieve the goal, which exploits both 2D and 3D information.
Our method outperforms other state-of-the-art methods by a large margin on the KITTI 3D dataset.
arXiv Detail & Related papers (2022-04-02T03:48:03Z) - The MVTec 3D-AD Dataset for Unsupervised 3D Anomaly Detection and Localization [17.437967037670813]
We introduce the first comprehensive 3D dataset for the task of unsupervised anomaly detection and localization.
It is inspired by real-world visual inspection scenarios in which a model has to detect various types of defects on manufactured products.
arXiv Detail & Related papers (2021-12-16T17:35:51Z) - Geometry-aware data augmentation for monocular 3D object detection [18.67567745336633]
This paper focuses on monocular 3D object detection, one of the essential modules in autonomous driving systems.
A key challenge is that the depth recovery problem is ill-posed in monocular data.
We conduct a thorough analysis to reveal how existing methods fail to robustly estimate depth when different geometry shifts occur.
We convert the aforementioned manipulations into four corresponding 3D-aware data augmentation techniques.
arXiv Detail & Related papers (2021-04-12T23:12:48Z) - Deep Continuous Fusion for Multi-Sensor 3D Object Detection [103.5060007382646]
We propose a novel 3D object detector that exploits both LiDAR and cameras to perform very accurate localization.
We design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution.
arXiv Detail & Related papers (2020-12-20T18:43:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.