C3D-AD: Toward Continual 3D Anomaly Detection via Kernel Attention with Learnable Advisor
- URL: http://arxiv.org/abs/2508.01311v1
- Date: Sat, 02 Aug 2025 10:54:55 GMT
- Title: C3D-AD: Toward Continual 3D Anomaly Detection via Kernel Attention with Learnable Advisor
- Authors: Haoquan Lu, Hanzhe Liang, Jie Zhang, Chenxi Hu, Jinbao Wang, Can Gao
- Abstract summary: 3D Anomaly Detection (AD) has shown great potential in detecting anomalies or defects in high-precision industrial products. Existing methods are typically trained in a class-specific manner and lack the capability of learning from emerging classes. We propose Continual 3D Anomaly Detection (C3D-AD), which can not only learn generalized representations for multi-class point clouds but also handle new classes emerging over time.
- Score: 9.917394249928092
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: 3D Anomaly Detection (AD) has shown great potential in detecting anomalies or defects in high-precision industrial products. However, existing methods are typically trained in a class-specific manner and lack the capability of learning from emerging classes. In this study, we propose a continual learning framework named Continual 3D Anomaly Detection (C3D-AD), which can not only learn generalized representations for multi-class point clouds but also handle new classes emerging over time. Specifically, in the feature extraction module, a Kernel Attention with random feature Layer (KAL) is introduced to efficiently extract generalized local features from the diverse product types of different tasks; it also normalizes the feature space. Then, to reconstruct data correctly and continually, an efficient Kernel Attention with learnable Advisor (KAA) mechanism is proposed, which learns the information of new categories while discarding redundant old information within both the encoder and decoder. Finally, to keep the representation consistent over tasks, a Reconstruction with Parameter Perturbation (RPP) module is proposed by designing a representation rehearsal loss function, which ensures that the model remembers previous category information and returns category-adaptive representations. Extensive experiments on three public datasets demonstrate the effectiveness of the proposed method, achieving average AUROC of 66.4%, 83.1%, and 63.4% on Real3D-AD, Anomaly-ShapeNet, and MulSen-AD, respectively.
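As a rough illustration of the KAL idea, the sketch below approximates softmax attention with a positive random feature map, which keeps the cost linear in the number of points and implicitly normalizes the feature space. This is our reading of the abstract, not the authors' implementation; the module name, feature map, and all parameters are assumptions.

```python
import torch
import torch.nn as nn

class RandomFeatureKernelAttention(nn.Module):
    """Hypothetical KAL-style layer: kernel attention via random features."""
    def __init__(self, dim: int, num_features: int = 64):
        super().__init__()
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        # Fixed random projection used by the feature map (assumption).
        self.register_buffer("omega", torch.randn(dim, num_features) / dim ** 0.5)

    def feature_map(self, x: torch.Tensor) -> torch.Tensor:
        # Positive random features approximating the softmax kernel exp(q.k).
        return torch.exp(x @ self.omega - 0.5 * x.pow(2).sum(-1, keepdim=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points, dim) local point-cloud features.
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q, k = self.feature_map(q), self.feature_map(k)
        kv = torch.einsum("bnf,bnd->bfd", k, v)                   # key-value summary
        denom = (q @ k.sum(dim=1).unsqueeze(-1)).clamp(min=1e-6)  # (batch, n, 1)
        return torch.einsum("bnf,bfd->bnd", q, kv) / denom
```

Dropping such a layer in place of softmax attention in a point-cloud encoder gives O(N) cost per layer; the learnable-advisor (KAA) and rehearsal (RPP) components described above would sit on top of a backbone like this.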
Related papers
- MC3D-AD: A Unified Geometry-aware Reconstruction Model for Multi-category 3D Anomaly Detection [33.46875410103838]
This paper presents a novel unified model for Multi-Category 3D Anomaly Detection (MC3D-AD). It aims to utilize both local and global geometry-aware information to reconstruct normal representations of all categories. MC3D-AD is evaluated on two publicly available datasets, Real3D-AD and Anomaly-ShapeNet.
arXiv Detail & Related papers (2025-05-04T02:38:10Z) - 3D-AffordanceLLM: Harnessing Large Language Models for Open-Vocabulary Affordance Detection in 3D Worlds [81.14476072159049]
3D affordance detection is a challenging problem with broad applications to various robotic tasks. We reformulate the traditional affordance detection paradigm into an Instruction Reasoning Affordance Segmentation (IRAS) task. We propose 3D-ADLLM, a framework designed for reasoning affordance detection in open 3D scenes.
arXiv Detail & Related papers (2025-02-27T12:29:44Z) - Masked Generative Extractor for Synergistic Representation and 3D Generation of Point Clouds [6.69660410213287]
We propose an innovative framework called Point-MGE to explore the benefits of deeply integrating 3D representation learning and generative learning.
In shape classification, Point-MGE achieved an accuracy of 94.2% (+1.0%) on the ModelNet40 dataset and 92.9% (+5.5%) on the ScanObjectNN dataset.
Experimental results also confirmed that Point-MGE can generate high-quality 3D shapes in both unconditional and conditional settings.
arXiv Detail & Related papers (2024-06-25T07:57:03Z) - OV-Uni3DETR: Towards Unified Open-Vocabulary 3D Object Detection via Cycle-Modality Propagation [67.56268991234371]
OV-Uni3DETR achieves state-of-the-art performance across various scenarios, surpassing existing methods by more than 6% on average.
Code and pre-trained models will be released later.
arXiv Detail & Related papers (2024-03-28T17:05:04Z) - FILP-3D: Enhancing 3D Few-shot Class-incremental Learning with Pre-trained Vision-Language Models [59.13757801286343]
Few-shot class-incremental learning aims to mitigate the catastrophic forgetting issue when a model is incrementally trained on limited data. We introduce the FILP-3D framework with two novel components: the Redundant Feature Eliminator (RFE) for feature space misalignment and the Spatial Noise Compensator (SNC) for significant noise.
arXiv Detail & Related papers (2023-12-28T14:52:07Z) - I3DOD: Towards Incremental 3D Object Detection via Prompting [31.75287371048825]
We present a novel Incremental 3D Object Detection framework with the guidance of prompting, i.e., I3DOD.
Specifically, we propose a task-shared prompts mechanism to learn the matching relationships between the object localization information and category semantic information.
Our method outperforms the state-of-the-art object detection methods by 0.6% - 2.7% in terms of mAP@0.25.
arXiv Detail & Related papers (2023-08-24T02:54:38Z) - Class-Specific Semantic Reconstruction for Open Set Recognition [101.24781422480406]
Open set recognition enables deep neural networks (DNNs) to identify samples of unknown classes.
We propose a novel method, called Class-Specific Semantic Reconstruction (CSSR), that integrates the power of auto-encoder (AE) and prototype learning.
Results of experiments conducted on multiple datasets show that the proposed method achieves outstanding performance in both closed-set and open-set recognition.
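For intuition, the class-specific reconstruction idea can be sketched as below: one lightweight auto-encoder per known class, with reconstruction error doubling as the open-set score. This is an abstraction of the abstract, not the released CSSR code; all names are hypothetical.

```python
import torch
import torch.nn as nn

class ClassAutoEncoder(nn.Module):
    """One lightweight auto-encoder per known class (hypothetical)."""
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.enc = nn.Linear(dim, bottleneck)
        self.dec = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(torch.relu(self.enc(x)))

def open_set_score(feats: torch.Tensor, aes: list) -> torch.Tensor:
    # Reconstruction error under each class-specific AE; the minimum serves
    # as both a closed-set class score and an open-set rejection score.
    errs = torch.stack([(ae(feats) - feats).pow(2).mean(-1) for ae in aes], dim=-1)
    return errs.min(dim=-1).values  # large value => likely an unknown class
```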
arXiv Detail & Related papers (2022-07-05T16:25:34Z) - The Devil is in the Task: Exploiting Reciprocal Appearance-Localization Features for Monocular 3D Object Detection [62.1185839286255]
Low-cost monocular 3D object detection plays a fundamental role in autonomous driving.
We introduce a Dynamic Feature Reflecting Network, named DFR-Net.
We rank 1st among all monocular 3D object detectors on the KITTI test set.
arXiv Detail & Related papers (2021-12-28T07:31:18Z) - Static-Dynamic Co-Teaching for Class-Incremental 3D Object Detection [71.18882803642526]
Deep learning approaches have shown remarkable performance in the 3D object detection task.
They suffer from a catastrophic performance drop when incrementally learning new classes without revisiting the old data.
This "catastrophic forgetting" phenomenon impedes the deployment of 3D object detection approaches in real-world scenarios.
We present the first solution, SDCoT, a novel static-dynamic co-teaching method.
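Read as pseudocode, a static-dynamic co-teaching recipe might look like the hedged sketch below: a frozen static teacher (the old-class model) and an EMA dynamic teacher both regularize the student on new data, so old data need not be revisited. This is our abstraction, not the SDCoT release; the function names and loss weighting are assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def make_teachers(student: torch.nn.Module):
    # Static teacher: frozen copy of the model trained on old classes.
    static = copy.deepcopy(student).eval()
    for p in static.parameters():
        p.requires_grad_(False)
    # Dynamic teacher: EMA copy updated alongside the student.
    dynamic = copy.deepcopy(student).eval()
    return static, dynamic

def ema_update(dynamic, student, decay: float = 0.999):
    # Dynamic teacher follows the student with an exponential moving average.
    with torch.no_grad():
        for pt, ps in zip(dynamic.parameters(), student.parameters()):
            pt.mul_(decay).add_(ps, alpha=1.0 - decay)

def co_teaching_loss(student_out, static_out, dynamic_out, sup_loss):
    # Supervised loss on new classes plus distillation toward both teachers,
    # so old-class knowledge survives without revisiting old data.
    return (sup_loss
            + F.mse_loss(student_out, static_out.detach())
            + F.mse_loss(student_out, dynamic_out.detach()))
```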
arXiv Detail & Related papers (2021-12-14T09:03:41Z) - SA-Det3D: Self-Attention Based Context-Aware 3D Object Detection [9.924083358178239]
We propose two variants of self-attention for contextual modeling in 3D object detection.
We first incorporate the pairwise self-attention mechanism into the current state-of-the-art BEV, voxel and point-based detectors.
Next, we propose a self-attention variant that samples a subset of the most representative features by learning deformations over randomly sampled locations.
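The pairwise variant can be pictured as a drop-in global context module over pooled BEV, voxel, or point features. A minimal sketch follows, with module names and shapes assumed by us rather than taken from the paper's code:

```python
import torch
import torch.nn as nn

class PairwiseSelfAttention3D(nn.Module):
    """Hypothetical context module: full pairwise attention over 3D features."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_cells, dim) pooled BEV/voxel/point features.
        ctx, _ = self.attn(feats, feats, feats)
        return self.norm(feats + ctx)  # residual context augmentation
```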
arXiv Detail & Related papers (2021-01-07T18:30:32Z) - Improving Point Cloud Semantic Segmentation by Learning 3D Object Detection [102.62963605429508]
Point cloud semantic segmentation plays an essential role in autonomous driving.
Current 3D semantic segmentation networks focus on convolutional architectures that perform well for well-represented classes.
We propose a novel Detection Aware 3D Semantic Segmentation (DASS) framework that explicitly leverages localization features from an auxiliary 3D object detection task.
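As a toy illustration of borrowing localization features for segmentation (our construction, not the DASS architecture), a shared encoder can feed a detection head whose outputs are concatenated into the segmentation branch:

```python
import torch
import torch.nn as nn

class DetectionAwareSegNet(nn.Module):
    """Toy multi-task net: detection features feed the segmentation head."""
    def __init__(self, in_dim: int = 64, num_classes: int = 20, num_anchors: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.det_head = nn.Linear(128, num_anchors * 7)  # boxes (x,y,z,l,w,h,yaw)
        self.seg_head = nn.Linear(128 + num_anchors * 7, num_classes)

    def forward(self, pts: torch.Tensor):
        f = self.backbone(pts)        # per-point features: (batch, n, 128)
        boxes = self.det_head(f)      # auxiliary detection output
        logits = self.seg_head(torch.cat([f, boxes], dim=-1))
        return logits, boxes          # localization cues aid rare classes
```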
arXiv Detail & Related papers (2020-09-22T14:17:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.