A Discrepancy Aware Framework for Robust Anomaly Detection
- URL: http://arxiv.org/abs/2310.07585v1
- Date: Wed, 11 Oct 2023 15:21:40 GMT
- Title: A Discrepancy Aware Framework for Robust Anomaly Detection
- Authors: Yuxuan Cai, Dingkang Liang, Dongliang Luo, Xinwei He, Xin Yang, Xiang Bai
- Abstract summary: We present a Discrepancy Aware Framework (DAF), which demonstrates consistently robust performance with simple and cheap synthesis strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under simple synthesis strategies, it outperforms existing methods by a large margin and also achieves state-of-the-art localization performance.
- Score: 51.710249807397695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Defect detection is a critical research area in artificial intelligence.
Recently, synthetic data-based self-supervised learning has shown great
potential on this task. Although many sophisticated synthesizing strategies
exist, little research has been done to investigate the robustness of models
when faced with different strategies. In this paper, we focus on this issue and
find that existing methods are highly sensitive to them. To alleviate this
issue, we present a Discrepancy Aware Framework (DAF), which demonstrates
consistently robust performance with simple and cheap synthesis strategies across
different anomaly detection benchmarks. We hypothesize that the high
sensitivity to synthetic data of existing self-supervised methods arises from
their heavy reliance on the visual appearance of synthetic data during
decoding. In contrast, our method leverages an appearance-agnostic cue to guide
the decoder in identifying defects, thereby alleviating its reliance on
synthetic appearance. To this end, inspired by existing knowledge distillation
methods, we employ a teacher-student network, which is trained based on
synthesized outliers, to compute the discrepancy map as the cue. Extensive
experiments on two challenging datasets prove the robustness of our method.
Under simple synthesis strategies, it outperforms existing methods by a large
margin. Furthermore, it also achieves state-of-the-art localization
performance. Code is available at: https://github.com/caiyuxuan1120/DAF.
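As a rough illustration of the discrepancy-map idea, the sketch below (PyTorch; `discrepancy_map`, `teacher_feats`, and `student_feats` are hypothetical names, not the authors' implementation, which lives in the linked repository) compares teacher and student features per pixel at each scale and averages the resulting maps into a single cue:

```python
import torch
import torch.nn.functional as F

def discrepancy_map(teacher_feats, student_feats, out_size):
    """Average per-pixel teacher-student feature discrepancies across scales."""
    maps = []
    for t, s in zip(teacher_feats, student_feats):
        # 1 - cosine similarity over channels: high where the networks disagree
        d = 1.0 - F.cosine_similarity(t, s, dim=1, eps=1e-6)      # (B, H, W)
        maps.append(F.interpolate(d.unsqueeze(1), size=out_size,
                                  mode="bilinear", align_corners=False))
    return torch.stack(maps).mean(dim=0)                          # (B, 1, H', W')
```

Such a map depends on where the teacher and student disagree rather than on what the synthetic defect looks like, which matches the appearance-agnostic intuition described in the abstract.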
Related papers
- Synthetic Image Learning: Preserving Performance and Preventing Membership Inference Attacks [5.0243930429558885]
This paper introduces Knowledge Recycling (KR), a pipeline designed to optimise the generation and use of synthetic data for training downstream classifiers.
At the heart of this pipeline is Generative Knowledge Distillation (GKD), the proposed technique that significantly improves the quality and usefulness of the information carried by the synthetic data.
The results show a significant reduction in the performance gap between models trained on real and synthetic data, with models based on synthetic data outperforming those trained on real data in some cases.
arXiv Detail & Related papers (2024-07-22T10:31:07Z)
- Unveiling the Flaws: Exploring Imperfections in Synthetic Data and Mitigation Strategies for Large Language Models [89.88010750772413]
Synthetic data has been proposed as a solution to address the issue of high-quality data scarcity in the training of large language models (LLMs).
Our work delves into these specific flaws associated with question-answer (Q-A) pairs, a prevalent type of synthetic data, and presents a method based on unlearning techniques to mitigate these flaws.
Our work has yielded key insights into the effective use of synthetic data, aiming to promote more robust and efficient LLM training.
arXiv Detail & Related papers (2024-06-18T08:38:59Z)
- Curating Naturally Adversarial Datasets for Learning-Enabled Medical Cyber-Physical Systems [5.349773727704873]
Existing research focuses on robustness to synthetic adversarial examples, crafted by adding imperceptible perturbations to clean input data.
We propose a method to curate datasets comprising natural adversarial examples to evaluate model robustness.
arXiv Detail & Related papers (2023-09-01T15:52:32Z)
- Autoencoder-based Anomaly Detection in Streaming Data with Incremental Learning and Concept Drift Adaptation [10.41066461952124]
The paper proposes an autoencoder-based incremental learning method with drift detection (strAEm++DD).
The proposed strAEm++DD leverages the advantages of both incremental learning and drift detection.
We conduct an experimental study using real-world and synthetic datasets with severe or extreme class imbalance, and provide an empirical analysis of strAEm++DD.
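As a loose sketch of the autoencoder half of such a pipeline (the drift-detection component is omitted; `StreamingAE` and `score_and_update` are illustrative names, not the paper's code), reconstruction error serves as the anomaly score and the model adapts incrementally on presumed-normal samples:

```python
import torch
import torch.nn as nn

class StreamingAE(nn.Module):
    """Tiny autoencoder whose reconstruction error scores anomalies."""
    def __init__(self, dim, hidden=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def score_and_update(model, opt, batch, threshold):
    err = ((model(batch) - batch) ** 2).mean(dim=1)   # per-sample reconstruction error
    flags = err > threshold                           # high error => likely anomaly
    normal = err[~flags]
    if normal.numel() > 0:                            # adapt only on presumed-normal data
        opt.zero_grad()
        normal.mean().backward()
        opt.step()
    return flags.detach(), err.detach()
```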
arXiv Detail & Related papers (2023-05-15T19:40:04Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
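The summary does not name the attack or training recipe; as a generic illustration of adversarial training (assuming a classifier and FGSM-style perturbations; `fgsm_adversarial_step` is a hypothetical helper, not the paper's method), one training step might look like:

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_step(model, opt, x, y, eps=0.01):
    """One training step on FGSM-perturbed inputs, a common adversarial-training recipe."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    x_adv = (x + eps * grad.sign()).detach()   # worst-case perturbation of the inputs
    opt.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    opt.step()
    return adv_loss.item()
```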
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- CAFE: Learning to Condense Dataset by Aligning Features [72.99394941348757]
We propose a novel scheme to Condense dataset by Aligning FEatures (CAFE).
At the heart of our approach is an effective strategy to align features from the real and synthetic data across various scales.
We validate the proposed CAFE across various datasets, and demonstrate that it generally outperforms the state of the art.
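As a minimal sketch of one way to align real and synthetic features across scales (not CAFE's actual objective; `feature_alignment_loss` and the per-stage feature lists are assumptions), one could penalize differences in batch-mean features at each network stage:

```python
import torch

def feature_alignment_loss(real_feats, syn_feats):
    """Match mean features of real and synthetic batches at each scale (illustrative)."""
    loss = torch.tensor(0.0)
    for fr, fs in zip(real_feats, syn_feats):   # one tensor per network stage
        loss = loss + (fr.mean(dim=0) - fs.mean(dim=0)).pow(2).sum()
    return loss
```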
arXiv Detail & Related papers (2022-03-03T05:58:49Z)
- Unsupervised Domain Adaptive Salient Object Detection Through Uncertainty-Aware Pseudo-Label Learning [104.00026716576546]
We propose to learn saliency from synthetic but clean labels, which naturally have higher pixel-labeling quality without the effort of manual annotation.
We show that our proposed method outperforms the existing state-of-the-art deep unsupervised SOD methods on several benchmark datasets.
arXiv Detail & Related papers (2022-02-26T16:03:55Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- A Compact Deep Learning Model for Face Spoofing Detection [4.250231861415827]
Presentation attack detection (PAD) has received significant attention from the research community.
We address the problem via fusing both wide and deep features in a unified neural architecture.
The method is evaluated on several spoofing datasets, including ROSE-Youtu, SiW, and NUAA Imposter.
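As a rough sketch of wide-and-deep feature fusion (an illustrative `WideDeepFusion` module under assumed input dimensions, not the paper's architecture), a shallow linear path and a deeper MLP path can be concatenated before a live-vs-spoof classifier:

```python
import torch
import torch.nn as nn

class WideDeepFusion(nn.Module):
    """Fuse a shallow (wide) linear path with a deep MLP path, then classify."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.wide = nn.Linear(dim, hidden)
        self.deep = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(2 * hidden, 2)   # live vs. spoof logits

    def forward(self, x):
        return self.head(torch.cat([self.wide(x), self.deep(x)], dim=1))
```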
arXiv Detail & Related papers (2021-01-12T21:20:09Z)