Anomaly Detection with Domain Adaptation
- URL: http://arxiv.org/abs/2006.03689v1
- Date: Fri, 5 Jun 2020 21:05:19 GMT
- Title: Anomaly Detection with Domain Adaptation
- Authors: Ziyi Yang, Iman Soltani Bozchalooi, Eric Darve
- Abstract summary: We propose Invariant Representation Anomaly Detection (IRAD) for semi-supervised anomaly detection with domain adaptation.
A domain-invariant representation is extracted by an across-domain encoder trained together with source-specific encoders and generators by adversarial learning.
We evaluate IRAD extensively on digit image datasets (MNIST, USPS and SVHN) and the object recognition dataset Office-Home.
- Score: 5.457279006229213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of semi-supervised anomaly detection with domain
adaptation. Given a set of normal data from a source domain and a limited
amount of normal examples from a target domain, the goal is to have a
well-performing anomaly detector in the target domain. We propose Invariant
Representation Anomaly Detection (IRAD) to solve this problem, where we first
learn to extract a domain-invariant representation. The extraction is achieved
by an across-domain encoder trained together with source-specific encoders and
generators by adversarial learning. An anomaly detector is then trained using
the learnt representations. We evaluate IRAD extensively on digit image
datasets (MNIST, USPS and SVHN) and the object recognition dataset Office-Home.
Experimental results show that IRAD outperforms baseline models by a wide
margin across different datasets. We derive a theoretical lower bound for the
joint error that explains the performance decay from overtraining and also an
upper bound for the generalization error.
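The two-stage idea in the abstract — learn a shared representation from source and (few) target normals, then fit a one-class detector on it — can be sketched as follows. This is a minimal illustration, not the paper's method: the adversarially trained across-domain encoder is replaced by a fixed random linear map, the detector by distance to the mean normal embedding, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the trained across-domain encoder: a fixed linear map.
# In IRAD this encoder is learnt adversarially together with
# source-specific encoders and generators; here it is illustrative only.
W = rng.normal(size=(16, 4))

def encode(x):
    # Shared, (ideally) domain-invariant representation.
    return x @ W

# Normal training data: many source samples, few target samples.
source_normal = rng.normal(loc=0.0, size=(500, 16))
target_normal = rng.normal(loc=0.2, size=(20, 16))

# Stage 2: fit a simple one-class detector on the learnt
# representations -- here, distance to the mean normal embedding.
z = encode(np.vstack([source_normal, target_normal]))
center = z.mean(axis=0)
threshold = np.quantile(np.linalg.norm(z - center, axis=1), 0.95)

def anomaly_score(x):
    return np.linalg.norm(encode(x) - center, axis=1)

# Target-domain test points far from the normal distribution should
# receive much higher scores than in-distribution points.
normals = rng.normal(loc=0.2, size=(50, 16))
anomalies = rng.normal(loc=5.0, size=(50, 16))
print("mean normal score:", anomaly_score(normals).mean())
print("mean anomaly score:", anomaly_score(anomalies).mean())
```

Any one-class method (OC-SVM, DeepSVDD, etc.) could replace the distance-to-center detector in the second stage.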
Related papers
- GeneralAD: Anomaly Detection Across Domains by Attending to Distorted Features [68.14842693208465]
GeneralAD is an anomaly detection framework designed to operate in semantic, near-distribution, and industrial settings.
We propose a novel self-supervised anomaly generation module that employs straightforward operations like noise addition and shuffling to patch features.
We extensively evaluated our approach on ten datasets, achieving state-of-the-art results on six and on-par performance on the remaining four.
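The anomaly-generation idea above — distorting patch features with simple operations like noise addition and shuffling — can be sketched as below. The backbone features, function name, and parameters are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Patch features for one image: (num_patches, feature_dim),
# e.g. from a frozen vision backbone (synthetic here).
patches = rng.normal(size=(16, 8))

def make_pseudo_anomaly(patches, noise_scale=0.5, n_shuffle=4):
    """Synthesize an anomalous feature map by shuffling a few
    patch positions and adding Gaussian noise to them."""
    out = patches.copy()
    idx = rng.choice(len(out), size=n_shuffle, replace=False)
    # RHS is evaluated on the original values before assignment.
    out[idx] = out[rng.permutation(idx)] + rng.normal(
        scale=noise_scale, size=(n_shuffle, out.shape[1]))
    return out, idx

fake, idx = make_pseudo_anomaly(patches)
# Patches outside `idx` are untouched; those in `idx` are distorted,
# giving free "anomalous" training targets for a discriminator.
```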
arXiv Detail & Related papers (2024-07-17T09:27:41Z)
- ARC: A Generalist Graph Anomaly Detector with In-Context Learning [62.202323209244]
ARC is a generalist GAD approach that enables a "one-for-all" GAD model to detect anomalies across various graph datasets on-the-fly.
Equipped with in-context learning, ARC can directly extract dataset-specific patterns from the target dataset.
Extensive experiments on multiple benchmark datasets from various domains demonstrate the superior anomaly detection performance, efficiency, and generalizability of ARC.
arXiv Detail & Related papers (2024-05-27T02:42:33Z)
- DACAD: Domain Adaptation Contrastive Learning for Anomaly Detection in Multivariate Time Series [25.434379659643707]
In time series anomaly detection, the scarcity of labeled data poses a challenge to the development of accurate models.
We propose a novel Domain Contrastive learning model for Anomaly Detection in time series (DACAD).
Our model employs supervised contrastive loss for the source domain and self-supervised contrastive triplet loss for the target domain.
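The self-supervised triplet loss used in the target domain can be written compactly; the sketch below is generic, with hypothetical inputs (the anchor would be an embedded window, the positive a light augmentation of it, the negative a window with an injected synthetic anomaly).

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Squared-distance triplet loss: pull the anchor toward the
    positive and push it at least `margin` away from the negative."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

# A well-separated triplet incurs zero loss; a collapsed one pays
# the full margin.
a = np.zeros((2, 4))
far = np.full((2, 4), 2.0)
print(triplet_loss(a, a, far))  # 0.0 (d_neg = 16 > margin)
print(triplet_loss(a, a, a))    # 1.0 (the margin)
```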
arXiv Detail & Related papers (2024-04-17T11:20:14Z)
- Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts [25.629973843455495]
Generalist Anomaly Detection (GAD) aims to train one single detection model that can generalize to detect anomalies in diverse datasets from different application domains without further training on the target data.
We introduce a novel approach that learns an in-context residual learning model for GAD, termed InCTRL.
InCTRL is the best performer and significantly outperforms state-of-the-art competing methods.
arXiv Detail & Related papers (2024-03-11T08:07:46Z)
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- Time-series Anomaly Detection via Contextual Discriminative Contrastive Learning [0.0]
One-class classification methods are commonly used for anomaly detection tasks.
We propose a novel approach inspired by the loss function of DeepSVDD.
We combine our approach with a deterministic contrastive loss from Neutral AD, a promising self-supervised learning anomaly detection approach.
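The DeepSVDD loss function that inspires this approach minimizes the mean squared distance of embeddings to a fixed center. A toy sketch with a linear encoder standing in for the deep network (data and dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-class training set: flattened windows (synthetic).
X = rng.normal(size=(200, 8))

# Linear stand-in for the deep encoder phi(.; W) in DeepSVDD.
W = rng.normal(size=(8, 3)) * 0.1
c = np.ones(3)  # fixed hypersphere center, chosen a priori

def svdd_loss(W):
    # DeepSVDD-style objective: mean ||phi(x; W) - c||^2.
    z = X @ W
    return np.mean(np.sum((z - c) ** 2, axis=1))

loss_before = svdd_loss(W)

# Plain gradient descent on the one-class objective.
lr = 0.01
for _ in range(500):
    grad = 2 * X.T @ (X @ W - c) / len(X)
    W -= lr * grad

loss_after = svdd_loss(W)

def score(x):
    # Anomaly score: distance of an embedding to the center.
    return np.sum((x @ W - c) ** 2, axis=-1)
```

With a deep encoder the same objective is optimized by backpropagation; the contrastive term from Neutral AD would be added on top of this loss.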
arXiv Detail & Related papers (2023-04-16T21:36:19Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- DASVDD: Deep Autoencoding Support Vector Data Descriptor for Anomaly Detection [9.19194451963411]
Semi-supervised anomaly detection aims to detect anomalies from normal samples using a model that is trained on normal data.
We propose a method, DASVDD, that jointly learns the parameters of an autoencoder while minimizing the volume of an enclosing hyper-sphere on its latent representation.
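The joint objective described — reconstruction plus the volume of an enclosing hypersphere on the latent space — can be sketched with linear encoder/decoder stand-ins. The weighting, dimensions, and update scheme below are illustrative assumptions, not DASVDD's actual training procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))  # normal training samples (synthetic)

# Linear encoder/decoder stand-ins for the autoencoder.
We = rng.normal(size=(10, 3)) * 0.1
Wd = rng.normal(size=(3, 10)) * 0.1
c = np.zeros(3)   # hypersphere center on the latent space
lam = 0.5         # weight between the two terms (hypothetical)

def joint_loss(We, Wd):
    z = X @ We
    recon = np.mean(np.sum((z @ Wd - X) ** 2, axis=1))   # autoencoder term
    sphere = np.mean(np.sum((z - c) ** 2, axis=1))       # hypersphere term
    return recon + lam * sphere

loss_before = joint_loss(We, Wd)

# Joint gradient descent on both sets of parameters.
lr = 0.01
for _ in range(300):
    z = X @ We
    R = z @ Wd - X                                  # reconstruction residual
    grad_Wd = 2 * z.T @ R / len(X)
    grad_We = (2 * X.T @ (R @ Wd.T) / len(X)
               + lam * 2 * X.T @ (z - c) / len(X))
    We -= lr * grad_We
    Wd -= lr * grad_Wd

loss_after = joint_loss(We, Wd)
```

At test time the anomaly score would combine the reconstruction error with the latent distance to the center.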
arXiv Detail & Related papers (2021-06-09T21:57:41Z)
- Learning Domain-invariant Graph for Adaptive Semi-supervised Domain Adaptation with Few Labeled Source Samples [65.55521019202557]
Domain adaptation aims to generalize a model from a source domain to tackle tasks in a related but different target domain.
Traditional domain adaptation algorithms assume that enough labeled data, which are treated as prior knowledge, are available in the source domain.
We propose a Domain-invariant Graph Learning (DGL) approach for domain adaptation with only a few labeled source samples.
arXiv Detail & Related papers (2020-08-21T08:13:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.