Two Is Better Than One: Aligned Representation Pairs for Anomaly Detection
- URL: http://arxiv.org/abs/2405.18848v3
- Date: Fri, 19 Sep 2025 12:44:16 GMT
- Title: Two Is Better Than One: Aligned Representation Pairs for Anomaly Detection
- Authors: Alain Ryser, Thomas M. Sutter, Alexander Marx, Julia E. Vogt
- Abstract summary: Anomaly detection focuses on identifying samples that deviate from the norm. Recent self-supervised methods have successfully learned such representations by employing prior knowledge about anomalies to create synthetic outliers during training. We address this limitation with our new approach Con$_2$, which leverages prior knowledge about symmetries in normal samples to observe the data in different contexts.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Anomaly detection focuses on identifying samples that deviate from the norm. Discovering informative representations of normal samples is crucial to detecting anomalies effectively. Recent self-supervised methods have successfully learned such representations by employing prior knowledge about anomalies to create synthetic outliers during training. However, we often do not know what to expect from unseen data in specialized real-world applications. In this work, we address this limitation with our new approach Con$_2$, which leverages prior knowledge about symmetries in normal samples to observe the data in different contexts. Con$_2$ consists of two parts: Context Contrasting clusters representations according to their context, while Content Alignment encourages the model to capture semantic information by aligning the positions of normal samples across clusters. The resulting representation space allows us to detect anomalies as outliers of the learned context clusters. We demonstrate the benefit of this approach in extensive experiments on specialized medical datasets, outperforming competitive baselines based on self-supervised learning and pretrained models and presenting competitive performance on natural imaging benchmarks.
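The two objectives in the abstract can be sketched in a few lines. The following is a minimal numpy sketch under our reading of the abstract only: the function names, the cross-entropy form of Context Contrasting, the centered-MSE form of Content Alignment, and the nearest-cluster-center anomaly score are all illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

def context_contrast_loss(z, ctx, centers):
    """Cross-entropy that assigns each embedding to its context cluster.
    z: (n, d) embeddings, ctx: (n,) context labels, centers: (k, d)."""
    logits = z @ centers.T                       # similarity to each context center
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(z)), ctx]).mean()

def content_alignment_loss(z_a, z_b):
    """Encourage the same sample to occupy the same relative position
    in two different context clusters (here: MSE after centering)."""
    za = z_a - z_a.mean(axis=0)
    zb = z_b - z_b.mean(axis=0)
    return ((za - zb) ** 2).mean()

def anomaly_score(z, centers):
    """Score a sample by its distance to the nearest context cluster center."""
    d = np.linalg.norm(z[:, None, :] - centers[None, :, :], axis=-1)
    return d.min(axis=1)
```

A sample that sits far from every learned context cluster receives a high score, which matches the paper's description of detecting anomalies as outliers of the context clusters.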
Related papers
- Biased Generalization in Diffusion Models [4.602851365305176]
Generalization in generative modeling is defined as the ability to learn an underlying distribution from a finite dataset and produce novel samples. In practice, training is often stopped at the minimum of the test loss, taken as an operational indicator of generalization. We challenge this viewpoint by identifying a phase of biased generalization during training, in which the model continues to decrease the test loss while favoring samples with anomalously high proximity to training data.
arXiv Detail & Related papers (2026-03-03T19:25:33Z) - Multi-Cue Anomaly Detection and Localization under Data Contamination [0.6703429330486276]
We propose a robust anomaly detection framework that integrates limited anomaly supervision into the adaptive deviation learning paradigm. Our framework achieves strong detection and localization performance, interpretability, and robustness under various levels of data contamination.
arXiv Detail & Related papers (2026-01-30T12:34:13Z) - Correcting False Alarms from Unseen: Adapting Graph Anomaly Detectors at Test Time [60.341117019125214]
We propose a lightweight and plug-and-play Test-time adaptation framework for correcting Unseen Normal pattErns (TUNE) in graph anomaly detection (GAD). To address semantic confusion, a graph aligner is employed to align the shifted data to the original one at the graph attribute level. Extensive experiments on 10 real-world datasets demonstrate that TUNE significantly enhances the generalizability of pre-trained GAD models to both synthetic and real unseen normal patterns.
arXiv Detail & Related papers (2025-11-10T12:10:05Z) - Enhancing Anomaly Detection via Generating Diversified and Hard-to-distinguish Synthetic Anomalies [7.021105583098609]
Recent approaches have focused on leveraging domain-specific transformations or perturbations to generate synthetic anomalies from normal samples.
We introduce a novel domain-agnostic method that employs a set of conditional perturbators and a discriminator.
We demonstrate the superiority of our method over state-of-the-art benchmarks.
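The perturbator-plus-discriminator idea above can be illustrated with a toy sketch. Everything here is an assumption for illustration: the fixed random shift directions stand in for the learned conditional perturbators, and a distance-to-centroid score stands in for the trained discriminator; neither reflects the paper's actual architecture.

```python
import numpy as np

def make_perturbators(k, d, rng):
    """A bank of k conditional perturbators. Here each is a fixed unit
    shift direction -- a toy stand-in for learned perturbators."""
    v = rng.standard_normal((k, d))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def synthesize_anomalies(x, shifts, rng, scale=3.0):
    """Perturb each normal sample with a randomly chosen perturbator
    to obtain synthetic anomalies near the normal data."""
    idx = rng.integers(0, len(shifts), size=len(x))
    return x + scale * shifts[idx]

def centroid_score(x, normal_mean):
    """A trivial discriminator: distance from the mean of normal data."""
    return np.linalg.norm(x - normal_mean, axis=1)
```

In the paper, the perturbators and the discriminator are trained jointly so that the synthetic anomalies stay hard to distinguish; this sketch only shows the data flow, not that adversarial training.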
arXiv Detail & Related papers (2024-09-16T08:15:23Z) - Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z) - Memory Consistency Guided Divide-and-Conquer Learning for Generalized Category Discovery [56.172872410834664]
Generalized category discovery (GCD) aims at addressing a more realistic and challenging setting of semi-supervised learning.
We propose a Memory Consistency guided Divide-and-conquer Learning framework (MCDL)
Our method outperforms state-of-the-art models by a large margin on both seen and unseen classes in generic image recognition.
arXiv Detail & Related papers (2024-01-24T09:39:45Z) - SaliencyCut: Augmenting Plausible Anomalies for Anomaly Detection [24.43321988051129]
We propose a novel saliency-guided data augmentation method, SaliencyCut, to produce pseudo but more common anomalies.
We then design a novel patch-wise residual module in the anomaly learning head to extract and assess the fine-grained anomaly features from each sample.
arXiv Detail & Related papers (2023-06-14T08:55:36Z) - CRADL: Contrastive Representations for Unsupervised Anomaly Detection and Localization [2.8659934481869715]
Unsupervised anomaly detection in medical imaging aims to detect and localize arbitrary anomalies without requiring anomalous data during training.
Most current state-of-the-art methods use latent variable generative models operating directly on the images.
We propose CRADL whose core idea is to model the distribution of normal samples directly in the low-dimensional representation space of an encoder trained with a contrastive pretext-task.
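Modeling the distribution of normal samples in a low-dimensional representation space, as CRADL does, is often realized by fitting a simple density to the encoder's embeddings. The sketch below shows one common instantiation of that idea, a Gaussian fit with a Mahalanobis-distance score; this is our illustrative choice, not necessarily the density model used in CRADL.

```python
import numpy as np

def fit_gaussian(z, eps=1e-6):
    """Fit a full-covariance Gaussian to embeddings of normal samples.
    Returns the mean and the (regularized) precision matrix."""
    mu = z.mean(axis=0)
    cov = np.cov(z, rowvar=False) + eps * np.eye(z.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(z, mu, prec):
    """Anomaly score = squared Mahalanobis distance to the normal density."""
    d = z - mu
    return np.einsum('ni,ij,nj->n', d, prec, d)
```

At test time, embeddings far from the fitted normal density receive high scores and are flagged as anomalous.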
arXiv Detail & Related papers (2023-01-05T16:07:49Z) - Hierarchical Semi-Supervised Contrastive Learning for Contamination-Resistant Anomaly Detection [81.07346419422605]
Anomaly detection aims at identifying deviant samples from the normal data distribution.
Contrastive learning has provided a successful way to learn sample representations that enable effective discrimination of anomalies.
We propose a novel hierarchical semi-supervised contrastive learning framework for contamination-resistant anomaly detection.
arXiv Detail & Related papers (2022-07-24T18:49:26Z) - Augment to Detect Anomalies with Continuous Labelling [10.646747658653785]
Anomaly detection aims to recognize samples that differ in some respect from the training observations.
Recent state-of-the-art deep learning-based anomaly detection methods suffer from high computational cost, complexity, unstable training procedures, and non-trivial implementation.
We leverage a simple learning procedure that trains a lightweight convolutional neural network, reaching state-of-the-art performance in anomaly detection.
arXiv Detail & Related papers (2022-07-03T20:11:51Z) - Catching Both Gray and Black Swans: Open-set Supervised Anomaly Detection [90.32910087103744]
A few labeled anomaly examples are often available in many real-world applications.
These anomaly examples provide valuable knowledge about the application-specific abnormality.
Those anomalies seen during training often do not illustrate every possible class of anomaly.
This paper tackles open-set supervised anomaly detection.
arXiv Detail & Related papers (2022-03-28T05:21:37Z) - Few-shot Deep Representation Learning based on Information Bottleneck Principle [0.0]
In a standard anomaly detection problem, a detection model is trained in an unsupervised setting, under an assumption that the samples were generated from a single source of normal data.
In practice, normal data often consist of multiple classes. In such settings, learning to differentiate normal instances from anomalies amid the discrepancies between normal classes, without large-scale labeled data, presents a significant challenge.
In this work, we attempt to overcome this challenge by preparing few examples from each normal class, which is not excessively costly.
arXiv Detail & Related papers (2021-11-25T07:15:12Z) - SLA$^2$P: Self-supervised Anomaly Detection with Adversarial Perturbation [77.71161225100927]
Anomaly detection is a fundamental yet challenging problem in machine learning.
We propose a novel and powerful framework, dubbed SLA$^2$P, for unsupervised anomaly detection.
arXiv Detail & Related papers (2021-11-25T03:53:43Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
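Deviation networks of this kind score samples against a prior distribution of reference scores: normal samples are pulled toward the prior mean, while the few labeled anomalies are pushed several standard deviations above it. The sketch below is a minimal numpy rendering of that deviation loss under a standard-normal prior; the exact margin, prior, and sample-size choices here are illustrative assumptions.

```python
import numpy as np

def deviation_loss(scores, labels, margin=5.0, n_ref=5000, rng=None):
    """Contrast anomaly scores against a Gaussian prior of reference
    scores: normals (label 0) are pushed toward the prior mean, labeled
    anomalies (label 1) at least `margin` std. deviations above it."""
    rng = rng or np.random.default_rng(0)
    ref = rng.standard_normal(n_ref)         # reference scores from the prior
    dev = (scores - ref.mean()) / ref.std()  # z-score relative to the prior
    normal_term = np.abs(dev) * (1 - labels)
    anomaly_term = np.maximum(0.0, margin - dev) * labels
    return (normal_term + anomaly_term).mean()
```

A labeled anomaly that already scores far above the prior contributes no loss, while a normal sample is penalized in proportion to how far it drifts from the prior mean.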
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z) - Deep Weakly-supervised Anomaly Detection [118.55172352231381]
Pairwise Relation prediction Network (PReNet) learns pairwise relation features and anomaly scores.
PReNet can detect any seen/unseen abnormalities that fit the learned pairwise abnormal patterns.
Empirical results on 12 real-world datasets show that PReNet significantly outperforms nine competing methods in detecting seen and unseen anomalies.
arXiv Detail & Related papers (2019-10-30T00:40:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.