Mitigating Long-Tailed Anomaly Score Distributions with Importance-Weighted Loss
- URL: http://arxiv.org/abs/2601.02440v1
- Date: Mon, 05 Jan 2026 10:02:09 GMT
- Title: Mitigating Long-Tailed Anomaly Score Distributions with Importance-Weighted Loss
- Authors: Jungi Lee, Jungkwon Kim, Chi Zhang, Sangmin Kim, Kwangsun Yoo, Seok-Joo Byun
- Abstract summary: Anomaly detection is crucial in industrial applications for identifying rare and unseen patterns to ensure system reliability. Traditional models, trained on a single class of normal data, struggle with real-world distributions where normal data exhibit diverse patterns. We propose a novel importance-weighted loss designed specifically for anomaly detection. Our method improves anomaly detection performance by 0.043, highlighting its effectiveness in real-world applications.
- Score: 7.364074727181891
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Anomaly detection is crucial in industrial applications for identifying rare and unseen patterns to ensure system reliability. Traditional models, trained on a single class of normal data, struggle with real-world distributions where normal data exhibit diverse patterns, leading to class imbalance and long-tailed anomaly score distributions (LTD). This imbalance skews model training and degrades detection performance, especially for minority instances. To address this issue, we propose a novel importance-weighted loss designed specifically for anomaly detection. Compared to the previous method for LTD in classification, our method does not require prior knowledge of normal data classes. Instead, we introduce a weighted loss function that incorporates importance sampling to align the distribution of anomaly scores with a target Gaussian, ensuring a balanced representation of normal data. Extensive experiments on three benchmark image datasets and three real-world hyperspectral imaging datasets demonstrate the robustness of our approach in mitigating LTD-induced bias. Our method improves anomaly detection performance by 0.043, highlighting its effectiveness in real-world applications.
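The weighting scheme described in the abstract can be illustrated with a short sketch. This is a minimal illustration only, assuming the importance weights take the form w(s) = p_target(s) / p_empirical(s) for anomaly score s, with the empirical score density estimated by a normalized histogram; the function names, the histogram estimator, and the weight normalization are illustrative assumptions, not details from the paper.

```python
import numpy as np

def importance_weights(scores, target_mean=0.0, target_std=1.0,
                       bins=20, eps=1e-8):
    """Per-sample weights that re-balance a long-tailed anomaly score
    distribution toward a target Gaussian: w(s) = p_target(s) / p_emp(s)."""
    scores = np.asarray(scores, dtype=float)
    # Empirical density of the scores via a normalized histogram.
    hist, edges = np.histogram(scores, bins=bins, density=True)
    # Map each score to its histogram bin (interior edges -> indices 0..bins-1).
    idx = np.digitize(scores, edges[1:-1])
    p_emp = hist[idx] + eps
    # Target Gaussian density evaluated at each score.
    z = (scores - target_mean) / target_std
    p_tgt = np.exp(-0.5 * z**2) / (target_std * np.sqrt(2.0 * np.pi))
    w = p_tgt / p_emp
    return w / w.mean()  # normalize so the weights average to 1

def weighted_loss(per_sample_losses, scores):
    """Importance-weighted mean of per-sample losses."""
    return float(np.mean(importance_weights(scores) * np.asarray(per_sample_losses)))
```

Under this reading, samples whose scores fall in over-represented regions of the empirical distribution are down-weighted, while under-represented (tail) scores are up-weighted, so minority normal patterns contribute more evenly to training.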
Related papers
- Multi-Cue Anomaly Detection and Localization under Data Contamination [0.6703429330486276]
We propose a robust anomaly detection framework that integrates limited anomaly supervision into the adaptive deviation learning paradigm. Our framework achieves strong detection and localization performance, interpretability, and robustness under various levels of data contamination.
arXiv Detail & Related papers (2026-01-30T12:34:13Z)
- Correcting False Alarms from Unseen: Adapting Graph Anomaly Detectors at Test Time
We propose a lightweight and plug-and-play Test-time adaptation framework for correcting Unseen Normal pattErns (TUNE) in graph anomaly detection (GAD). To address semantic confusion, a graph aligner is employed to align the shifted data to the original data at the graph-attribute level. Extensive experiments on 10 real-world datasets demonstrate that TUNE significantly enhances the generalizability of pre-trained GAD models to both synthetic and real unseen normal patterns.
arXiv Detail & Related papers (2025-11-10T12:10:05Z)
- Leveraging Learning Bias for Noisy Anomaly Detection
This paper addresses the challenge of fully unsupervised image anomaly detection (FUIAD). Conventional methods assume anomaly-free training data, but real-world contamination leads models to absorb anomalies as normal. We propose a two-stage framework that exploits the inherent learning bias of such models.
arXiv Detail & Related papers (2025-08-10T17:47:21Z)
- Robust Distribution Alignment for Industrial Anomaly Detection under Distribution Shift [51.24522135151649]
Anomaly detection plays a crucial role in quality control for industrial applications. Existing methods attempt to address domain shifts by training generalizable models. Our proposed method demonstrates superior results compared with state-of-the-art anomaly detection and domain adaptation methods.
arXiv Detail & Related papers (2025-03-19T05:25:52Z)
- Towards Zero-shot 3D Anomaly Localization [58.62650061201283]
3DzAL is a novel patch-level contrastive learning framework for 3D anomaly detection and localization. We show that 3DzAL surpasses state-of-the-art anomaly detection and localization performance.
arXiv Detail & Related papers (2024-12-05T16:25:27Z)
- Adaptive Deviation Learning for Visual Anomaly Detection with Data Contamination [20.4008901760593]
We introduce a systematic adaptive method that employs deviation learning to compute anomaly scores end-to-end.
Our proposed method surpasses competing techniques and exhibits both stability and robustness in the presence of data contamination.
arXiv Detail & Related papers (2024-11-14T16:10:15Z)
- GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection [60.78684630040313]
Diffusion models tend to reconstruct normal counterparts of test images with certain noises added.
From the global perspective, the difficulty of reconstructing images with different anomalies is uneven.
We propose a global and local adaptive diffusion model (abbreviated to GLAD) for unsupervised anomaly detection.
arXiv Detail & Related papers (2024-06-11T17:27:23Z)
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- An Iterative Method for Unsupervised Robust Anomaly Detection Under Data Contamination [24.74938110451834]
Most deep anomaly detection models are based on learning normality from datasets.
In practice, the normality assumption is often violated due to the nature of real data distributions.
We propose a learning framework to reduce this gap and achieve better normality representation.
arXiv Detail & Related papers (2023-09-18T02:36:19Z)
- Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.