Deep Anomaly Detection and Search via Reinforcement Learning
- URL: http://arxiv.org/abs/2208.14834v1
- Date: Wed, 31 Aug 2022 13:03:33 GMT
- Title: Deep Anomaly Detection and Search via Reinforcement Learning
- Authors: Chao Chen, Dawei Wang, Feng Mao, Zongzhang Zhang, Yang Yu
- Abstract summary: We propose Deep Anomaly Detection and Search (DADS) to balance exploitation and exploration.
During the training process, DADS searches for possible anomalies with hierarchically-structured datasets.
- Results show that DADS can efficiently and precisely search for anomalies in unlabeled data and learn from them.
- Score: 22.005663849044772
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised Anomaly Detection (AD) is a data mining task that
aims to learn features from partially-labeled datasets to help detect
outliers. In this paper, we classify existing semi-supervised AD methods into
two categories: unsupervised-based and supervised-based, and point out that
most of them suffer from insufficient exploitation of labeled data and
under-exploration of unlabeled data. To tackle these problems, we propose Deep
Anomaly Detection and Search (DADS), which applies Reinforcement Learning (RL)
to balance exploitation and exploration. During the training process, the agent
searches for possible anomalies with hierarchically-structured datasets and
uses the searched anomalies to enhance performance, which in essence draws
lessons from the idea of ensemble learning. Experimentally, we compare DADS
with several state-of-the-art methods in the settings of leveraging labeled
known anomalies to detect both other known anomalies and unknown anomalies.
Results show that DADS can efficiently and precisely search for anomalies in
unlabeled data and learn from them, thus achieving good performance.
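The abstract gives no implementation details, but the search-and-learn idea can be illustrated with a toy loop in which a detector exploits similarity to the few labeled anomalies, occasionally explores at random, and adds every queried sample to its labeled pool. The Python sketch below is a hypothetical illustration under those assumptions, not the DADS algorithm; the epsilon-greedy rule, the kNN similarity score, and the oracle labeling step are all stand-ins.

```python
# A minimal, hypothetical sketch of a "search then learn" loop.
# NOT the DADS algorithm: the epsilon-greedy choice, the kNN similarity
# score, and the oracle labeling step are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Toy data: mostly normal points plus a shifted anomalous cluster.
X_unlabeled = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(5, 1, (20, 2))])
X_anomalies = rng.normal(5, 1, (5, 2))          # the few labeled anomalies
labeled_X = [X_anomalies]
labeled_y = [np.ones(len(X_anomalies))]
pool = X_unlabeled.copy()

for step in range(10):
    # Exploitation score: similarity of each pooled sample to known anomalies.
    known_anoms = np.vstack(labeled_X)[np.concatenate(labeled_y) == 1]
    nn = NearestNeighbors(n_neighbors=1).fit(known_anoms)
    dist, _ = nn.kneighbors(pool)
    score = -dist.ravel()                        # closer to a known anomaly = higher score

    # Epsilon-greedy exploration: occasionally query a random sample instead.
    if rng.random() < 0.2:
        idx = int(rng.integers(len(pool)))
    else:
        idx = int(np.argmax(score))

    # Pretend an oracle labels the queried sample; here, anything near the
    # shifted cluster counts as an anomaly.
    y = 1.0 if pool[idx].mean() > 2.5 else 0.0
    labeled_X.append(pool[idx:idx + 1])
    labeled_y.append(np.array([y]))
    pool = np.delete(pool, idx, axis=0)

# Learn from the labels accumulated by the search.
clf = RandomForestClassifier(random_state=0).fit(np.vstack(labeled_X), np.concatenate(labeled_y))
print("searched labels:", np.concatenate(labeled_y))
```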
Related papers
- Collaborative Feature-Logits Contrastive Learning for Open-Set Semi-Supervised Object Detection [75.02249869573994]
In open-set scenarios, the unlabeled dataset contains both in-distribution (ID) classes and out-of-distribution (OOD) classes.
Applying semi-supervised detectors in such settings can lead to misclassifying OOD classes as ID classes.
We propose a simple yet effective method, termed Collaborative Feature-Logits Detector (CFL-Detector).
arXiv Detail & Related papers (2024-11-20T02:57:35Z)
- Anomaly Detection of Tabular Data Using LLMs [54.470648484612866]
We show that pre-trained large language models (LLMs) are zero-shot batch-level anomaly detectors.
We propose an end-to-end fine-tuning strategy to bring out the potential of LLMs in detecting real anomalies.
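As a rough illustration of what batch-level prompting for tabular data might look like (this is not the paper's method; `complete` is a hypothetical placeholder for whatever LLM client is available):

```python
# Hypothetical illustration of batch-level prompting for tabular anomaly
# detection; `complete` stands in for any LLM completion client and is NOT
# an API from the paper.
from typing import Callable, Sequence

def detect_batch_anomalies(rows: Sequence[Sequence[float]],
                           complete: Callable[[str], str]) -> str:
    """Serialize a batch of rows into a prompt and ask which rows look anomalous."""
    lines = [f"row {i}: " + ", ".join(f"{v:.3f}" for v in row) for i, row in enumerate(rows)]
    prompt = (
        "Below is a batch of numeric records from the same table. "
        "List the indices of rows that look anomalous compared to the rest.\n"
        + "\n".join(lines) + "\nAnomalous row indices:"
    )
    return complete(prompt)

# Usage with a stub "model" so the sketch runs without any external service.
if __name__ == "__main__":
    batch = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1], [40.0, -3.0]]
    print(detect_batch_anomalies(batch, complete=lambda p: "3"))
```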
arXiv Detail & Related papers (2024-06-24T04:17:03Z)
- Self-Supervised Time-Series Anomaly Detection Using Learnable Data Augmentation [37.72735288760648]
We propose a learnable data augmentation-based time-series anomaly detection (LATAD) technique that is trained in a self-supervised manner.
LATAD extracts discriminative features from time-series data through contrastive learning.
Results show that LATAD achieves performance comparable to or better than state-of-the-art anomaly detection methods.
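A generic sketch of contrastive feature learning over time-series windows with a learnable augmentation network is shown below; the encoder, augmentation module, and InfoNCE-style loss are illustrative assumptions, not the LATAD architecture.

```python
# A generic contrastive-learning sketch for time-series windows with a
# learnable augmentation network; an illustration of the general idea only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, channels: int, dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, dim))
    def forward(self, x):                      # x: (batch, channels, time)
        return F.normalize(self.net(x), dim=-1)

class LearnableAugment(nn.Module):
    """Adds an input-dependent perturbation whose parameters are trained."""
    def __init__(self, channels: int):
        super().__init__()
        self.perturb = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
    def forward(self, x):
        return x + 0.1 * torch.tanh(self.perturb(x))

def info_nce(z1, z2, temperature: float = 0.2):
    """Treat (z1[i], z2[i]) as positives and all other pairs as negatives."""
    logits = z1 @ z2.t() / temperature         # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

# One illustrative training step on random data.
x = torch.randn(16, 3, 100)                    # 16 windows, 3 channels, 100 steps
enc, aug = Encoder(3), LearnableAugment(3)
opt = torch.optim.Adam(list(enc.parameters()) + list(aug.parameters()), lr=1e-3)
loss = info_nce(enc(x), enc(aug(x)))
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```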
arXiv Detail & Related papers (2024-06-18T04:25:56Z)
- Weakly Supervised Anomaly Detection via Knowledge-Data Alignment [24.125871437370357]
Anomaly detection plays a pivotal role in numerous web-based applications, including malware detection, anti-money laundering, device failure detection, and network fault analysis.
Weakly Supervised Anomaly Detection (WSAD) has been introduced with a limited number of labeled anomaly samples to enhance model performance.
We introduce a novel framework Knowledge-Data Alignment (KDAlign) to integrate rule knowledge, typically summarized by human experts, to supplement the limited labeled data.
arXiv Detail & Related papers (2024-02-06T07:57:13Z)
- Active anomaly detection based on deep one-class classification [9.904380236739398]
We tackle two essential problems of active learning for Deep SVDD: query strategy and semi-supervised learning method.
First, rather than solely identifying anomalies, our query strategy selects uncertain samples according to an adaptive boundary.
Second, we apply noise contrastive estimation in training a one-class classification model to incorporate both labeled normal and abnormal data effectively.
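A minimal sketch of the first idea, querying the unlabeled samples that sit closest to an adaptive one-class boundary, might look as follows; the hypersphere center, the quantile-based radius, and the toy data are assumptions rather than the paper's exact formulation.

```python
# A minimal sketch of an uncertainty-based query strategy around an adaptive
# one-class boundary; the center/radius definitions are illustrative assumptions.
import numpy as np

def query_uncertain(X_unlabeled: np.ndarray, X_normal: np.ndarray,
                    n_queries: int = 5, quantile: float = 0.95) -> np.ndarray:
    """Return indices of unlabeled samples closest to the current boundary."""
    center = X_normal.mean(axis=0)                        # hypersphere center
    radius = np.quantile(np.linalg.norm(X_normal - center, axis=1), quantile)
    dist = np.linalg.norm(X_unlabeled - center, axis=1)
    uncertainty = np.abs(dist - radius)                   # near the boundary = uncertain
    return np.argsort(uncertainty)[:n_queries]

rng = np.random.default_rng(1)
X_normal = rng.normal(0, 1, (200, 4))
X_unlabeled = rng.normal(0, 2, (1000, 4))
print("queried indices:", query_uncertain(X_unlabeled, X_normal))
```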
arXiv Detail & Related papers (2023-09-18T03:56:45Z)
- Weakly Supervised Anomaly Detection: A Survey [75.26180038443462]
Anomaly detection (AD) is a crucial task in machine learning with various applications.
We present the first comprehensive survey of weakly supervised anomaly detection (WSAD) methods.
For each setting, we provide formal definitions, key algorithms, and potential future directions.
arXiv Detail & Related papers (2023-02-09T10:27:21Z)
- Data-Efficient and Interpretable Tabular Anomaly Detection [54.15249463477813]
We propose a novel framework that adapts a white-box model class, Generalized Additive Models, to detect anomalies.
In addition, the proposed framework, DIAD, can incorporate a small amount of labeled data to further boost anomaly detection performances in semi-supervised settings.
arXiv Detail & Related papers (2022-03-03T22:02:56Z)
- A Taxonomy of Anomalies in Log Data [0.09558392439655014]
A common taxonomy for anomalies already exists, but it has not yet been applied specifically to log data.
We present a taxonomy for different kinds of log data anomalies and introduce a method for analyzing such anomalies in labeled datasets.
Our results show that the most common anomaly type is also the easiest to predict.
arXiv Detail & Related papers (2021-11-26T12:23:06Z)
- Self-Trained One-class Classification for Unsupervised Anomaly Detection [56.35424872736276]
Anomaly detection (AD) has various applications across domains, from manufacturing to healthcare.
In this work, we focus on unsupervised AD problems whose entire training data are unlabeled and may contain both normal and anomalous samples.
To tackle this problem, we build a robust one-class classification framework via data refinement.
We show that our method outperforms a state-of-the-art one-class classification method by 6.3 AUC points and 12.5 average precision points.
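One way such data refinement is commonly implemented is to fit a one-class model, discard the most anomalous-looking fraction of the training set, and refit; the sketch below illustrates that generic loop (the OneClassSVM model and the 5% refinement ratio are assumptions, not the paper's design).

```python
# A generic data-refinement loop for unsupervised one-class training.
# The model choice and the 5% refinement ratio are illustrative assumptions.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (950, 2)), rng.normal(6, 1, (50, 2))])  # unlabeled mix

refined = X
for _ in range(3):
    occ = OneClassSVM(nu=0.1, gamma="scale").fit(refined)
    scores = occ.decision_function(refined)           # lower = more anomalous
    keep = scores > np.quantile(scores, 0.05)         # drop the worst 5%
    refined = refined[keep]

final_model = OneClassSVM(nu=0.1, gamma="scale").fit(refined)
print("training samples kept:", len(refined), "of", len(X))
```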
arXiv Detail & Related papers (2021-06-11T01:36:08Z)
- Toward Deep Supervised Anomaly Detection: Reinforcement Learning from Partially Labeled Anomaly Data [150.9270911031327]
We consider the problem of anomaly detection with a small set of partially labeled anomaly examples and a large-scale unlabeled dataset.
Existing related methods either exclusively fit the limited anomaly examples that typically do not span the entire set of anomalies, or proceed with unsupervised learning from the unlabeled data.
We propose here instead a deep reinforcement learning-based approach that enables an end-to-end optimization of the detection of both labeled and unlabeled anomalies.
arXiv Detail & Related papers (2020-09-15T03:05:39Z)
- Effectiveness of Tree-based Ensembles for Anomaly Discovery: Insights, Batch and Streaming Active Learning [18.49217234413188]
This paper makes four main contributions to improve the state-of-the-art in anomaly discovery using tree-based ensembles.
We develop a novel batch active learning algorithm to improve the diversity of discovered anomalies.
We present a data drift detection algorithm that not only detects the drift robustly, but also allows us to take corrective actions to adapt the anomaly detector in a principled manner.
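As a rough illustration of the batch active-learning idea, the sketch below scores samples with an Isolation Forest and then greedily picks a diverse batch from the top-ranked candidates; the top-50 candidate pool and the farthest-point diversity rule are assumptions, not the paper's algorithm.

```python
# A sketch of batch active anomaly discovery with a tree ensemble: score with
# an Isolation Forest, then greedily pick a diverse batch from the top-ranked
# candidates. The candidate pool size and diversity rule are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

def diverse_batch(X: np.ndarray, scores: np.ndarray, k: int = 5, pool: int = 50) -> list:
    """Greedy farthest-point selection among the `pool` highest-scored samples."""
    candidates = list(np.argsort(scores)[-pool:])        # most anomalous candidates
    chosen = [candidates.pop(int(np.argmax(scores[candidates])))]
    while len(chosen) < k and candidates:
        dists = [min(np.linalg.norm(X[c] - X[s]) for s in chosen) for c in candidates]
        chosen.append(candidates.pop(int(np.argmax(dists))))
    return chosen

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (500, 3)), rng.normal(5, 1, (10, 3))])
scores = -IsolationForest(random_state=0).fit(X).score_samples(X)   # higher = more anomalous
print("batch to show the analyst:", diverse_batch(X, scores))
```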
arXiv Detail & Related papers (2019-01-23T23:41:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.