Generalized Out-of-Distribution Detection: A Survey
- URL: http://arxiv.org/abs/2110.11334v3
- Date: Tue, 23 Jan 2024 07:36:33 GMT
- Title: Generalized Out-of-Distribution Detection: A Survey
- Authors: Jingkang Yang, Kaiyang Zhou, Yixuan Li, Ziwei Liu
- Abstract summary: Out-of-distribution (OOD) detection is critical to ensuring the reliability and safety of machine learning systems.
Several other problems, including anomaly detection (AD), novelty detection (ND), open set recognition (OSR), and outlier detection (OD) are closely related to OOD detection.
We first present a unified framework called generalized OOD detection, which encompasses the five aforementioned problems.
- Score: 83.0449593806175
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection is critical to ensuring the reliability
and safety of machine learning systems. For instance, in autonomous driving, we
would like the driving system to issue an alert and hand over the control to
humans when it detects unusual scenes or objects that it has never seen during
training time and cannot make a safe decision. The term OOD detection first
emerged in 2017 and since then has received increasing attention from the
research community, leading to a plethora of methods developed, ranging from
classification-based to density-based to distance-based ones. Meanwhile,
several other problems, including anomaly detection (AD), novelty detection
(ND), open set recognition (OSR), and outlier detection (OD), are closely
related to OOD detection in terms of motivation and methodology. Despite common
goals, these topics develop in isolation, and their subtle differences in
definition and problem setting often confuse readers and practitioners. In this
survey, we first present a unified framework called generalized OOD detection,
which encompasses the five aforementioned problems, i.e., AD, ND, OSR, OOD
detection, and OD. Under our framework, these five problems can be seen as
special cases or sub-tasks, and are easier to distinguish. We then review each
of these five areas by summarizing their recent technical developments, with a
special focus on OOD detection methodologies. We conclude this survey with open
challenges and potential research directions.
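As a concrete illustration of the classification-based family mentioned above, the following is a minimal sketch of the maximum softmax probability (MSP) baseline, the standard 2017-era starting point for OOD detection; the threshold value here is an arbitrary assumption for illustration, and real systems tune it on validation data:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: higher suggests in-distribution."""
    return softmax(logits).max(axis=-1)

def detect_ood(logits, threshold=0.5):
    """Flag inputs whose MSP falls below the (illustrative) threshold as OOD."""
    return msp_score(logits) < threshold

# A peaked prediction scores as in-distribution; a flat one as OOD.
id_logits = np.array([[8.0, 0.5, 0.2]])   # MSP close to 1 -> not OOD
ood_logits = np.array([[1.0, 1.0, 1.0]])  # MSP = 1/3      -> OOD
```

Density-based and distance-based methods replace `msp_score` with, respectively, a likelihood under a fitted density model or a distance to training features, but the score-then-threshold structure stays the same.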
Related papers
- Unifying Unsupervised Graph-Level Anomaly Detection and Out-of-Distribution Detection: A Benchmark [73.58840254552656]
Unsupervised graph-level anomaly detection (GLAD) and unsupervised graph-level out-of-distribution (OOD) detection have received significant attention in recent years.
We present a Unified Benchmark for unsupervised Graph-level OOD and anomaly Detection (our method).
Our benchmark encompasses 35 datasets spanning four practical anomaly and OOD detection scenarios.
We conduct multi-dimensional analyses to explore the effectiveness, generalizability, robustness, and efficiency of existing methods.
arXiv Detail & Related papers (2024-06-21T04:07:43Z)
- Rethinking Out-of-Distribution Detection for Reinforcement Learning: Advancing Methods for Evaluation and Detection [3.7384109981836158]
We study the problem of out-of-distribution (OOD) detection in reinforcement learning (RL).
We propose a clarification of terminology for OOD detection in RL, which aligns it with the literature from other machine learning domains.
We present new benchmark scenarios for OOD detection, which introduce anomalies with temporal autocorrelation into different components of the agent-environment loop.
We find that our proposed detector, DEXTER, can reliably identify anomalies across benchmark scenarios, exhibiting superior performance compared to both state-of-the-art OOD detectors and high-dimensional changepoint detectors adopted from statistics.
arXiv Detail & Related papers (2024-04-10T15:39:49Z)
- Out-of-Distribution Data: An Acquaintance of Adversarial Examples -- A Survey [7.891552999555933]
Deep neural networks (DNNs) deployed in real-world applications can encounter out-of-distribution (OOD) data and adversarial examples.
Traditionally, research has addressed OOD detection and adversarial robustness as separate challenges.
This survey focuses on the intersection of these two areas, examining how the research community has investigated them together.
arXiv Detail & Related papers (2024-04-08T06:27:38Z)
- Expecting The Unexpected: Towards Broad Out-Of-Distribution Detection [9.656342063882555]
We study five types of distribution shifts and evaluate the performance of recent OOD detection methods on each of them.
Our findings reveal that while these methods excel in detecting unknown classes, their performance is inconsistent when encountering other types of distribution shifts.
We present an ensemble approach that offers a more consistent and comprehensive solution for broad OOD detection.
arXiv Detail & Related papers (2023-08-22T14:52:44Z)
- Beyond AUROC & co. for evaluating out-of-distribution detection performance [50.88341818412508]
Given their relevance for safe(r) AI, it is important to examine whether the basis for comparing OOD detection methods is consistent with practical needs.
We propose a new metric, the Area Under the Threshold Curve (AUTC), which explicitly penalizes poor separation between ID and OOD samples.
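For context, the AUROC that this line of work critiques can be computed directly from ID and OOD scores via the rank-sum identity; a minimal sketch (function name and inputs are illustrative, not from the paper):

```python
import numpy as np

def auroc(id_scores, ood_scores):
    """AUROC as the probability that a random ID sample scores
    above a random OOD sample (Mann-Whitney U identity)."""
    id_scores = np.asarray(id_scores, dtype=float)
    ood_scores = np.asarray(ood_scores, dtype=float)
    # Count pairwise wins; ties count half.
    greater = (id_scores[:, None] > ood_scores[None, :]).sum()
    ties = (id_scores[:, None] == ood_scores[None, :]).sum()
    return (greater + 0.5 * ties) / (id_scores.size * ood_scores.size)
```

Note that AUROC depends only on the ranking of scores: `auroc([0.51, 0.49], [0.50, 0.48])` equals `auroc([10, 4], [5, 3])`, even though the first pair of distributions is barely separated. That insensitivity to the margin between ID and OOD scores is the kind of shortcoming a threshold-aware metric such as AUTC is designed to expose.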
arXiv Detail & Related papers (2023-06-26T12:51:32Z)
- OpenOOD v1.5: Enhanced Benchmark for Out-of-Distribution Detection [81.25718226042832]
Out-of-Distribution (OOD) detection is critical for the reliable operation of open-world intelligent systems.
This paper presents OpenOOD v1.5, a significant improvement from its predecessor that ensures accurate, standardized, and user-friendly evaluation of OOD detection methodologies.
arXiv Detail & Related papers (2023-06-15T17:28:00Z)
- OpenOOD: Benchmarking Generalized Out-of-Distribution Detection [60.13300701826931]
Out-of-distribution (OOD) detection is vital to safety-critical machine learning applications.
The field currently lacks a unified, strictly formulated, and comprehensive benchmark.
We build a unified, well-structured benchmark called OpenOOD, which implements over 30 methods developed in relevant fields.
arXiv Detail & Related papers (2022-10-13T17:59:57Z)
- Detecting OODs as datapoints with High Uncertainty [12.040347694782007]
Deep neural networks (DNNs) are known to produce incorrect predictions with very high confidence on out-of-distribution inputs (OODs).
This limitation is one of the key challenges in the adoption of DNNs in high-assurance systems such as autonomous driving, air traffic management, and medical diagnosis.
Several techniques have been developed to detect inputs where the model's prediction cannot be trusted.
We demonstrate the difference in the detection ability of these techniques and propose an ensemble approach for detecting OODs as datapoints with high uncertainty (epistemic or aleatoric).
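The epistemic/aleatoric split mentioned above is commonly estimated from an ensemble: total predictive uncertainty is the entropy of the averaged prediction, aleatoric uncertainty is the average per-member entropy, and their difference (the mutual information) captures epistemic disagreement. A minimal sketch under these standard definitions (not the paper's specific estimator):

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy in nats."""
    return -(p * np.log(p + eps)).sum(axis=axis)

def uncertainty_decomposition(ensemble_probs):
    """Decompose predictive uncertainty from ensemble softmax outputs.

    ensemble_probs: shape (n_members, n_classes).
    Returns (total, aleatoric, epistemic):
      total     = entropy of the mean prediction,
      aleatoric = mean per-member entropy (irreducible data noise),
      epistemic = total - aleatoric (model disagreement).
    """
    total = entropy(ensemble_probs.mean(axis=0))
    aleatoric = entropy(ensemble_probs).mean()
    return total, aleatoric, total - aleatoric

# Members agree on a noisy 50/50 label: aleatoric high, epistemic ~ 0.
agree = np.array([[0.5, 0.5], [0.5, 0.5]])
# Members disagree confidently: same total, but mostly epistemic.
disagree = np.array([[0.99, 0.01], [0.01, 0.99]])
```

Both examples have the same mean prediction (and hence the same total entropy), yet only the second reflects model disagreement of the kind an OOD input typically causes, which is why the decomposition, rather than raw confidence, is useful for detection.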
arXiv Detail & Related papers (2021-08-13T20:07:42Z)
- Survey of Network Intrusion Detection Methods from the Perspective of the Knowledge Discovery in Databases Process [63.75363908696257]
We review the methods that have been applied to network data with the purpose of developing an intrusion detector.
We discuss the techniques used for the capture, preparation and transformation of the data, as well as, the data mining and evaluation methods.
As a result of this literature review, we investigate some open issues which will need to be considered for further research in the area of network security.
arXiv Detail & Related papers (2020-01-27T11:21:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.