Background Matters: Enhancing Out-of-distribution Detection with Domain
Features
- URL: http://arxiv.org/abs/2303.08727v1
- Date: Wed, 15 Mar 2023 16:12:14 GMT
- Title: Background Matters: Enhancing Out-of-distribution Detection with Domain
Features
- Authors: Choubo Ding, Guansong Pang, Chunhua Shen
- Abstract summary: OOD samples can be drawn from arbitrary distributions and exhibit deviations from in-distribution (ID) data in various dimensions.
Existing methods focus on detecting OOD samples based on the semantic features, while neglecting the other dimensions such as the domain features.
This paper proposes a novel generic framework that can learn the domain features from the ID training samples by a dense prediction approach.
- Score: 90.32910087103744
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Detecting out-of-distribution (OOD) inputs is a principal task for ensuring
the safety of deploying deep-neural-network classifiers in open-world
scenarios. OOD samples can be drawn from arbitrary distributions and exhibit
deviations from in-distribution (ID) data in various dimensions, such as
foreground semantic features (e.g., vehicle images vs. ID samples in fruit
classification) and background domain features (e.g., textural images vs. ID
samples in object recognition). Existing methods focus on detecting OOD samples
based on the semantic features, while neglecting the other dimensions such as
the domain features. This paper considers the importance of the domain features
in OOD detection and proposes to leverage them to enhance the
semantic-feature-based OOD detection methods. To this end, we propose a novel
generic framework that can learn the domain features from the ID training
samples by a dense prediction approach, with which different existing
semantic-feature-based OOD detection methods can be seamlessly combined to
jointly learn the in-distribution features from both the semantic and domain
dimensions. Extensive experiments show that our approach 1) can substantially
enhance the performance of four different state-of-the-art (SotA) OOD detection
methods on multiple widely-used OOD datasets with diverse domain features, and
2) achieves new SotA performance on these benchmarks.
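The idea of scoring inputs along both the semantic and the domain dimension can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual framework: the maximum-softmax-probability (MSP) semantic score and the distance-to-ID-mean domain score used here are stand-in assumptions, as is the convex combination weight `alpha`.

```python
# Hypothetical sketch: combining a semantic OOD score with a domain OOD score.
# Both score definitions below are illustrative assumptions, not the paper's
# exact method (which learns domain features via dense prediction).
import numpy as np

def semantic_score(logits: np.ndarray) -> np.ndarray:
    """Max softmax probability (MSP) baseline: higher means more ID-like."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def domain_score(domain_feats: np.ndarray, id_mean: np.ndarray) -> np.ndarray:
    """Placeholder domain score: distance to the mean ID background feature,
    squashed into (0, 1]. Higher means more ID-like."""
    dist = np.linalg.norm(domain_feats - id_mean, axis=1)
    return 1.0 / (1.0 + dist)

def combined_ood_score(logits, domain_feats, id_mean, alpha=0.5):
    """Convex combination of the two scores; alpha is a tunable weight."""
    return alpha * semantic_score(logits) + (1 - alpha) * domain_score(
        domain_feats, id_mean
    )
```

Under this sketch, a sample that is confidently classified but whose background features sit far from the ID training distribution still receives a lowered combined score, which is the intuition behind leveraging the domain dimension.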
Related papers
- Rethinking the Evaluation of Out-of-Distribution Detection: A Sorites Paradox [70.57120710151105]
Most existing out-of-distribution (OOD) detection benchmarks classify samples with novel labels as the OOD data.
Some marginal OOD samples actually have semantic content close to that of the in-distribution (ID) samples, which makes determining whether a sample is OOD a Sorites Paradox.
We construct a benchmark named Incremental Shift OOD (IS-OOD) to address the issue.
arXiv Detail & Related papers (2024-06-14T09:27:56Z)
- Toward a Realistic Benchmark for Out-of-Distribution Detection [3.8038269045375515]
We introduce a comprehensive benchmark for OOD detection based on ImageNet and Places365.
Several techniques can be used to determine which classes should be considered in-distribution, yielding benchmarks with varying properties.
arXiv Detail & Related papers (2024-04-16T11:29:43Z)
- Out-of-Distribution Detection Using Peer-Class Generated by Large Language Model [0.0]
Out-of-distribution (OOD) detection is a critical task to ensure the reliability and security of machine learning models.
This paper proposes a novel method called ODPC, in which a large language model is given specific prompts to generate OOD peer classes of the ID semantics.
Experiments on five benchmark datasets show that the method we propose can yield state-of-the-art results.
arXiv Detail & Related papers (2024-03-20T06:04:05Z)
- From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), the first framework leveraging both global visual information and local region details.
arXiv Detail & Related papers (2023-08-20T11:56:25Z)
- General-Purpose Multi-Modal OOD Detection Framework [5.287829685181842]
Out-of-distribution (OOD) detection identifies test samples that differ from the training data, which is critical to ensuring the safety and reliability of machine learning (ML) systems.
We propose a general-purpose weakly-supervised OOD detection framework, called WOOD, that combines a binary classifier and a contrastive learning component.
We evaluate the proposed WOOD model on multiple real-world datasets, and the experimental results demonstrate that the WOOD model outperforms the state-of-the-art methods for multi-modal OOD detection.
arXiv Detail & Related papers (2023-07-24T18:50:49Z)
- A Functional Data Perspective and Baseline On Multi-Layer Out-of-Distribution Detection [30.499548939422194]
Existing methods that explore multiple layers either require a special architecture or a supervised objective to do so.
This work adopts an original approach based on a functional view of the network that exploits the sample's trajectories through the various layers and their statistical dependencies.
We validate our method and empirically demonstrate its effectiveness in OOD detection compared to strong state-of-the-art baselines on computer vision benchmarks.
arXiv Detail & Related papers (2023-06-06T09:14:05Z)
- YolOOD: Utilizing Object Detection Concepts for Multi-Label Out-of-Distribution Detection [25.68925703896601]
YolOOD is a method that utilizes concepts from the object detection domain to perform OOD detection in the multi-label classification task.
We compare our approach to state-of-the-art OOD detection methods and demonstrate YolOOD's ability to outperform these methods on a comprehensive suite of in-distribution and OOD benchmark datasets.
arXiv Detail & Related papers (2022-12-05T07:52:08Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show it obtains top performances both in speed and accuracy when compared to ten recent methods of the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
- Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation [67.83872616307008]
Unsupervised Domain Adaptation (UDA) attempts to recognize unlabeled target samples by building a learning model from a differently-distributed labeled source domain.
In this paper, we propose a novel Adversarial Dual Distinct Classifiers Network (AD$^2$CN) to align the source and target domain data distributions simultaneously while matching task-specific category boundaries.
To be specific, a domain-invariant feature generator is exploited to embed the source and target data into a latent common space with the guidance of discriminative cross-domain alignment.
arXiv Detail & Related papers (2020-08-27T01:29:10Z)
- Cross-domain Face Presentation Attack Detection via Multi-domain Disentangled Representation Learning [109.42987031347582]
Face presentation attack detection (PAD) is an urgent problem to be solved in face recognition systems.
We propose an efficient disentangled representation learning for cross-domain face PAD.
Our approach consists of disentangled representation learning (DR-Net) and multi-domain learning (MD-Net).
arXiv Detail & Related papers (2020-04-04T15:45:14Z)