Joint Distribution across Representation Space for Out-of-Distribution
Detection
- URL: http://arxiv.org/abs/2103.12344v1
- Date: Tue, 23 Mar 2021 06:39:29 GMT
- Title: Joint Distribution across Representation Space for Out-of-Distribution
Detection
- Authors: JingWei Xu, Siyuan Zhu, Zenan Li, Chang Xu
- Abstract summary: We present a novel generative outlook on in-distribution data, which treats the latent features generated at each hidden layer as a joint distribution across representation spaces.
We first construct a Gaussian Mixture Model (GMM) based on the in-distribution latent features of each hidden layer, and then connect the GMMs via the transition probabilities of the inference traces.
- Score: 16.96466730536722
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) have become a key part of many modern software
applications. After training and validation, the DNN is deployed as an
irrevocable component and applied in real-world scenarios. Although most DNNs
are built meticulously with huge volumes of training data, real-world data can
still be unknown to the DNN model, which leads to the crucial requirement
of runtime out-of-distribution (OOD) detection. However, many existing
approaches 1) need OOD data for classifier training or parameter tuning, or 2)
simply combine the scores of each hidden layer as an ensemble of features for
OOD detection. In this paper, we present a novel generative outlook on
in-distribution data, which treats the latent features generated at each
hidden layer as a joint distribution across representation spaces. Since
only the in-distribution latent features are comprehensively understood in
representation space, the internal difference between in-distribution and OOD
data can be naturally revealed without the intervention of any OOD data.
Specifically, we construct a generative model, called Latent Sequential
Gaussian Mixture (LSGM), to depict how the in-distribution latent features are
generated in terms of the trace of DNN inference across representation spaces.
We first construct a Gaussian Mixture Model (GMM) based on the in-distribution
latent features of each hidden layer, and then connect the GMMs via the transition
probabilities of the inference traces. Experimental evaluations on popular
benchmark OOD datasets and models validate the superiority of the proposed
method over the state-of-the-art methods in OOD detection.
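To make the construction concrete, here is a minimal sketch of the LSGM idea, not the authors' implementation: per-layer GMMs are fit on in-distribution latent features (the feature-extraction step is assumed), joined by empirical transition probabilities between the most likely mixture components of consecutive layers; a test input is then scored by the joint log-likelihood of its inference trace. All function names are illustrative.

```python
# Minimal sketch of the LSGM idea (illustrative, not the authors' code):
# per-layer GMMs connected by empirical component-transition probabilities.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_lsgm(layer_feats, n_components=5, seed=0):
    """layer_feats: list of (N, d_l) arrays, one per hidden layer,
    extracted from in-distribution data (extraction step assumed)."""
    gmms = [GaussianMixture(n_components, covariance_type="diag",
                            random_state=seed).fit(F) for F in layer_feats]
    # Hard-assign each sample to its most likely component in every layer.
    comps = [g.predict(F) for g, F in zip(gmms, layer_feats)]
    # Empirical transition matrices between consecutive layers' components.
    trans = []
    for a, b in zip(comps[:-1], comps[1:]):
        T = np.ones((n_components, n_components))  # add-one smoothing
        np.add.at(T, (a, b), 1)
        trans.append(T / T.sum(axis=1, keepdims=True))
    return gmms, trans

def lsgm_score(gmms, trans, feats):
    """feats: list of (1, d_l) arrays for a single test input.
    Returns the joint log-likelihood of its inference trace; lower
    values suggest the input is out-of-distribution."""
    score = gmms[0].score(feats[0])
    prev = gmms[0].predict(feats[0])[0]
    for g, F, T in zip(gmms[1:], feats[1:], trans):
        cur = g.predict(F)[0]
        score += np.log(T[prev, cur]) + g.score(F)
        prev = cur
    return score
```

Thresholding this joint score on held-out in-distribution data then yields a runtime detector that never sees OOD samples, matching the paper's setting.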
Related papers
- Diffusion-based Layer-wise Semantic Reconstruction for Unsupervised Out-of-Distribution Detection [30.02748131967826]
Unsupervised out-of-distribution (OOD) detection aims to identify out-of-domain data by learning only from unlabeled In-Distribution (ID) training samples.
Current reconstruction-based methods provide a good alternative by measuring the reconstruction error between the input and its generative counterpart in the pixel/feature space (the basic principle is sketched after this entry).
We propose the diffusion-based layer-wise semantic reconstruction approach for unsupervised OOD detection.
arXiv Detail & Related papers (2024-11-16T04:54:07Z)
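For context, the reconstruction-error principle mentioned above can be sketched in a few lines. This is a generic autoencoder illustration, not the paper's diffusion-based method; `encoder` and `decoder` are assumed to be trained on ID data only.

```python
# Generic reconstruction-error OOD score (illustrative only; the paper
# itself uses diffusion-based layer-wise semantic reconstruction).
import numpy as np

def reconstruction_ood_score(x, encoder, decoder):
    """x: (N, d) inputs; encoder/decoder: callables trained on ID data.
    OOD inputs reconstruct poorly, so a larger error suggests OOD."""
    x_hat = decoder(encoder(x))                # generative counterpart of x
    return np.mean((x - x_hat) ** 2, axis=1)   # per-sample reconstruction MSE
```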
- GOODAT: Towards Test-time Graph Out-of-Distribution Detection [103.40396427724667]
Graph neural networks (GNNs) have found widespread application in modeling graph data across diverse domains.
Recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN.
This paper introduces a data-centric, unsupervised, and plug-and-play solution that operates independently of training data and of modifications to the GNN architecture.
arXiv Detail & Related papers (2024-01-10T08:37:39Z)
- GROOD: GRadient-aware Out-Of-Distribution detection in interpolated manifolds [12.727088216619386]
Out-of-distribution inputs can pose risks to deep neural networks (DNNs) in real-world deployments.
We introduce GRadient-aware Out-Of-Distribution detection in interpolated manifolds (GROOD), a novel framework that relies on the discriminative power of gradient space.
We show that GROOD surpasses the established robustness of state-of-the-art baselines (a generic gradient-based score is sketched after this entry).
arXiv Detail & Related papers (2023-08-04T12:27:11Z)
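As a rough illustration of scoring in gradient space, the sketch below computes a simple gradient-norm score in the spirit of earlier gradient-based detectors, not GROOD's prototype-based method on interpolated manifolds; `model` and all names are assumptions.

```python
# Illustrative gradient-space OOD score (not GROOD's actual method, which
# combines gradients with class prototypes on interpolated manifolds).
import torch
import torch.nn.functional as F

def gradient_ood_score(model, x):
    """Backpropagates the KL divergence between the model's softmax output
    and a uniform target, then measures the gradient norm of the last
    parameter tensor (assumed to belong to the classification head).
    In such schemes, ID inputs tend to produce larger gradient norms."""
    model.zero_grad()
    logits = model(x.unsqueeze(0))             # x: a single input tensor
    uniform = torch.full_like(logits, 1.0 / logits.shape[-1])
    loss = F.kl_div(logits.log_softmax(dim=-1), uniform,
                    reduction="batchmean")
    loss.backward()
    return list(model.parameters())[-1].grad.abs().sum().item()
```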
- DIVERSIFY: A General Framework for Time Series Out-of-distribution Detection and Generalization [58.704753031608625]
Time series is one of the most challenging modalities in machine learning research.
OOD detection and generalization on time series tend to suffer due to their non-stationary nature.
We propose DIVERSIFY, a framework for OOD detection and generalization on dynamic distributions of time series.
arXiv Detail & Related papers (2023-08-04T12:27:11Z)
- WDiscOOD: Out-of-Distribution Detection via Whitened Linear Discriminant Analysis [21.023001428704085]
We propose a novel feature-space OOD detection score based on class-specific and class-agnostic information (a simplified whitened-distance score is sketched after this entry).
The efficacy of our method, named WDiscOOD, is verified on the large-scale ImageNet-1k benchmark.
arXiv Detail & Related papers (2023-03-14T00:13:57Z)
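The whitened-discriminant idea can be approximated with a short sketch: whiten features with the shared within-class covariance (the LDA assumption) and score by the distance to the nearest class centroid. This is a simplification; WDiscOOD additionally exploits class-agnostic subspace information, and all names here are assumptions.

```python
# Simplified whitened-discriminant OOD score (illustrative; WDiscOOD also
# uses a class-agnostic residual subspace).
import numpy as np

def fit_whitener(feats, labels):
    """feats: (N, d) ID features; labels: (N,) class ids in 0..C-1.
    Returns a whitening matrix for the shared within-class covariance
    plus the per-class centroids."""
    centroids = np.stack([feats[labels == c].mean(0)
                          for c in range(labels.max() + 1)])
    centered = feats - centroids[labels]
    cov = centered.T @ centered / len(feats)
    vals, vecs = np.linalg.eigh(cov + 1e-6 * np.eye(cov.shape[0]))
    return vecs @ np.diag(vals ** -0.5) @ vecs.T, centroids

def wdisc_score(W, centroids, z):
    """Distance from a test feature z (d,) to the nearest whitened class
    centroid; a larger distance suggests OOD."""
    return np.linalg.norm((z - centroids) @ W, axis=1).min()
```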
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful, and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area (the underlying energy score is sketched after this entry).
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
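GNNSafe builds on the standard energy score of Liu et al. (2020), which needs only the classifier's logits; the paper's contribution is propagating these scores over the graph structure. A minimal sketch of the underlying score:

```python
# Standard energy-based OOD score computed from logits; GNNSafe
# additionally propagates these per-node scores along graph edges.
import torch

def energy_score(logits, temperature=1.0):
    """logits: (N, C) classifier outputs (per-node logits for a GNN).
    E(x) = -T * logsumexp(f(x) / T); ID samples tend to have lower
    energy, so -E(x) is commonly thresholded as the ID-ness score."""
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)
```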
- Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution Detection [55.028065567756066]
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct the batch-ensemble neural networks (BE-SNNs) and overcome the feature collapse problem.
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset and the FashionMNIST vs MNIST dataset (a batch-ensemble layer is sketched after this entry).
arXiv Detail & Related papers (2022-06-26T16:00:22Z)
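The batch-ensemble mechanism the summary refers to (Wen et al., 2020) keeps one shared weight matrix and gives each ensemble member cheap rank-one modulation factors. A minimal sketch of such a layer, with all names assumed:

```python
# Sketch of a batch-ensemble linear layer: members share one weight matrix
# W, each modulated by a rank-one factor outer(s_i, r_i), so an M-member
# ensemble costs little more memory than a single network.
import numpy as np

class BatchEnsembleLinear:
    def __init__(self, d_in, d_out, n_members, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)  # shared
        self.S = rng.standard_normal((n_members, d_in))   # input scalers s_i
        self.R = rng.standard_normal((n_members, d_out))  # output scalers r_i

    def forward(self, x, member):
        """Computes x @ (W * outer(s_i, r_i)) without materializing the
        per-member weight matrix. x: (N, d_in)."""
        return ((x * self.S[member]) @ self.W) * self.R[member]
```

Disagreement across the members' predictions then gives the kind of feature-distribution uncertainty signal that BE-SNNs exploit.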
- Igeood: An Information Geometry Approach to Out-of-Distribution Detection [35.04325145919005]
We introduce Igeood, an effective method for detecting out-of-distribution (OOD) samples.
Igeood applies to any pre-trained neural network and works under various degrees of access to the machine learning model.
We show that Igeood outperforms competing state-of-the-art methods on a variety of network architectures and datasets (the relevant information-geometric distance is sketched after this entry).
arXiv Detail & Related papers (2022-03-15T11:26:35Z)
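The information-geometric ingredient can be made concrete: for categorical distributions (e.g., temperature-scaled softmax outputs), the Fisher-Rao geodesic distance has a closed form. The sketch below shows only this distance, not Igeood's full score, which aggregates such distances to reference points:

```python
# Fisher-Rao distance between two categorical distributions: the geodesic
# distance on the probability simplex that information-geometric detectors
# such as Igeood build on.
import numpy as np

def fisher_rao_distance(p, q, eps=1e-12):
    """d(p, q) = 2 * arccos(sum_i sqrt(p_i * q_i)) for probability
    vectors p, q; clipping guards against numerical round-off."""
    bc = np.sum(np.sqrt(np.clip(p, eps, 1.0) * np.clip(q, eps, 1.0)))
    return 2.0 * np.arccos(np.clip(bc, -1.0, 1.0))
```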
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show it obtains top performance in both speed and accuracy when compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
- Brainstorming Generative Adversarial Networks (BGANs): Towards Multi-Agent Generative Models with Distributed Private Datasets [70.62568022925971]
Generative adversarial networks (GANs) must be fed by large datasets that adequately represent the data space.
In many scenarios, the available datasets may be limited and distributed across multiple agents, each of which is seeking to learn the distribution of the data on its own.
In this paper, a novel brainstorming GAN (BGAN) architecture is proposed using which multiple agents can generate real-like data samples while operating in a fully distributed manner.
arXiv Detail & Related papers (2020-02-02T02:58:32Z)