Natural Attribute-based Shift Detection
- URL: http://arxiv.org/abs/2110.09276v1
- Date: Mon, 18 Oct 2021 13:15:59 GMT
- Title: Natural Attribute-based Shift Detection
- Authors: Jeonghoon Park, Jimin Hong, Radhika Dua, Daehoon Gwak, Yixuan Li,
Jaegul Choo, Edward Choi
- Abstract summary: In vision, language, and healthcare, unpredictable behavior on samples drawn from a distribution different from the training distribution causes severe problems in deployment.
For better reliability of neural-network-based classifiers, we define a new task, natural attribute-based shift (NAS) detection.
Using the natural attributes present in existing datasets, we introduce benchmark datasets in the vision, language, and medical domains for NAS detection.
- Score: 26.093019230426442
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the impressive performance of deep networks in vision, language, and
healthcare, unpredictable behaviors on samples from a distribution different from
the training distribution cause severe problems in deployment. For better
reliability of neural-network-based classifiers, we define a new task, natural
attribute-based shift (NAS) detection, to detect the samples shifted from the
training distribution by some natural attribute, such as the age of subjects or the
brightness of images. Using the natural attributes present in existing
datasets, we introduce benchmark datasets in the vision, language, and medical domains for
NAS detection. Further, we conduct an extensive evaluation of prior
representative out-of-distribution (OOD) detection methods on NAS datasets and
observe an inconsistency in their performance. To understand this, we provide
an analysis of the relationship between the location of NAS samples in the
feature space and the performance of distance- and confidence-based OOD
detection methods. Based on the analysis, we split NAS samples into three
categories and further suggest a simple modification to the training objective
to obtain an improved OOD detection method that is capable of detecting samples
from all NAS categories.
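The abstract contrasts confidence-based and distance-based OOD detectors. As a point of reference (and not the paper's proposed training-objective modification), below is a minimal NumPy sketch of one representative score from each family: the maximum softmax probability (MSP) over a classifier's logits and the Mahalanobis distance to class-conditional Gaussians fitted on training features. The function names and the assumption that logits and penultimate-layer features are already available are illustrative only.

```python
import numpy as np

def msp_score(logits):
    """Confidence-based score: maximum softmax probability (MSP).
    Lower values suggest the sample lies farther from the training distribution."""
    z = logits - logits.max(axis=-1, keepdims=True)            # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def mahalanobis_score(features, class_means, shared_cov):
    """Distance-based score: negative Mahalanobis distance to the nearest
    class-conditional Gaussian (one mean per class, shared covariance)
    fitted on in-distribution training features."""
    cov_inv = np.linalg.pinv(shared_cov)
    dists = []
    for mu in class_means:                                      # one mean per training class
        diff = features - mu                                    # (n_samples, feat_dim)
        dists.append(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
    return -np.min(np.stack(dists, axis=1), axis=1)             # higher = more in-distribution
```

In both cases a sample is flagged as shifted when its score falls below a threshold chosen on held-out in-distribution data; the paper's analysis concerns how the location of NAS samples in feature space makes one family succeed where the other fails.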
Related papers
- Advancing Out-of-Distribution Detection via Local Neuroplasticity [60.53625435889467]
This paper presents a novel OOD detection method that leverages the unique local neuroplasticity property of Kolmogorov-Arnold Networks (KANs).
Our method compares the activation patterns of a trained KAN against its untrained counterpart to detect OOD samples.
We validate our approach on benchmarks from image and medical domains, demonstrating superior performance and robustness compared to state-of-the-art techniques.
arXiv Detail & Related papers (2025-02-20T11:13:41Z)
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions between downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- Toward a Realistic Benchmark for Out-of-Distribution Detection [3.8038269045375515]
We introduce a comprehensive benchmark for OOD detection based on ImageNet and Places365.
Several techniques can be used to determine which classes should be considered in-distribution, yielding benchmarks with varying properties.
arXiv Detail & Related papers (2024-04-16T11:29:43Z)
- A Functional Data Perspective and Baseline On Multi-Layer Out-of-Distribution Detection [30.499548939422194]
Existing methods that exploit multiple layers either require a special architecture or a supervised objective to do so.
This work adopts an original approach based on a functional view of the network that exploits the sample's trajectories through the various layers and their statistical dependencies.
We validate our method and empirically demonstrate its effectiveness in OOD detection compared to strong state-of-the-art baselines on computer vision benchmarks.
arXiv Detail & Related papers (2023-06-06T09:14:05Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful, and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area (a generic energy-score sketch appears after this list).
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- A Novel Explainable Out-of-Distribution Detection Approach for Spiking Neural Networks [6.100274095771616]
This work presents a novel OoD detector that can identify whether test examples input to a Spiking Neural Network belong to the distribution of the data over which it was trained.
We characterize the internal activations of the hidden layers of the network in the form of spike count patterns.
A local explanation method is devised to produce attribution maps revealing which parts of the input instance push most towards the detection of an example as an OoD sample.
arXiv Detail & Related papers (2022-09-30T11:16:35Z)
- Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution Detection [55.028065567756066]
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct the batch-ensemble neural networks (BE-SNNs) and overcome the feature collapse problem.
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset and the FashionMNIST vs MNIST dataset.
arXiv Detail & Related papers (2022-06-26T16:00:22Z)
- Igeood: An Information Geometry Approach to Out-of-Distribution Detection [35.04325145919005]
We introduce Igeood, an effective method for detecting out-of-distribution (OOD) samples.
Igeood applies to any pre-trained neural network and works under various degrees of access to the machine learning model.
We show that Igeood outperforms competing state-of-the-art methods on a variety of network architectures and datasets.
arXiv Detail & Related papers (2022-03-15T11:26:35Z)
- WOOD: Wasserstein-based Out-of-Distribution Detection [6.163329453024915]
Training and test data for deep-neural-network-based classifiers are usually assumed to be sampled from the same distribution.
When some of the test samples are drawn from a distribution far from that of the training samples, the trained neural network tends to make high-confidence predictions on these OOD samples.
We propose a Wasserstein-based out-of-distribution detection (WOOD) method to overcome these challenges.
arXiv Detail & Related papers (2021-12-13T02:35:15Z)
- Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation [76.2621758731288]
We tackle the detection of out-of-distribution (OOD) objects in semantic segmentation.
Our main contribution is a new OOD detection architecture called ObsNet, associated with a dedicated training scheme based on Local Adversarial Attacks (LAA).
We show that it obtains top performance in both speed and accuracy compared to ten recent methods from the literature on three different datasets.
arXiv Detail & Related papers (2021-08-03T17:09:56Z)
- Adversarial Examples Detection with Bayesian Neural Network [57.185482121807716]
We propose a new framework to detect adversarial examples, motivated by the observation that random components can improve the smoothness of predictors.
We propose a novel Bayesian adversarial example detector, BATer for short, to improve the performance of adversarial example detection.
arXiv Detail & Related papers (2021-05-18T15:51:24Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
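For the energy-based entry above (GNNSafe), the underlying instance-level quantity is the standard energy score computed from a classifier's logits; a minimal sketch is given below. It omits anything graph-specific and is only a generic, assumption-laden illustration of the energy-score technique, not GNNSafe itself.

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """Standard energy-based OOD score: E(x) = -T * logsumexp(f(x) / T).
    In-distribution samples tend to receive lower energy than OOD samples."""
    z = logits / temperature
    m = z.max(axis=-1, keepdims=True)                           # stable logsumexp
    lse = np.squeeze(m, axis=-1) + np.log(np.exp(z - m).sum(axis=-1))
    return -temperature * lse
```

Thresholding the negative energy on held-out in-distribution data then yields a detector; GNNSafe's reported gains additionally exploit the graph structure, which this per-sample sketch does not model.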
This list is automatically generated from the titles and abstracts of the papers in this site.