Detecting Out-of-distribution Examples via Class-conditional Impressions
Reappearing
- URL: http://arxiv.org/abs/2303.09746v1
- Date: Fri, 17 Mar 2023 02:55:08 GMT
- Title: Detecting Out-of-distribution Examples via Class-conditional Impressions
Reappearing
- Authors: Jinggang Chen, Xiaoyang Qu, Junjie Li, Jianzong Wang, Jiguang Wan,
Jing Xiao
- Abstract summary: Out-of-distribution (OOD) detection aims at enhancing standard deep neural networks to distinguish anomalous inputs from original training data.
Due to privacy and security concerns, auxiliary data tends to be impractical in real-world scenarios.
We propose a data-free method without training on natural data, called Class-Conditional Impressions Reappearing (C2IR).
- Score: 30.938412222724608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) detection aims at enhancing standard deep neural
networks to distinguish anomalous inputs from original training data. Previous
progress has introduced various approaches where the in-distribution training
data and even several OOD examples are prerequisites. However, due to privacy
and security concerns, auxiliary data tends to be impractical in real-world scenarios.
In this paper, we propose a data-free method without training on natural data,
called Class-Conditional Impressions Reappearing (C2IR), which utilizes image
impressions from the fixed model to recover class-conditional feature
statistics. Based on that, we introduce Integral Probability Metrics to
estimate layer-wise class-conditional deviations and obtain layer weights by
Measuring Gradient-based Importance (MGI). The experiments verify the
effectiveness of our method and indicate that C2IR outperforms other post-hoc
methods and reaches performance comparable to the full-access (ID and OOD)
detection method, especially on the far-OOD dataset (SVHN).
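For illustration, the scoring step can be pictured with a heavily simplified sketch. This is not the authors' implementation and the names below are illustrative: it assumes the per-layer class-conditional feature means have already been recovered (in the paper they come from synthesized image impressions, omitted here), uses a plain Euclidean distance as a stand-in for the Integral Probability Metric, and takes the per-layer weights as given instead of deriving them via MGI.

```python
import torch

def c2ir_style_score(layer_feats, class_means, layer_weights, pred_class):
    """Toy OOD score: weighted sum of per-layer deviations between a sample's
    features and the (recovered) class-conditional feature means of the class
    the fixed model predicts. Higher score = more OOD-like.

    layer_feats   : list of per-layer feature vectors for one input, each (D_l,)
    class_means   : list of dicts {class_id: mean feature vector (D_l,)}
    layer_weights : list of floats, one per layer (stand-in for MGI weights)
    pred_class    : class id predicted by the fixed classifier
    """
    score = 0.0
    for feat, means, w in zip(layer_feats, class_means, layer_weights):
        # Euclidean distance to the class mean as a crude stand-in for an IPM.
        score += w * torch.norm(feat - means[pred_class], p=2).item()
    return score

# Usage: flag inputs whose score exceeds a threshold chosen on held-out data,
# e.g. is_ood = c2ir_style_score(feats, means, weights, y_hat) > tau
```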
Related papers
- FlowCon: Out-of-Distribution Detection using Flow-Based Contrastive Learning [0.0]
We introduce FlowCon, a new density-based OOD detection technique.
Our main innovation lies in efficiently combining the properties of normalizing flow with supervised contrastive learning.
Empirical evaluation shows the enhanced performance of our method across common vision datasets.
arXiv Detail & Related papers (2024-07-03T20:33:56Z)
- EAT: Towards Long-Tailed Out-of-Distribution Detection [55.380390767978554]
This paper addresses the challenging task of long-tailed OOD detection.
The main difficulty lies in distinguishing OOD data from samples belonging to the tail classes.
We propose two simple ideas: (1) Expanding the in-distribution class space by introducing multiple abstention classes, and (2) Augmenting the context-limited tail classes by overlaying images onto the context-rich OOD data; a toy version of this overlay appears after this list.
arXiv Detail & Related papers (2023-12-14T13:47:13Z)
- Out-of-distribution Object Detection through Bayesian Uncertainty Estimation [10.985423935142832]
We propose a novel, intuitive, and scalable probabilistic object detection method for OOD detection.
Our method is able to distinguish between in-distribution (ID) data and OOD data via weight parameter sampling from proposed Gaussian distributions; a generic weight-sampling sketch appears after this list.
We demonstrate that our Bayesian object detector can achieve satisfactory OOD identification performance by reducing the FPR95 score by up to 8.19% and increasing the AUROC score by up to 13.94% when trained on BDD100k and VOC datasets.
arXiv Detail & Related papers (2023-10-29T19:10:52Z)
- Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset; a toy version of such an objective is sketched after this list.
arXiv Detail & Related papers (2023-06-08T07:05:36Z)
- LINe: Out-of-Distribution Detection by Leveraging Important Neurons [15.797257361788812]
We introduce a new aspect for analyzing the difference in model outputs between in-distribution data and OOD data.
We propose a novel method, Leveraging Important Neurons (LINe), for post-hoc out-of-distribution detection.
arXiv Detail & Related papers (2023-03-24T13:49:05Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area; the underlying energy score is sketched after this list.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold; a rough sketch of this procedure appears after this list.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Detecting Out-of-Distribution Examples with In-distribution Examples and Gram Matrices [8.611328447624679]
Deep neural networks yield confident, incorrect predictions when presented with Out-of-Distribution examples.
In this paper, we propose to detect OOD examples by identifying inconsistencies between activity patterns and the predicted class.
We find that characterizing activity patterns by Gram matrices and identifying anomalies in Gram matrix values can yield high OOD detection rates; a simplified version of this deviation score is sketched after this list.
arXiv Detail & Related papers (2019-12-28T19:44:03Z)
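The second idea in EAT, as summarized above, pastes context-limited tail-class images onto context-rich OOD images. The snippet below is only a toy illustration of such an overlay (shrinking a tail-class image and pasting it onto an OOD background); the paper's actual augmentation and its abstention-class head are not reproduced here.

```python
import random
import torch
import torch.nn.functional as F

def overlay_tail_on_ood(tail_img, ood_img, scale=0.6):
    """Toy context augmentation: shrink a tail-class image (C, H, W) and paste
    it at a random location of an OOD background image of the same size.
    The result keeps the tail-class label; the OOD image supplies context."""
    c, h, w = tail_img.shape
    th, tw = int(h * scale), int(w * scale)
    patch = F.interpolate(tail_img.unsqueeze(0), size=(th, tw),
                          mode="bilinear", align_corners=False).squeeze(0)
    out = ood_img.clone()
    top = random.randint(0, h - th)
    left = random.randint(0, w - tw)
    out[:, top:top + th, left:left + tw] = patch
    return out
```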
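The Bayesian object-detection entry scores uncertainty by sampling weights from learned Gaussian distributions. Detection-specific details are omitted; the sketch below only shows the generic mechanism on a single output layer with hypothetical weight means and log-variances, using the average predictive entropy across weight samples as an OOD signal.

```python
import torch
import torch.nn.functional as F

def sampled_prediction_spread(feat, w_mu, w_logvar, b, n_samples=20):
    """Draw weight samples from N(w_mu, exp(w_logvar)), run the layer, and
    return the mean predictive entropy across samples as an uncertainty
    score (higher = more OOD-like).
    feat: (D,), w_mu/w_logvar: (K, D), b: (K,)."""
    std = torch.exp(0.5 * w_logvar)
    entropies = []
    for _ in range(n_samples):
        w = w_mu + std * torch.randn_like(std)       # one weight sample
        probs = F.softmax(feat @ w.t() + b, dim=-1)  # class probabilities
        entropies.append(-(probs * probs.clamp_min(1e-12).log()).sum())
    return torch.stack(entropies).mean()
```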
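The confidence-minimization idea from the Conservative Prediction entry can be illustrated with a small, hypothetical training objective. This is not the paper's exact loss or its construction of the uncertainty dataset; it simply combines standard cross-entropy on labeled in-distribution data with a term that pushes predictions on an auxiliary "uncertainty" batch toward the uniform distribution.

```python
import torch
import torch.nn.functional as F

def dcm_style_loss(model, x_labeled, y_labeled, x_uncertain, lam=0.5):
    """Toy combined objective: cross-entropy on labeled data plus a confidence
    penalty on an auxiliary uncertainty batch."""
    logits_id = model(x_labeled)
    ce = F.cross_entropy(logits_id, y_labeled)

    logits_unc = model(x_uncertain)
    log_probs = F.log_softmax(logits_unc, dim=1)
    # Pushing predictions toward the uniform distribution minimizes
    # confidence on the uncertainty examples.
    uniform_ce = -log_probs.mean()

    return ce + lam * uniform_ce
```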
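GNNSafe builds on the energy-based OOD score. The generic (non-graph) form of that score is standard and shown below; the paper's graph-specific contribution, propagating energies along the edges of the input graph, is not reproduced here.

```python
import torch

def energy_score(logits, temperature=1.0):
    """Energy-based OOD score: E(x) = -T * logsumexp(logits / T).
    Lower energy is associated with in-distribution inputs, so a sample can
    be flagged as OOD when its energy exceeds a validation threshold."""
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

# For a GNN, `logits` would be the per-node class logits, giving one energy
# value per node.
```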
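ATC, as summarized above, reduces to a few lines once per-example confidences are available. The sketch follows that summary: calibrate a threshold on labeled source data so that the fraction of source examples above it matches source accuracy, then report the fraction of unlabeled target examples above the same threshold.

```python
import numpy as np

def atc_predict_accuracy(source_conf, source_correct, target_conf):
    """source_conf: (N,) max-softmax confidences on labeled source data
    source_correct: (N,) 0/1 array marking correct source predictions
    target_conf: (M,) confidences on unlabeled target data
    Returns the predicted target accuracy."""
    source_acc = source_correct.mean()
    # Threshold chosen so that P(source_conf > t) is roughly source accuracy.
    t = np.quantile(source_conf, 1.0 - source_acc)
    return float((target_conf > t).mean())
```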
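The Gram-matrix detector characterizes each layer's activity by the Gram matrix of its feature map and measures how far test-time values fall outside per-class ranges recorded on training data. The sketch below shows only the first-order Gram matrix and the deviation of a single layer; the paper additionally uses higher-order Gram matrices and normalizes deviations across layers.

```python
import torch

def gram_matrix(feat_map):
    """Channel-by-channel correlations of a conv feature map (C, H, W)."""
    c = feat_map.shape[0]
    f = feat_map.reshape(c, -1)
    return f @ f.t()

def gram_deviation(feat_map, mins, maxs, eps=1e-6):
    """Deviation of one layer's Gram values from the [mins, maxs] ranges
    recorded on training data for the predicted class (each of shape (C, C)).
    Larger deviation suggests an OOD input."""
    g = gram_matrix(feat_map)
    below = torch.clamp(mins - g, min=0) / (mins.abs() + eps)
    above = torch.clamp(g - maxs, min=0) / (maxs.abs() + eps)
    return (below + above).sum()
```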