Effective Out-of-Distribution Detection in Classifier Based on
PEDCC-Loss
- URL: http://arxiv.org/abs/2204.04665v1
- Date: Sun, 10 Apr 2022 11:47:29 GMT
- Title: Effective Out-of-Distribution Detection in Classifier Based on
PEDCC-Loss
- Authors: Qiuyu Zhu, Guohui Zheng, Yingying Yan
- Abstract summary: We propose an effective algorithm for detecting out-of-distribution examples utilizing PEDCC-Loss.
We mathematically analyze the nature of the confidence score output by the PEDCC (Predefined Evenly-Distribution Class Centroids) classifier.
We then construct a more effective scoring function to distinguish in-distribution (ID) samples from out-of-distribution (OOD) ones.
- Score: 5.614122064282257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks suffer from the overconfidence issue in the open world,
meaning that classifiers could yield confident, incorrect predictions for
out-of-distribution (OOD) samples. Detecting samples drawn far from the
training distribution is therefore an urgent and challenging task for the
security of artificial intelligence systems. Many current methods based
on neural networks mainly rely on complex processing strategies, such as
temperature scaling and input preprocessing, to obtain satisfactory results. In
this paper, we propose an effective algorithm for detecting out-of-distribution
examples utilizing PEDCC-Loss. We mathematically analyze the nature of the
confidence score output by the PEDCC (Predefined Evenly-Distribution Class
Centroids) classifier, and then construct a more effective scoring function to
distinguish in-distribution (ID) samples from out-of-distribution ones. Our
method requires no preprocessing of the input samples, which reduces the
computational burden of the algorithm. Experiments demonstrate that our method can achieve
better OOD detection performance.
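The abstract does not reproduce the scoring function itself; as a rough illustration of the general recipe, here is a minimal numpy sketch that scores samples by their maximum cosine similarity to the predefined class centroids. The threshold `tau` and the exact form of the score are assumptions, not the paper's formula.

```python
import numpy as np

def pedcc_ood_score(features, centroids):
    """Score samples by similarity to the predefined class centroids.

    features:  (n, d) feature vectors produced by the network.
    centroids: (k, d) predefined evenly-distributed class centroids (PEDCC).
    Returns the maximum cosine similarity per sample: ID samples should lie
    close to some centroid, OOD samples close to none.
    """
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    centroids = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    sims = features @ centroids.T        # (n, k) cosine similarities
    return sims.max(axis=1)              # higher => more likely ID

def is_ood(features, centroids, tau=0.5):
    # tau is hypothetical; in practice it is chosen on validation data,
    # e.g. to hit a target true-positive rate on ID samples.
    return pedcc_ood_score(features, centroids) < tau
```

Note that, consistent with the abstract's claim, nothing here preprocesses the input: the score costs one forward pass plus a matrix product.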
Related papers
- Embedding Trajectory for Out-of-Distribution Detection in Mathematical Reasoning [50.84938730450622]
We propose a trajectory-based method TV score, which uses trajectory volatility for OOD detection in mathematical reasoning.
Our method outperforms all traditional algorithms on generative language models (GLMs) in mathematical reasoning scenarios.
Our method can be extended to more applications with high-density features in output spaces, such as multiple-choice questions.
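The summary does not define the volatility measure; one plausible reading, sketched below, treats the per-layer embeddings of a sample as a trajectory and scores its step-to-step variation.

```python
import numpy as np

def tv_score(layer_embeddings):
    """Illustrative trajectory-volatility OOD score.

    layer_embeddings: (L, d) array with one embedding per hidden layer,
    i.e. the sample's trajectory through the network. The hypothesis is
    that OOD inputs produce more volatile trajectories.
    """
    steps = np.diff(layer_embeddings, axis=0)    # (L-1, d) successive moves
    return np.linalg.norm(steps, axis=1).mean()  # higher => more OOD-like
```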
arXiv Detail & Related papers (2024-05-22T22:22:25Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework, each block can be trained independently, so it can easily be deployed into parallel acceleration systems.
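A compressed PyTorch sketch of the cascaded idea (layer sizes and optimizers are hypothetical): each block receives the detached output of the previous block and is trained with its own local cross-entropy head, so no gradient crosses block boundaries.

```python
import torch
import torch.nn as nn

blocks = nn.ModuleList([nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
                        nn.Sequential(nn.Linear(256, 256), nn.ReLU())])
heads = nn.ModuleList([nn.Linear(256, 10), nn.Linear(256, 10)])
opts = [torch.optim.Adam(list(b.parameters()) + list(h.parameters()))
        for b, h in zip(blocks, heads)]

def train_step(x, y):
    # Every block outputs a label distribution via its own head and is
    # trained locally; detach() keeps the blocks independent, which is
    # what makes parallel deployment straightforward.
    for block, head, opt in zip(blocks, heads, opts):
        opt.zero_grad()
        x = block(x)
        loss = nn.functional.cross_entropy(head(x), y)
        loss.backward()
        opt.step()
        x = x.detach()
```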
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Beyond Mahalanobis-Based Scores for Textual OOD Detection [32.721317681946246]
We introduce TRUSTED, a new OOD detector for classifiers based on Transformer architectures that meets operational requirements.
The efficiency of TRUSTED relies on the fruitful idea that all hidden layers carry relevant information to detect OOD examples.
Our experiments, involving 51k model configurations across various checkpoints, seeds, and datasets, demonstrate that TRUSTED achieves state-of-the-art performance.
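TRUSTED's actual score is built on data depth; the sketch below is a deliberately simplified stand-in that only illustrates the layer-aggregation idea, with a nearest-neighbor distance in place of the paper's depth score.

```python
import numpy as np

def aggregate(layer_embeddings):
    # (L, d) -> (d,): average the per-layer embeddings so that every
    # hidden layer contributes to the representation used for scoring.
    return np.mean(layer_embeddings, axis=0)

def fit_reference(train_layer_embs):
    # train_layer_embs: list of (L, d) arrays, one per training sample.
    return np.stack([aggregate(e) for e in train_layer_embs])

def ood_score(sample_layer_embs, reference):
    z = aggregate(sample_layer_embs)
    # Distance to the closest training representation; higher => more OOD.
    return np.min(np.linalg.norm(reference - z, axis=1))
```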
arXiv Detail & Related papers (2022-11-24T10:51:58Z)
- Window-Based Distribution Shift Detection for Deep Neural Networks [21.73028341299301]
We study the case of monitoring the healthy operation of a deep neural network (DNN) receiving a stream of data.
Using selective prediction principles, we propose a distribution deviation detection method for DNNs.
Our novel detection method performs on par with or better than the state of the art while requiring substantially less computation time.
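The paper's detector is built on selective-prediction principles; the sketch below substitutes a generic two-sample test over a sliding window of confidence scores, with `window` and `alpha` as hypothetical parameters.

```python
from collections import deque
from scipy.stats import ks_2samp

class WindowShiftDetector:
    def __init__(self, reference_scores, window=500, alpha=1e-4):
        self.reference = reference_scores   # confidences on held-out ID data
        self.buffer = deque(maxlen=window)  # most recent stream confidences
        self.alpha = alpha

    def update(self, confidence):
        self.buffer.append(confidence)
        if len(self.buffer) < self.buffer.maxlen:
            return False                    # wait until the window fills
        # A small p-value means the window's confidence distribution has
        # drifted away from the in-distribution reference.
        return ks_2samp(self.reference, list(self.buffer)).pvalue < self.alpha
```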
arXiv Detail & Related papers (2022-10-19T21:27:25Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
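That last sentence translates almost directly into code; a minimal numpy sketch (calibration details in the paper may differ):

```python
import numpy as np

def learn_threshold(source_conf, source_correct):
    # source_conf: (n,) max-softmax confidences on labeled source data.
    # source_correct: (n,) boolean per-example correctness.
    # Pick t so the fraction of source examples with confidence above t
    # matches the observed source accuracy.
    return np.quantile(source_conf, 1.0 - source_correct.mean())

def predict_target_accuracy(target_conf, threshold):
    # ATC's estimate: fraction of unlabeled target examples whose
    # confidence exceeds the source-calibrated threshold.
    return (target_conf > threshold).mean()
```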
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- WOOD: Wasserstein-based Out-of-Distribution Detection [6.163329453024915]
Training and test data for deep-neural-network-based classifiers are usually assumed to be sampled from the same distribution.
When some test samples are drawn from a distribution far away from that of the training samples, the trained neural network tends to make high-confidence predictions for these OOD samples.
We propose a Wasserstein-based out-of-distribution detection (WOOD) method to overcome these challenges.
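As a toy illustration of the scoring idea, the sketch below measures the Wasserstein distance from a softmax output to the nearest one-hot distribution; using the class index as a 1-D support is a simplifying assumption, not the paper's ground metric.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def wood_like_score(softmax_probs):
    """softmax_probs: (k,) predicted class distribution.

    ID samples should sit near some one-hot vertex of the simplex,
    so their distance to the nearest vertex is small.
    """
    k = len(softmax_probs)
    support = np.arange(k)  # simplifying 1-D ground metric over class ids
    return min(wasserstein_distance(support, support,
                                    softmax_probs, np.eye(k)[c])
               for c in range(k))           # higher => more OOD-like
```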
arXiv Detail & Related papers (2021-12-13T02:35:15Z)
- Hyperdimensional Computing for Efficient Distributed Classification with Randomized Neural Networks [5.942847925681103]
We study distributed classification, which can be employed in situations where data can neither be stored at a central location nor shared.
We propose a more efficient solution for distributed classification by making use of a lossy compression approach applied when sharing the local classifiers with other agents.
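The paper's encoding is hyperdimensional; the sketch below uses a plain random projection, seeded identically on every agent, as a generic stand-in for that lossy compression step.

```python
import numpy as np

def compress(weights, dim=1024, seed=0):
    # Lossy compression of a local classifier's weight matrix: project
    # onto a random basis that every agent can regenerate from the seed,
    # so only the low-dimensional projection has to be shared.
    rng = np.random.default_rng(seed)
    basis = rng.standard_normal((weights.shape[1], dim)) / np.sqrt(dim)
    return weights @ basis

def decompress(compressed, d, dim=1024, seed=0):
    # Approximate reconstruction with the transpose of the same basis.
    rng = np.random.default_rng(seed)
    basis = rng.standard_normal((d, dim)) / np.sqrt(dim)
    return compressed @ basis.T
```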
arXiv Detail & Related papers (2021-06-02T01:33:56Z)
- Adversarial Examples Detection with Bayesian Neural Network [57.185482121807716]
We propose a new framework to detect adversarial examples, motivated by the observation that random components can improve the smoothness of predictors.
We propose a novel Bayesian adversarial example detector, short for BATer, to improve the performance of adversarial example detection.
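BATer's detector works on hidden-layer distributions of a Bayesian network; the sketch below captures only the underlying intuition about random components, using Monte-Carlo dropout variance as a generic stand-in.

```python
import torch

def predictive_variance(model, x, n_samples=20):
    # Keep dropout active at inference and measure how much the
    # prediction moves across stochastic forward passes; adversarial
    # inputs tend to be less stable under such random components.
    model.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    model.eval()
    return probs.var(dim=0).sum(dim=-1)   # higher => more suspicious
```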
arXiv Detail & Related papers (2021-05-18T15:51:24Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
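Once such an ensemble is trained, the OOD score itself is just per-sample disagreement; a minimal numpy sketch of that final step:

```python
import numpy as np

def disagreement_score(member_probs):
    """member_probs: (m, n, k) softmax outputs of m ensemble members
    on a test batch of n samples over k classes.

    The ensemble is regularized to contradict itself only on OOD
    samples, so disagreement with the majority vote is the score.
    """
    preds = member_probs.argmax(axis=-1)          # (m, n) hard labels
    m, n = preds.shape
    scores = np.empty(n)
    for i in range(n):
        _, counts = np.unique(preds[:, i], return_counts=True)
        scores[i] = 1.0 - counts.max() / m        # 0 = full agreement
    return scores
```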
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- An Algorithm for Out-Of-Distribution Attack to Neural Network Encoder [1.7305469511995404]
Out-Of-Distribution (OOD) samples do not follow the distribution of the training set, and therefore the predicted class labels on OOD samples become meaningless.
We show that this type of method has no theoretical guarantee and is practically breakable by our OOD Attack algorithm.
We show that Glow likelihood-based OOD detection is breakable as well.
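A generic gradient-descent reading of such an attack, with all hyperparameters hypothetical: optimize a noise input so that its encoding matches an in-distribution feature vector, defeating detectors that score in feature space.

```python
import torch

def ood_attack(encoder, target_feature, shape, steps=200, lr=0.05):
    x = torch.rand(shape, requires_grad=True)   # start from noise (OOD)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Pull the encoding of the crafted input toward an ID feature.
        loss = torch.norm(encoder(x) - target_feature)
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)                  # keep a valid image range
    return x.detach()
```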
arXiv Detail & Related papers (2020-09-17T02:10:36Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed, utilizing the Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and false-alarm rate.
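A minimal scikit-optimize sketch of tuning a detector this way; the search space and the `validate_detector` helper are hypothetical stand-ins for the framework's actual pipeline.

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

def objective(params):
    n_estimators, contamination = params
    # validate_detector is a hypothetical helper that trains the anomaly
    # detector with these hyperparameters and returns validation accuracy.
    return 1.0 - validate_detector(n_estimators, contamination)

space = [Integer(50, 500, name="n_estimators"),
         Real(0.01, 0.2, name="contamination")]

# Gaussian-process-based Bayesian optimization over the search space.
result = gp_minimize(objective, space, n_calls=30, random_state=0)
print("best params:", result.x, "best accuracy:", 1.0 - result.fun)
```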
arXiv Detail & Related papers (2020-08-05T19:29:35Z)