Predicted Embedding Power Regression for Large-Scale Out-of-Distribution Detection
- URL: http://arxiv.org/abs/2303.04115v1
- Date: Tue, 7 Mar 2023 18:28:39 GMT
- Title: Predicted Embedding Power Regression for Large-Scale Out-of-Distribution Detection
- Authors: Hong Yang, William Gebhardt, Alexander G. Ororbia, Travis Desell
- Abstract summary: We develop a novel approach that calculates the probability of the predicted class label based on label distributions learned during the training process.
Our method performs better than current state-of-the-art methods with only a negligible increase in compute cost.
- Score: 77.1596426383046
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Out-of-distribution (OOD) inputs can compromise the performance and safety of
real world machine learning systems. While many methods exist for OOD detection
and work well on small scale datasets with lower resolution and few classes,
few methods have been developed for large-scale OOD detection. Existing
large-scale methods generally depend on maximum classification probability,
such as the state-of-the-art grouped softmax method. In this work, we develop a
novel approach that calculates the probability of the predicted class label
based on label distributions learned during the training process. Our method
performs better than current state-of-the-art methods with only a negligible
increase in compute cost. We evaluate our method against contemporary methods
across 14 datasets and achieve a statistically significant improvement with
respect to AUROC (84.2 vs 82.4) and AUPR (96.2 vs 93.7).
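The abstract notes that existing large-scale methods generally depend on maximum classification probability. As background, here is a minimal sketch of that maximum-softmax-probability (MSP) baseline, not the paper's own method; the function names and threshold value are illustrative:

```python
import numpy as np

def msp_ood_score(logits):
    """MSP baseline: score each input by its maximum softmax probability.
    Higher score = more confident = more likely in-distribution."""
    logits = np.asarray(logits, dtype=float)
    z = logits - logits.max(axis=-1, keepdims=True)  # subtract max for stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def flag_ood(logits, threshold=0.5):
    """Flag inputs whose MSP falls below a confidence threshold."""
    return msp_ood_score(logits) < threshold
```

A confidently classified input (one dominant logit) scores near 1 and is kept; a flat, uncertain prediction scores near 1/num_classes and is flagged.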
Related papers
- A Rate-Distortion View of Uncertainty Quantification [36.85921945174863]
In supervised learning, understanding an input's proximity to the training data can help a model decide whether it has sufficient evidence for reaching a reliable prediction.
We introduce Distance Aware Bottleneck (DAB), a new method for enriching deep neural networks with this property.
arXiv Detail & Related papers (2024-06-16T01:33:22Z)
- Optimal Parameter and Neuron Pruning for Out-of-Distribution Detection [36.4610463573214]
We propose an Optimal Parameter and Neuron Pruning (OPNP) approach to detect out-of-distribution (OOD) samples.
Our proposal is training-free, compatible with other post-hoc methods, and exploits information from all training data.
arXiv Detail & Related papers (2024-02-04T07:31:06Z)
- How to Overcome Curse-of-Dimensionality for Out-of-Distribution Detection? [29.668859994222238]
We propose a novel framework, Subspace Nearest Neighbor (SNN), for OOD detection.
In training, our method regularizes the model and its feature representation by leveraging the most relevant subset of dimensions.
Compared to the current best distance-based method, SNN reduces the average FPR95 by 15.96% on the CIFAR-100 benchmark.
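The SNN entry above describes distance-based OOD detection restricted to the most relevant feature dimensions. A simplified sketch of that idea, assuming the relevant dimensions are already known (SNN itself selects them during training, which is omitted here):

```python
import numpy as np

def knn_ood_score(train_feats, test_feats, dims, k=5):
    """Score test samples by distance to the k-th nearest training feature,
    computed only over a chosen subset of dimensions (`dims`).
    Higher score = farther from training data = more likely OOD."""
    a = np.asarray(train_feats, dtype=float)[:, dims]
    b = np.asarray(test_feats, dtype=float)[:, dims]
    # pairwise distances between every test and every train sample
    d = np.linalg.norm(b[:, None, :] - a[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k - 1]  # distance to k-th nearest neighbor
```

Restricting the distance to a subspace is the key idea: irrelevant dimensions otherwise dominate the norm in high dimensions, which is the curse-of-dimensionality issue the paper targets.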
arXiv Detail & Related papers (2023-12-22T06:04:09Z)
- B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z)
- Boosting Out-of-Distribution Detection with Multiple Pre-trained Models [41.66566916581451]
Post hoc detection utilizing pre-trained models has shown promising performance and can be scaled to large-scale problems.
We propose a detection enhancement method by ensembling multiple detection decisions derived from a zoo of pre-trained models.
Our method substantially improves the relative performance by 65.40% and 26.96% on the CIFAR10 and ImageNet benchmarks.
arXiv Detail & Related papers (2022-12-24T12:11:38Z)
- Weighted Ensemble Self-Supervised Learning [67.24482854208783]
Ensembling has proven to be a powerful technique for boosting model performance.
We develop a framework that permits data-dependent weighted cross-entropy losses.
Our method outperforms both on multiple evaluation metrics on ImageNet-1K.
arXiv Detail & Related papers (2022-11-18T02:00:17Z)
- Uncertainty-based Meta-Reinforcement Learning for Robust Radar Tracking [3.012203489670942]
This paper proposes an uncertainty-based Meta-Reinforcement Learning (Meta-RL) approach with Out-of-Distribution (OOD) detection.
Using information about task complexity, the proposed algorithm can indicate when tracking is reliable.
We show that our method outperforms related Meta-RL approaches on unseen tracking scenarios by 16% in peak performance and outperforms the baseline by 35%.
arXiv Detail & Related papers (2022-10-26T07:48:56Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
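The ATC recipe described above can be sketched in a few lines; the function names and the exact threshold-selection rule (matching the fraction above the threshold to source accuracy) are illustrative assumptions, not the paper's verbatim procedure:

```python
import numpy as np

def learn_atc_threshold(source_conf, source_correct):
    """Pick threshold t on labeled source data so that the fraction of
    source examples with confidence >= t equals the source accuracy."""
    conf = np.sort(np.asarray(source_conf, dtype=float))
    acc = float(np.mean(source_correct))
    # the top `acc` fraction of sorted confidences should lie at or above t
    k = int(round((1.0 - acc) * len(conf)))
    k = min(max(k, 0), len(conf) - 1)
    return conf[k]

def predict_target_accuracy(target_conf, threshold):
    """Predicted accuracy = fraction of unlabeled target examples
    whose confidence reaches the learned threshold."""
    return float(np.mean(np.asarray(target_conf, dtype=float) >= threshold))
```

The appeal is that the target-side step needs no labels at all: only model confidences on the unlabeled target set.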
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Towards Reducing Labeling Cost in Deep Object Detection [61.010693873330446]
We propose a unified framework for active learning, that considers both the uncertainty and the robustness of the detector.
Our method is able to pseudo-label the very confident predictions, suppressing a potential distribution drift.
arXiv Detail & Related papers (2021-06-22T16:53:09Z)
- Robust Out-of-distribution Detection for Neural Networks [51.19164318924997]
We show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs.
We propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples.
arXiv Detail & Related papers (2020-03-21T17:46:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.