Towards Consistent Predictive Confidence through Fitted Ensembles
- URL: http://arxiv.org/abs/2106.12070v1
- Date: Tue, 22 Jun 2021 21:32:31 GMT
- Title: Towards Consistent Predictive Confidence through Fitted Ensembles
- Authors: Navid Kardan, Ankit Sharma and Kenneth O. Stanley
- Abstract summary: This paper introduces a separable concept learning framework to measure the performance of classifiers in the presence of OOD examples.
We present a new strong baseline for more consistent predictive confidence in deep models, called fitted ensembles.
- Score: 6.371992222487036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are behind many of the recent successes in machine
learning applications. However, these models can produce overconfident
decisions while encountering out-of-distribution (OOD) examples or making a
wrong prediction. This inconsistent predictive confidence limits the
integration of independently-trained learning models into a larger system. This
paper introduces a separable concept learning framework to realistically measure
the performance of classifiers in the presence of OOD examples. In this setup,
several instances of a classifier are trained on different parts of a partition
of the set of classes. Later, the performance of the combination of these
models is evaluated on a separate test set. Unlike current OOD detection
techniques, this framework does not require auxiliary OOD datasets and does not
separate classification from detection performance. Furthermore, we present a
new strong baseline for more consistent predictive confidence in deep models,
called fitted ensembles, where overconfident predictions are rectified by
transformed versions of the original classification task. Fitted ensembles can
naturally detect OOD examples without requiring auxiliary data by observing
contradictory predictions among their components. Experiments on MNIST, SVHN,
CIFAR-10/100, and ImageNet show that fitted ensembles significantly outperform
conventional ensembles on OOD examples and scale well.
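The detection mechanism described above, flagging inputs on which ensemble components contradict one another, can be sketched as follows. This is a minimal illustration of disagreement-based OOD scoring, not the paper's exact construction; `disagreement_ood_score` and the toy inputs are hypothetical.

```python
import numpy as np

def disagreement_ood_score(prob_sets):
    """Score inputs as OOD by how much ensemble members contradict each
    other. prob_sets: list of (n_samples, n_classes) softmax outputs,
    one array per ensemble member. Returns one score in [0, 1] per
    sample; higher means more disagreement (more likely OOD)."""
    # (n_members, n_samples) matrix of hard predictions.
    preds = np.stack([p.argmax(axis=1) for p in prob_sets])
    n_members, n_samples = preds.shape
    scores = np.empty(n_samples)
    for i in range(n_samples):
        # Fraction of members deviating from the majority vote.
        votes = np.bincount(preds[:, i])
        scores[i] = 1.0 - votes.max() / n_members
    return scores

# Two hypothetical members: they agree on sample 0, contradict on sample 1.
m1 = np.array([[0.9, 0.1], [0.8, 0.2]])
m2 = np.array([[0.8, 0.2], [0.1, 0.9]])
scores = disagreement_ood_score([m1, m2])
print(scores)  # sample 0 agrees (score 0.0), sample 1 contradicts (score 0.5)
```

A real fitted ensemble would combine models trained on transformed versions of the classification task; the scoring step above only shows how contradiction among components yields an OOD signal without auxiliary OOD data.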
Related papers
- Non-Linear Outlier Synthesis for Out-of-Distribution Detection [5.019613806273252]
We present NCIS, which enhances the quality of synthetic outliers by operating directly in the diffusion model's embedding space.
We demonstrate that these improvements yield new state-of-the-art OOD detection results on standard ImageNet100 and CIFAR100 benchmarks.
arXiv Detail & Related papers (2024-11-20T09:47:29Z) - DPU: Dynamic Prototype Updating for Multimodal Out-of-Distribution Detection [10.834698906236405]
Out-of-distribution (OOD) detection is essential for ensuring the robustness of machine learning models.
Recent advances in multimodal models have demonstrated the potential of leveraging multiple modalities to enhance detection performance.
We propose Dynamic Prototype Updating (DPU), a novel plug-and-play framework for multimodal OOD detection.
arXiv Detail & Related papers (2024-11-12T22:43:16Z) - Scalable Ensemble Diversification for OOD Generalization and Detection [68.8982448081223]
SED identifies hard training samples on the fly and encourages the ensemble members to disagree on these.
We show how to avoid the expensive computations in existing methods of exhaustive pairwise disagreements across models.
For OOD generalization, we observe large benefits from the diversification in multiple settings including output-space (classical) ensembles and weight-space ensembles (model soups).
arXiv Detail & Related papers (2024-09-25T10:30:24Z) - Towards Calibrated Robust Fine-Tuning of Vision-Language Models [97.19901765814431]
This work proposes a robust fine-tuning method that improves both OOD accuracy and confidence calibration simultaneously in vision language models.
We show that both OOD classification and OOD calibration errors share an upper bound consisting of two terms defined on ID data.
Based on this insight, we design a novel framework that conducts fine-tuning with a constrained multimodal contrastive loss enforcing a larger smallest singular value.
arXiv Detail & Related papers (2023-11-03T05:41:25Z) - Large Class Separation is not what you need for Relational
Reasoning-based OOD Detection [12.578844450586]
Out-Of-Distribution (OOD) detection methods provide a solution by identifying semantic novelty.
Most of these methods leverage a learning stage on the known data, which means training (or fine-tuning) a model to capture the concept of normality.
A viable alternative is that of evaluating similarities in the embedding space produced by large pre-trained models without any further learning effort.
arXiv Detail & Related papers (2023-07-12T14:10:15Z) - Pseudo-OOD training for robust language models [78.15712542481859]
OOD detection is a key component of a reliable machine-learning model for any industry-scale application.
We propose POORE (POsthoc pseudo-Ood REgularization), which generates pseudo-OOD samples using in-distribution (IND) data.
We extensively evaluate our framework on three real-world dialogue systems, achieving new state-of-the-art in OOD detection.
arXiv Detail & Related papers (2022-10-17T14:32:02Z) - Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
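The idea of encouraging same-class instances to have similar representations can be illustrated with a generic supervised-contrastive loss. This is a toy numpy sketch of the standard SupCon-style objective, not the paper's exact fairness-aware formulation; `supcon_loss` and the example embeddings are hypothetical.

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """Toy supervised contrastive loss: for each anchor, positives are
    the other samples with the same label. Minimizing the loss pulls
    same-class embeddings together and pushes other classes apart."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                      # pairwise similarities
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)          # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # Negative mean log-probability over all positive pairs.
    return -np.where(pos, log_prob, 0.0).sum() / pos.sum()

labels = np.array([0, 0, 1, 1])
# Same-class embeddings aligned vs. same-class embeddings pointing apart.
tight = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.1, 0.99]])
mixed = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.1], [0.1, 1.0]])
```

Here `supcon_loss(tight, labels)` is lower than `supcon_loss(mixed, labels)`, since in `tight` the same-class pairs are already close in the embedding space.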
arXiv Detail & Related papers (2021-09-22T10:47:51Z) - EARLIN: Early Out-of-Distribution Detection for Resource-efficient
Collaborative Inference [4.826988182025783]
Collaborative inference enables resource-constrained edge devices to make inferences by uploading inputs to a server.
While this setup works cost-effectively for successful inferences, it severely underperforms when the model faces input samples on which it was not trained.
We propose a novel lightweight OOD detection approach that mines important features from the shallow layers of a pretrained CNN model.
arXiv Detail & Related papers (2021-06-25T18:43:23Z) - Mean Embeddings with Test-Time Data Augmentation for Ensembling of
Representations [8.336315962271396]
We look at the ensembling of representations and propose mean embeddings with test-time augmentation (MeTTA).
MeTTA significantly boosts the quality of linear evaluation on ImageNet for both supervised and self-supervised models.
We believe that extending the success of ensembles to inferring higher-quality representations is an important step that will open many new applications of ensembling.
arXiv Detail & Related papers (2021-06-15T10:49:46Z) - Trusted Multi-View Classification [76.73585034192894]
We propose a novel multi-view classification method, termed trusted multi-view classification.
It provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
The proposed algorithm jointly utilizes multiple views to promote both classification reliability and robustness.
arXiv Detail & Related papers (2021-02-03T13:30:26Z) - Learn what you can't learn: Regularized Ensembles for Transductive
Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.