Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift
- URL: http://arxiv.org/abs/2207.08977v1
- Date: Mon, 18 Jul 2022 23:14:44 GMT
- Title: Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift
- Authors: Ananya Kumar and Tengyu Ma and Percy Liang and Aditi Raghunathan
- Abstract summary: We find that ID-calibrated ensembles outperform prior state-of-the-art (based on self-training) on both ID and OOD accuracy.
We analyze this method in stylized settings, and identify two important conditions for ensembles to perform well both ID and OOD.
- Score: 108.30303219703845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We often see undesirable tradeoffs in robust machine learning where
out-of-distribution (OOD) accuracy is at odds with in-distribution (ID)
accuracy: a robust classifier obtained via specialized techniques such as
removing spurious features often has better OOD but worse ID accuracy compared
to a standard classifier trained via ERM. In this paper, we find that
ID-calibrated ensembles -- where we simply ensemble the standard and robust
models after calibrating on only ID data -- outperform prior state-of-the-art
(based on self-training) on both ID and OOD accuracy. On eleven natural
distribution shift datasets, ID-calibrated ensembles obtain the best of both
worlds: strong ID accuracy and OOD accuracy. We analyze this method in stylized
settings, and identify two important conditions for ensembles to perform well
both ID and OOD: (1) we need to calibrate the standard and robust models (on ID
data, because OOD data is unavailable), (2) OOD has no anticorrelated spurious
features.
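For concreteness, the following is a minimal sketch of the recipe described above, assuming temperature scaling as the ID-calibration step and a plain average of calibrated probabilities as the ensemble; the paper's exact calibration and combination choices may differ, and all variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, steps=200, lr=0.01):
    """Fit a scalar temperature T > 0 on held-out ID data by minimizing
    the NLL of softmax(logits / T) (standard temperature scaling)."""
    log_t = torch.zeros(1, requires_grad=True)  # parameterize T = exp(log_t) > 0
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(logits / log_t.exp(), labels).backward()
        opt.step()
    return log_t.exp().detach()

def id_calibrated_ensemble(logits_std, logits_rob, t_std, t_rob):
    """Average the ID-calibrated probabilities of the standard (ERM)
    and robust models."""
    p_std = F.softmax(logits_std / t_std, dim=-1)
    p_rob = F.softmax(logits_rob / t_rob, dim=-1)
    return 0.5 * (p_std + p_rob)

# Hypothetical usage, assuming precomputed logits and labels on an ID
# validation set and logits from both models on a test set:
# t_std = fit_temperature(val_logits_std, val_labels)
# t_rob = fit_temperature(val_logits_rob, val_labels)
# preds = id_calibrated_ensemble(test_logits_std, test_logits_rob,
#                                t_std, t_rob).argmax(dim=-1)
```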
Related papers
- Robust Fine-tuning of Zero-shot Models via Variance Reduction [56.360865951192324]
When fine-tuning zero-shot models, our desideratum is for the fine-tuned model to excel in both in-distribution (ID) and out-of-distribution (OOD) accuracy.
We propose a sample-wise ensembling technique that can simultaneously attain the best ID and OOD accuracy without the trade-offs.
arXiv Detail & Related papers (2024-11-11T13:13:39Z)
- Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection [24.557227100200215]
Out-of-distribution (OOD) detection is crucial for deploying reliable machine learning models in open-world applications.
Recent advances in CLIP-based OOD detection have shown promising results via regularizing prompt tuning with OOD features extracted from ID data.
We propose a novel framework, namely, Self-Calibrated Tuning (SCT), to mitigate this problem for effective OOD detection with only the given few-shot ID data.
arXiv Detail & Related papers (2024-11-05T02:29:16Z)
- RICASSO: Reinforced Imbalance Learning with Class-Aware Self-Supervised Outliers Exposure [21.809270017579806]
Deep learning models often face challenges from both imbalanced (long-tailed) and out-of-distribution (OOD) data.
Our research shows that data mixing can generate pseudo-OOD data that exhibit the features of both in-distribution (ID) and OOD data (see the mixing sketch after this list).
We propose a unified framework called Reinforced Imbalance Learning with Class-Aware Self-Supervised Outliers Exposure (RICASSO).
arXiv Detail & Related papers (2024-10-14T14:29:32Z)
- How Does Unlabeled Data Provably Help Out-of-Distribution Detection? [63.41681272937562]
Harnessing unlabeled in-the-wild data is non-trivial due to the heterogeneity of both in-distribution (ID) and out-of-distribution (OOD) data.
This paper introduces a new learning framework, SAL (Separate And Learn), that offers both strong theoretical guarantees and empirical effectiveness.
arXiv Detail & Related papers (2024-02-05T20:36:33Z)
- Distilling the Unknown to Unveil Certainty [66.29929319664167]
Out-of-distribution (OOD) detection is essential in identifying test samples that deviate from the in-distribution (ID) data upon which a standard network is trained.
This paper introduces OOD knowledge distillation, a pioneering learning framework applicable whether or not training ID data is available.
arXiv Detail & Related papers (2023-11-14T08:05:02Z)
- Towards Calibrated Robust Fine-Tuning of Vision-Language Models [97.19901765814431]
This work proposes a robust fine-tuning method that improves both OOD accuracy and confidence calibration simultaneously in vision-language models.
We show that both OOD classification and OOD calibration errors share an upper bound consisting of two terms involving ID data.
Based on this insight, we design a novel framework that conducts fine-tuning with a constrained multimodal contrastive loss enforcing a larger smallest singular value.
arXiv Detail & Related papers (2023-11-03T05:41:25Z)
- Training OOD Detectors in their Natural Habitats [31.565635192716712]
Out-of-distribution (OOD) detection is important for machine learning models deployed in the wild.
Recent methods use auxiliary outlier data to regularize the model for improved OOD detection (see the outlier-exposure sketch after this list).
We propose a novel framework that leverages wild mixture data, which naturally consists of both ID and OOD samples.
arXiv Detail & Related papers (2022-02-07T15:38:39Z)
- Provably Robust Detection of Out-of-distribution Data (almost) for free [124.14121487542613]
Deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data.
In this paper we propose a novel method where, from first principles, we combine a certifiable OOD detector with a standard classifier into an OOD-aware classifier.
In this way we achieve the best of two worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in prediction accuracy, and close to state-of-the-art OOD detection performance for non-manipulated OOD data.
arXiv Detail & Related papers (2021-06-08T11:40:49Z)
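The RICASSO entry above observes that data mixing can manufacture pseudo-OOD samples. Below is a minimal sketch of that general idea, using mixup-style interpolation between ID inputs and treating the mixtures as pseudo-outliers; the Beta-distributed mixing coefficient and all names are assumptions for illustration, not RICASSO's actual procedure.

```python
import torch

def pseudo_ood_by_mixing(id_batch, alpha=1.0):
    """Interpolate random pairs of ID inputs (mixup-style) to create
    pseudo-OOD samples that carry features of multiple ID classes."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(id_batch.size(0))
    return lam * id_batch + (1 - lam) * id_batch[perm]
```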
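Several entries above, such as "Training OOD Detectors in their Natural Habitats", build on regularizing a classifier with auxiliary outlier data. The sketch below shows a generic outlier-exposure-style objective behind that idea: standard cross-entropy on ID batches plus a term that pushes predictions on outlier batches toward the uniform distribution. It is an illustration of the common recipe, not the training objective of any specific paper listed here; the weight `lam` and all names are assumptions.

```python
import torch
import torch.nn.functional as F

def outlier_exposure_loss(model, id_x, id_y, ood_x, lam=0.5):
    """Cross-entropy on ID data plus a uniformity penalty on auxiliary
    outliers. The OOD term is the cross-entropy from a uniform target
    to softmax(model(ood_x)), which reduces to logsumexp(logits) minus
    the mean logit."""
    id_loss = F.cross_entropy(model(id_x), id_y)
    ood_logits = model(ood_x)
    ood_loss = (torch.logsumexp(ood_logits, dim=-1)
                - ood_logits.mean(dim=-1)).mean()
    return id_loss + lam * ood_loss
```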
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.