The Hitchhiker's Guide to Prior-Shift Adaptation
- URL: http://arxiv.org/abs/2106.11695v1
- Date: Tue, 22 Jun 2021 11:55:51 GMT
- Title: The Hitchhiker's Guide to Prior-Shift Adaptation
- Authors: Tomas Sipka, Milan Sulc, Jiri Matas
- Abstract summary: We propose a novel method to address a known issue of prior estimation methods based on confusion matrices.
Experiments on fine-grained image classification datasets provide insight into the best practice of prior shift estimation.
Applying the best practice to two tasks with naturally imbalanced priors, learning from web-crawled images and plant species classification, increased the recognition accuracy by 1.1% and 3.4%, respectively.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many computer vision classification tasks, class priors at test time often
differ from priors on the training set. In the case of such prior shift,
classifiers must be adapted correspondingly to maintain close to optimal
performance. This paper analyzes methods for adaptation of probabilistic
classifiers to new priors and for estimating new priors on an unlabeled test
set. We propose a novel method to address a known issue of prior estimation
methods based on confusion matrices, where inconsistent estimates of decision
probabilities and confusion matrices lead to negative values in the estimated
priors. Experiments on fine-grained image classification datasets provide
insight into the best practice of prior shift estimation and classifier
adaptation and show that the proposed method achieves state-of-the-art results
in prior adaptation. Applying the best practice to two tasks with naturally
imbalanced priors, learning from web-crawled images and plant species
classification, increased the recognition accuracy by 1.1% and 3.4%
respectively.
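The prior-shift correction the abstract refers to is standard Bayes reweighting of posteriors; the confusion-matrix prior estimator below, including the clip-and-renormalize fix for negative entries, is only a simplified illustration of the known issue, not the paper's proposed method:

```python
import numpy as np

def adapt_to_new_priors(probs, train_priors, test_priors):
    """Re-weight posteriors for a prior shift.

    Standard Bayes correction: p_test(y|x) is proportional to
    p_train(y|x) * pi_test(y) / pi_train(y), renormalized per sample.
    """
    w = np.asarray(test_priors) / np.asarray(train_priors)
    adapted = probs * w  # broadcasts the weight vector over samples
    return adapted / adapted.sum(axis=1, keepdims=True)

def estimate_test_priors(confusion, decision_rates):
    """Estimate test-set priors from a column-normalized confusion matrix.

    confusion[i, j] = p(decision = i | true class = j), estimated on held-out data.
    decision_rates[i] = fraction of test samples the classifier assigned to class i.
    Solving confusion @ priors = decision_rates can yield negative entries when
    the two estimates are inconsistent; clipping and renormalizing is a crude
    illustrative remedy only.
    """
    priors, *_ = np.linalg.lstsq(confusion, decision_rates, rcond=None)
    priors = np.clip(priors, 0.0, None)
    return priors / priors.sum()
```

For example, a sample scored [0.6, 0.4] under uniform training priors shifts toward class 0 when the test priors are [0.8, 0.2].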
Related papers
- Bayesian Test-Time Adaptation for Vision-Language Models [51.93247610195295]
Test-time adaptation with pre-trained vision-language models, such as CLIP, aims to adapt the model to new, potentially out-of-distribution test data.
We propose a novel approach, Bayesian Class Adaptation (BCA), which, in addition to continuously updating class embeddings to adapt the likelihood, uses the posterior of incoming samples to continuously update the prior for each class embedding.
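The continuous prior update described above can be sketched as a running average of incoming posteriors per class; the exponential-moving-average rule and `momentum` parameter below are assumptions for illustration, not BCA's exact algorithm:

```python
import numpy as np

def update_class_priors(priors, posterior, momentum=0.99):
    """Hypothetical online prior update: blend the current prior estimate
    with one incoming sample's posterior, then renormalize."""
    priors = momentum * np.asarray(priors) + (1.0 - momentum) * np.asarray(posterior)
    return priors / priors.sum()
```

A sample confidently assigned to one class nudges that class's prior upward while keeping the estimate a valid distribution.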
arXiv Detail & Related papers (2025-03-12T10:42:11Z) - Prior2Posterior: Model Prior Correction for Long-Tailed Learning [0.41248472494152805]
We propose a novel approach to accurately model the effective prior of a trained model using a posteriori probabilities.
We show that the proposed approach achieves new state-of-the-art (SOTA) on several benchmark datasets from the long-tail literature.
arXiv Detail & Related papers (2024-12-21T08:49:02Z) - Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm that promises strong performance and better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z) - A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z) - Learning Acceptance Regions for Many Classes with Anomaly Detection [19.269724165953274]
Many existing set-valued classification methods do not consider the possibility that a new class that never appeared in the training data appears in the test data.
We propose a Generalized Prediction Set (GPS) approach to estimate the acceptance regions while considering the possibility of a new class in the test data.
Unlike previous methods, the proposed method achieves a good balance between accuracy, efficiency, and anomaly detection rate.
arXiv Detail & Related papers (2022-09-20T19:40:33Z) - Towards Diverse Evaluation of Class Incremental Learning: A Representation Learning Perspective [67.45111837188685]
Class incremental learning (CIL) algorithms aim to continually learn new object classes from incrementally arriving data.
We experimentally analyze neural network models trained by CIL algorithms using various evaluation protocols in representation learning.
arXiv Detail & Related papers (2022-06-16T11:44:11Z) - Self-Certifying Classification by Linearized Deep Assignment [65.0100925582087]
We propose a novel class of deep predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm.
Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables learning posterior distributions on the hypothesis space.
arXiv Detail & Related papers (2022-01-26T19:59:14Z) - Metalearning Linear Bandits by Prior Update [7.519872646378836]
Fully Bayesian approaches assume that problem parameters are generated from a known prior, while in practice, such information is often lacking.
This problem is exacerbated in decision-making setups with partial information, where using a misspecified prior may lead to poor exploration and inferior performance.
In this work we prove, in the context of linear bandits and Gaussian priors, that as long as the prior estimate is sufficiently close to the true prior, the performance of an algorithm that uses the misspecified prior is close to that of an algorithm that uses the true prior.
arXiv Detail & Related papers (2021-07-12T11:17:01Z) - Adaptive calibration for binary classification [0.20072624123275526]
Calibration is important in applications of machine learning, where the quality of a trained predictor may drop significantly during deployment.
Our techniques are based on recent work on conformal test martingales and older work on prediction with expert advice, namely tracking the best expert.
arXiv Detail & Related papers (2021-07-04T20:32:52Z) - Integrated Optimization of Predictive and Prescriptive Tasks [0.0]
We propose a new framework that directly integrates predictive tasks into prescriptive tasks.
We train the parameters of the predictive algorithm within a prescription problem via bilevel optimization techniques.
arXiv Detail & Related papers (2021-01-02T02:43:10Z) - Performance-Agnostic Fusion of Probabilistic Classifier Outputs [2.4206828137867107]
We propose a method for combining probabilistic outputs of classifiers to make a single consensus class prediction.
Our proposed method works well in situations where accuracy is the performance metric.
It does not output calibrated probabilities, so it is not suitable in situations where such probabilities are required for further processing.
arXiv Detail & Related papers (2020-09-01T16:53:29Z) - Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.