Self-Certifying Classification by Linearized Deep Assignment
- URL: http://arxiv.org/abs/2201.11162v1
- Date: Wed, 26 Jan 2022 19:59:14 GMT
- Title: Self-Certifying Classification by Linearized Deep Assignment
- Authors: Bastian Boll, Alexander Zeilmann, Stefania Petra, Christoph Schnörr
- Abstract summary: We propose a novel class of deep predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm.
Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables learning posterior distributions on the hypothesis space.
- Score: 65.0100925582087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel class of deep stochastic predictors for classifying metric
data on graphs within the PAC-Bayes risk certification paradigm. Classifiers
are realized as linearly parametrized deep assignment flows with random initial
conditions. Building on the recent PAC-Bayes literature and data-dependent
priors, this approach enables (i) using risk bounds as training objectives for
learning posterior distributions on the hypothesis space and (ii) computing
tight out-of-sample risk certificates of randomized classifiers more
efficiently than related work. Comparison with empirical test set errors
illustrates the performance and practicality of this self-certifying
classification method.
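The certificates referred to above are typically obtained by inverting a PAC-Bayes-kl inequality. The following is a minimal numerical sketch of a Maurer-style bound, kl(L̂ ‖ L) ≤ (KL(ρ‖π) + ln(2√n/δ))/n, inverted by bisection; it is an illustration under these assumptions, not the authors' implementation, and all function names are ours:

```python
import math

def binary_kl(q, p):
    # kl(q || p) between Bernoulli(q) and Bernoulli(p) distributions.
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse(q, c, tol=1e-10):
    # Largest p in [q, 1] with kl(q || p) <= c, found by bisection
    # (binary_kl is increasing in p on that interval).
    lo, hi = q, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if binary_kl(q, mid) <= c:
            lo = mid
        else:
            hi = mid
    return lo

def risk_certificate(emp_risk, kl_post_prior, n, delta=0.05):
    # Maurer-style PAC-Bayes-kl bound, holding with probability 1 - delta:
    #   kl(emp_risk || true_risk) <= (KL(rho || pi) + ln(2 sqrt(n) / delta)) / n
    rhs = (kl_post_prior + math.log(2.0 * math.sqrt(n) / delta)) / n
    return kl_inverse(emp_risk, rhs)
```

The certificate tightens as the posterior stays close to the prior (small KL term) or as the number of certification samples n grows, which is why data-dependent priors help.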
Related papers
- Risk-based Calibration for Probabilistic Classifiers [4.792851066169872]
We introduce a general iterative procedure called risk-based calibration (RC) to minimize the empirical risk under the 0-1 loss.
RC improves the empirical error of the original closed-form learning algorithms and, more notably, consistently outperforms the gradient descent approach.
arXiv Detail & Related papers (2024-09-05T14:06:56Z)
- Learning Robust Classifiers with Self-Guided Spurious Correlation Mitigation [26.544938760265136]
Deep neural classifiers often rely on spurious correlations between incidental input attributes and targets to make predictions.
We propose a self-guided spurious correlation mitigation framework.
We show that training the classifier to distinguish different prediction behaviors reduces its reliance on spurious correlations without knowing them a priori.
arXiv Detail & Related papers (2024-05-06T17:12:21Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- Adaptive conformal classification with noisy labels [22.33857704379073]
The paper develops novel conformal prediction methods for classification tasks that can automatically adapt to random label contamination in the calibration sample.
This is made possible by a precise characterization of the effective coverage inflation suffered by standard conformal inferences in the presence of label contamination.
The advantages of the proposed methods are demonstrated through extensive simulations and an application to object classification with the CIFAR-10H image data set.
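The split conformal recipe that such noise-adaptive methods build on can be sketched as follows. This is a minimal illustration of standard split conformal classification (calibrated threshold on 1 − p_y scores), not the paper's contamination-adaptive procedure; the function names are ours:

```python
import math
import numpy as np

def conformal_quantile(cal_probs, cal_labels, alpha=0.1):
    # Nonconformity score: 1 minus the probability assigned to the true label.
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample corrected quantile level ceil((n+1)(1-alpha))/n.
    level = min(math.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_set(probs, qhat):
    # Keep every label whose score falls within the calibrated threshold;
    # the resulting set covers the true label with probability >= 1 - alpha.
    return np.where(1.0 - probs <= qhat)[0]
```

Label contamination in the calibration sample distorts the score distribution, which inflates or deflates the effective coverage of this threshold; characterizing that inflation precisely is what enables the paper's adaptive correction.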
arXiv Detail & Related papers (2023-09-10T17:35:43Z)
- Adaptive Dimension Reduction and Variational Inference for Transductive Few-Shot Classification [2.922007656878633]
We propose a new clustering method based on Variational Bayesian inference, further improved by Adaptive Dimension Reduction.
Our proposed method significantly improves accuracy in the realistic unbalanced transductive setting on various Few-Shot benchmarks.
arXiv Detail & Related papers (2022-09-18T10:29:02Z)
- Risk Consistent Multi-Class Learning from Label Proportions [64.0125322353281]
This study addresses a multiclass learning from label proportions (MCLLP) setting in which training instances are provided in bags.
Most existing MCLLP methods impose bag-wise constraints on the prediction of instances or assign them pseudo-labels.
A risk-consistent method is proposed for instance classification using the empirical risk minimization framework.
arXiv Detail & Related papers (2022-03-24T03:49:04Z)
- When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution.
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
arXiv Detail & Related papers (2021-09-28T02:55:42Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.