Normalized Label Distribution: Towards Learning Calibrated, Adaptable
and Efficient Activation Maps
- URL: http://arxiv.org/abs/2012.06876v1
- Date: Sat, 12 Dec 2020 17:54:01 GMT
- Title: Normalized Label Distribution: Towards Learning Calibrated, Adaptable
and Efficient Activation Maps
- Authors: Utkarsh Uppal, Bharat Giddwani
- Abstract summary: Vulnerability of models to data aberrations and adversarial attacks influences their ability to demarcate distinct class boundaries efficiently.
We study the significance of ground-truth distribution changes on the performance and generalizability of various state-of-the-art networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The vulnerability of models to data aberrations and adversarial attacks
influences their ability to demarcate distinct class boundaries efficiently.
The network's confidence and uncertainty play a pivotal role in weight
adjustments and in the extent to which such attacks are acknowledged. In this paper, we
address the trade-off between the accuracy and calibration potential of a
classification network. We study the significance of ground-truth distribution
changes on the performance and generalizability of various state-of-the-art
networks and compare the proposed method's response to unanticipated attacks.
Furthermore, we demonstrate the role of label-smoothing regularization and
normalization in yielding better generalizability and calibrated probability
distribution by proposing normalized soft labels to enhance the calibration of
feature maps. Subsequently, we substantiate our inference by translating
conventional convolutions to padding-based partial convolution to establish the
tangible impact of corrections in reinforcing the performance and convergence
rate. We graphically elucidate the implications of such variations with the
critical purpose of corroborating the reliability and reproducibility across
multiple datasets.
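The label-smoothing regularization and normalized soft labels mentioned above can be made concrete with a small sketch. The snippet below is a minimal PyTorch-style illustration of standard label smoothing and a soft-label cross-entropy loss, assuming a smoothing factor `eps`; the paper's specific normalization of the smoothed distribution is not reproduced here, so treat this as background on the mechanism rather than the proposed method.

```python
import torch
import torch.nn.functional as F

def smooth_labels(targets: torch.Tensor, num_classes: int, eps: float = 0.1) -> torch.Tensor:
    """Standard label smoothing: y = (1 - eps) * one_hot + eps / K.
    Each smoothed target row remains a valid probability distribution."""
    one_hot = F.one_hot(targets, num_classes).float()
    return (1.0 - eps) * one_hot + eps / num_classes

def soft_cross_entropy(logits: torch.Tensor, soft_targets: torch.Tensor) -> torch.Tensor:
    """Cross-entropy against a full target distribution instead of hard labels."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

# Toy usage: a batch of 4 examples over 10 classes.
logits = torch.randn(4, 10)
targets = torch.tensor([3, 1, 7, 0])
loss = soft_cross_entropy(logits, smooth_labels(targets, num_classes=10))
```

The abstract also reports replacing conventional convolutions with padding-based partial convolutions. A hedged sketch in the spirit of partial-convolution-based padding (Liu et al., 2018), not necessarily the exact layer used in the paper, rescales each output by the ratio of the full kernel window to the number of valid, in-image pixels under that window:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Conv2d):
    """Partial-convolution-based padding: zero-padded border pixels are treated
    as missing, and outputs near the border are rescaled accordingly."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Count the valid (in-image) pixels under each sliding window.
        with torch.no_grad():
            ones = torch.ones(1, 1, x.size(2), x.size(3), device=x.device, dtype=x.dtype)
            window = torch.ones(1, 1, *self.kernel_size, device=x.device, dtype=x.dtype)
            valid = F.conv2d(ones, window, stride=self.stride,
                             padding=self.padding, dilation=self.dilation)
            ratio = float(self.kernel_size[0] * self.kernel_size[1]) / valid.clamp(min=1.0)

        out = super().forward(x)
        if self.bias is not None:
            bias = self.bias.view(1, -1, 1, 1)
            return (out - bias) * ratio + bias
        return out * ratio

# Drop-in replacement for a padded convolution, e.g. the first layer of a CNN.
conv = PartialConv2d(3, 64, kernel_size=3, padding=1)
features = conv(torch.randn(2, 3, 32, 32))
```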
Related papers
- Harnessing the Power of Vicinity-Informed Analysis for Classification under Covariate Shift [9.530897053573186]
Transfer learning enhances prediction accuracy on a target distribution by leveraging data from a source distribution.
This paper introduces a novel dissimilarity measure that utilizes vicinity information, i.e., the local structure of data points.
We characterize the excess error using the proposed measure and demonstrate faster or competitive convergence rates compared to previous techniques.
arXiv Detail & Related papers (2024-05-27T07:55:27Z) - Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z) - The Lipschitz-Variance-Margin Tradeoff for Enhanced Randomized Smoothing [85.85160896547698]
Real-life applications of deep neural networks are hindered by their unsteady predictions when faced with noisy inputs and adversarial attacks.
We show how to design an efficient classifier with a certified radius by relying on noise injection into the inputs.
Our novel certification procedure allows us to use pre-trained models with randomized smoothing, effectively improving the current certification radius in a zero-shot manner.
arXiv Detail & Related papers (2023-09-28T22:41:47Z) - Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
arXiv Detail & Related papers (2023-05-17T17:47:19Z) - ARBEx: Attentive Feature Extraction with Reliability Balancing for Robust Facial Expression Learning [5.648318448953635]
ARBEx is a novel attentive feature extraction framework driven by Vision Transformer.
We employ learnable anchor points in the embedding space with label distributions and a multi-head self-attention mechanism to optimize performance against weak predictions.
Our strategy outperforms current state-of-the-art methodologies, according to extensive experiments conducted in a variety of contexts.
arXiv Detail & Related papers (2023-05-02T15:10:01Z) - A Fuzzy-set-based Joint Distribution Adaptation Method for Regression
and its Application to Online Damage Quantification for Structural Digital
Twin [1.3008516948825726]
This study first proposes a novel domain adaptation method, the Online Fuzzy-set-based Joint Distribution Adaptation for Regression.
By converting continuous real-valued labels to fuzzy class labels via fuzzy sets, the method measures the conditional distribution discrepancy.
A framework of online damage quantification integrated with the proposed domain adaptation method is presented.
arXiv Detail & Related papers (2022-11-03T13:09:08Z) - Adaptive Dimension Reduction and Variational Inference for Transductive
Few-Shot Classification [2.922007656878633]
We propose a new clustering method based on Variational Bayesian inference, further improved by Adaptive Dimension Reduction.
Our proposed method significantly improves accuracy in the realistic unbalanced transductive setting on various Few-Shot benchmarks.
arXiv Detail & Related papers (2022-09-18T10:29:02Z) - Certifying Model Accuracy under Distribution Shifts [151.67113334248464]
We present provable robustness guarantees on the accuracy of a model under bounded Wasserstein shifts of the data distribution.
We show that a simple procedure that randomizes the input of the model within a transformation space is provably robust to distributional shifts under the transformation.
arXiv Detail & Related papers (2022-01-28T22:03:50Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z) - GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective estimation can still be achieved in important applications.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions.
The resulting algorithm, GenDICE, is straightforward and effective.
arXiv Detail & Related papers (2020-02-21T00:27:52Z) - Calibrate and Prune: Improving Reliability of Lottery Tickets Through
Prediction Calibration [40.203492372949576]
Supervised models with uncalibrated confidences tend to be overconfident even when making wrong predictions.
We study how explicit confidence calibration in the over-parameterized network impacts the quality of the resulting lottery tickets.
Our empirical studies reveal that including calibration mechanisms consistently leads to more effective lottery tickets (a generic calibration-error sketch follows this entry).
arXiv Detail & Related papers (2020-02-10T15:42:36Z)
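Calibration quality, the central concern of the main paper and of this last entry, is commonly measured with the Expected Calibration Error. The function below is a generic, minimal implementation of that metric (not code from either paper): it bins predictions by confidence and averages the gap between accuracy and confidence across bins.

```python
import torch

def expected_calibration_error(probs: torch.Tensor, labels: torch.Tensor, n_bins: int = 15) -> float:
    """ECE: partition predictions into confidence bins and accumulate the
    absolute accuracy-confidence gap, weighted by the fraction of samples
    in each bin. `probs` is (N, K) softmax output, `labels` is (N,)."""
    confidences, predictions = probs.max(dim=1)
    accuracies = predictions.eq(labels).float()
    edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(1)
    for lower, upper in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lower) & (confidences <= upper)
        prop_in_bin = in_bin.float().mean()
        if prop_in_bin.item() > 0:
            gap = (accuracies[in_bin].mean() - confidences[in_bin].mean()).abs()
            ece += gap * prop_in_bin
    return ece.item()

# Toy usage on random softmax outputs.
probs = torch.softmax(torch.randn(100, 10), dim=1)
labels = torch.randint(0, 10, (100,))
print(expected_calibration_error(probs, labels))
```

A lower ECE indicates that predicted confidences track empirical accuracy more closely, which is the sense in which the abstract speaks of a calibrated probability distribution.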
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.