Learn to Accumulate Evidence from All Training Samples: Theory and
Practice
- URL: http://arxiv.org/abs/2306.11113v2
- Date: Sat, 24 Jun 2023 17:45:51 GMT
- Title: Learn to Accumulate Evidence from All Training Samples: Theory and
Practice
- Authors: Deep Pandey and Qi Yu
- Abstract summary: Evidential deep learning offers a principled and computationally efficient way to turn a deterministic neural network uncertainty-aware.
Existing evidential activation functions create zero evidence regions, which prevent the model from learning from training samples that fall into such regions.
A deeper analysis of evidential activation functions based on our theoretical underpinning inspires the design of a novel regularizer.
- Score: 7.257751371276488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Evidential deep learning, built upon belief theory and subjective logic,
offers a principled and computationally efficient way to make a deterministic
neural network uncertainty-aware. The resulting evidential models can quantify
fine-grained uncertainty using the learned evidence. To ensure theoretically
sound evidential models, the evidence needs to be non-negative, which requires
special activation functions for model training and inference. This constraint
often leads to inferior predictive performance compared to standard softmax
models, making it challenging to extend them to many large-scale datasets. To
unveil the real cause of this undesired behavior, we theoretically investigate
evidential models and identify a fundamental limitation that explains the
inferior performance: existing evidential activation functions create zero
evidence regions, which prevent the model from learning from training samples
that fall into such regions. A deeper analysis of evidential activation functions
based on our theoretical underpinning inspires the design of a novel
regularizer that effectively alleviates this fundamental limitation. Extensive
experiments over many challenging real-world datasets and settings confirm our
theoretical findings and demonstrate the effectiveness of our proposed
approach.
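To make the zero-evidence limitation concrete, here is a minimal sketch in PyTorch (illustrative only, not the authors' released code). A ReLU evidential activation maps every sample whose logits are all negative to zero evidence, and because the gradient of ReLU is zero there, such samples contribute nothing to learning; the `zero_evidence_penalty` below is a hypothetical stand-in for the paper's regularizer that discourages vanishing total evidence.

```python
import torch
import torch.nn.functional as F

def relu_evidence(logits):
    # Standard evidential activation: negative logits become exactly zero
    # evidence, and the ReLU gradient is zero there, so samples landing in
    # this region produce no learning signal.
    return F.relu(logits)

def evidential_loss(logits, targets, num_classes, lam=0.01):
    evidence = relu_evidence(logits)
    alpha = evidence + 1.0                      # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)  # total evidence + num_classes
    y = F.one_hot(targets, num_classes).float()
    # Expected cross-entropy under the Dirichlet, a common evidential loss.
    nll = (y * (torch.digamma(strength) - torch.digamma(alpha))).sum(dim=-1)
    # Hypothetical regularizer: a soft log-barrier that penalizes total
    # evidence below 1, pushing samples out of the zero-evidence region.
    zero_evidence_penalty = (-torch.log(evidence.sum(dim=-1) + 1e-8)).clamp(min=0.0)
    return (nll + lam * zero_evidence_penalty).mean()

# Usage on a toy batch:
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss = evidential_loss(logits, targets, num_classes=10)
```

Smooth alternatives such as softplus keep gradients nonzero but saturate toward zero evidence, which is why, per the abstract, the remedy is a regularizer rather than a different choice of activation alone.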
Related papers
- Cross-Entropy Is All You Need To Invert the Data Generating Process [29.94396019742267]
Empirical phenomena suggest that supervised models can learn interpretable factors of variation in a linear fashion.
Recent advances in self-supervised learning have shown that these methods can recover latent structures by inverting the data generating process.
We prove that even in standard classification tasks, models learn representations of ground-truth factors of variation up to a linear transformation.
arXiv Detail & Related papers (2024-10-29T09:03:57Z)
- Lean-STaR: Learning to Interleave Thinking and Proving [53.923617816215774]
We present Lean-STaR, a framework for training language models to produce informal thoughts prior to each step of a proof.
Lean-STaR achieves state-of-the-art results on the miniF2F-test benchmark within the Lean theorem proving environment.
arXiv Detail & Related papers (2024-07-14T01:43:07Z)
- Uncertainty Regularized Evidential Regression [5.874234972285304]
The Evidential Regression Network (ERN) integrates deep learning with Dempster-Shafer theory.
Specific activation functions must be employed to enforce non-negative values, a constraint that compromises model performance.
This paper provides a theoretical analysis of this limitation and introduces an improvement to overcome it (a hypothetical sketch of such a constrained regression head appears after this list).
arXiv Detail & Related papers (2024-01-03T01:18:18Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Experiments conducted on multiple datasets offer compelling empirical support for our theoretical claims.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Neuro-symbolic model for cantilever beams damage detection [0.0]
We propose a neuro-symbolic model for the detection of damage in cantilever beams, based on a novel cognitive architecture.
The hybrid discriminative model is introduced under the name Logic Convolutional Neural Regressor.
arXiv Detail & Related papers (2023-05-04T13:12:39Z)
- A Theoretical Study of Inductive Biases in Contrastive Learning [32.98250585760665]
We provide the first theoretical analysis of self-supervised learning that incorporates the effect of inductive biases originating from the model class.
We show that when the model has limited capacity, contrastive representations would recover certain special clustering structures that are compatible with the model architecture.
arXiv Detail & Related papers (2022-11-27T01:53:29Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counterexample-guided abstraction refinement (CEGAR).
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
- Learnability of Competitive Threshold Models [11.005966612053262]
We study the learnability of the competitive threshold model from a theoretical perspective.
We demonstrate how competitive threshold models can be seamlessly simulated by artificial neural networks.
arXiv Detail & Related papers (2022-05-08T01:11:51Z)
- Deep Active Learning by Leveraging Training Dynamics [57.95155565319465]
We propose a theory-driven deep active learning method (dynamicAL) which selects samples to maximize training dynamics.
We show that dynamicAL not only outperforms other baselines consistently but also scales well on large deep learning models.
arXiv Detail & Related papers (2021-10-16T16:51:05Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to craft a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
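As a companion to the Uncertainty Regularized Evidential Regression entry above, the sketch below shows how the non-negativity constraints that entry describes are typically enforced in an evidential regression head. It is a hypothetical Normal-Inverse-Gamma parameterization in PyTorch, not that paper's code: softplus maps unconstrained network outputs to the valid parameter ranges, and it is precisely this constrained mapping that the entry flags as a source of degraded performance.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialRegressionHead(nn.Module):
    """Maps features to Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta)."""

    def __init__(self, in_features: int):
        super().__init__()
        self.proj = nn.Linear(in_features, 4)

    def forward(self, h):
        gamma, raw_nu, raw_alpha, raw_beta = self.proj(h).chunk(4, dim=-1)
        nu = F.softplus(raw_nu)              # nu > 0 (virtual observations of the mean)
        alpha = F.softplus(raw_alpha) + 1.0  # alpha > 1 so predictive variance is finite
        beta = F.softplus(raw_beta)          # beta > 0 (scale of the variance prior)
        return gamma, nu, alpha, beta

# Usage: uncertainty decomposes from the NIG parameters, e.g.
# aleatoric = beta / (alpha - 1), epistemic = beta / (nu * (alpha - 1)).
head = EvidentialRegressionHead(in_features=128)
gamma, nu, alpha, beta = head(torch.randn(8, 128))
```

Because softplus is strictly positive, the head can never output exactly zero evidence, but it can saturate toward it; this mirrors the trade-off between theoretical soundness and predictive performance discussed in the main abstract.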