A Normative Model of Classifier Fusion
- URL: http://arxiv.org/abs/2106.01770v1
- Date: Thu, 3 Jun 2021 11:52:13 GMT
- Title: A Normative Model of Classifier Fusion
- Authors: Susanne Trick, Constantin A. Rothkopf
- Abstract summary: We present a hierarchical Bayesian model of probabilistic classifier fusion based on a new correlated Dirichlet distribution.
The proposed model naturally accommodates the classic Independent Opinion Pool and other independent fusion algorithms as special cases.
It is evaluated by uncertainty reduction and correctness of fusion on synthetic and real-world data sets.
- Score: 4.111899441919164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combining the outputs of multiple classifiers or experts into a single
probabilistic classification is a fundamental task in machine learning with
broad applications from classifier fusion to expert opinion pooling. Here we
present a hierarchical Bayesian model of probabilistic classifier fusion based
on a new correlated Dirichlet distribution. This distribution explicitly models
positive correlations between marginally Dirichlet-distributed random vectors
thereby allowing normative modeling of correlations between base classifiers or
experts. The proposed model naturally accommodates the classic Independent
Opinion Pool and other independent fusion algorithms as special cases. It is
evaluated by uncertainty reduction and correctness of fusion on synthetic and
real-world data sets. We show that the fused classifier's gain in performance
due to uncertainty reduction can be Bayes optimal even for highly correlated
base classifiers.
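For reference, the classic Independent Opinion Pool that the proposed model recovers as a special case fuses the base classifiers' output distributions by a normalized elementwise product (assuming conditionally independent classifiers and a uniform class prior). A minimal NumPy sketch with illustrative numbers, not code from the paper:

```python
import numpy as np

def independent_opinion_pool(class_probs):
    """Fuse base classifier outputs with the classic Independent Opinion
    Pool: the fused distribution is the normalized elementwise product of
    the classifiers' categorical output distributions.

    class_probs: shape (n_classifiers, n_classes); each row is a
    probability vector over the same set of classes.
    """
    probs = np.asarray(class_probs, dtype=float)
    # Sum of logs instead of a direct product, for numerical stability.
    log_fused = np.sum(np.log(probs + 1e-12), axis=0)
    log_fused -= log_fused.max()  # guard against underflow
    fused = np.exp(log_fused)
    return fused / fused.sum()

# Two base classifiers over three classes, both leaning toward class 0.
p1 = [0.6, 0.3, 0.1]
p2 = [0.7, 0.2, 0.1]
print(independent_opinion_pool([p1, p2]))  # ~[0.86, 0.12, 0.02]
```

Because both base classifiers lean toward the same class, the fused distribution is sharper than either input; this is the uncertainty reduction the abstract refers to.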
Related papers
- Operator-informed score matching for Markov diffusion models [8.153690483716481]
This paper argues that Markov diffusion models enjoy an advantage over other types of diffusion model, as their associated operators can be exploited to improve the training process.
We propose operator-informed score matching, a variance reduction technique that is straightforward to implement in both low- and high-dimensional diffusion modeling.
arXiv Detail & Related papers (2024-06-13T13:07:52Z)
- Obtaining Explainable Classification Models using Distributionally Robust Optimization [12.511155426574563]
We study generalized linear models constructed using sets of feature value rules.
An inherent trade-off exists between the sparsity of a rule set and its prediction accuracy.
We propose a new formulation to learn an ensemble of rule sets that simultaneously addresses these competing factors.
arXiv Detail & Related papers (2023-11-03T15:45:34Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important for forecasting nonstationary processes or processes governed by a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the resulting tessellation and approximate the multiple-hypothesis target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Rethinking Log Odds: Linear Probability Modelling and Expert Advice in Interpretable Machine Learning [8.831954614241234]
We introduce a family of interpretable machine learning models, with two broad additions: Linearised Additive Models (LAMs) and SubscaleHedge.
LAMs replace the ubiquitous logistic link function in Generalised Additive Models (GAMs), and SubscaleHedge is an expert advice algorithm for combining base models trained on subsets of features called subscales.
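As a rough sketch of the contrast (an illustration, not code from the paper): a GAM passes the additive score through a logistic link, whereas a linearised model reads the score directly on the probability scale. The clipping below is our assumption to keep the output a valid probability:

```python
import numpy as np

def gam_predict(feature_contribs):
    # Standard GAM: sum the per-feature shape-function outputs, then map
    # the additive score to a probability through the logistic link.
    z = np.sum(feature_contribs, axis=-1)
    return 1.0 / (1.0 + np.exp(-z))

def lam_predict(feature_contribs):
    # Linearised variant: treat the additive score itself as the
    # probability, clipped to [0, 1] (the clipping is an assumption here).
    z = np.sum(feature_contribs, axis=-1)
    return np.clip(z, 0.0, 1.0)
```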
arXiv Detail & Related papers (2022-11-11T17:21:57Z)
- ER: Equivariance Regularizer for Knowledge Graph Completion [107.51609402963072]
We propose a new regularizer, namely the Equivariance Regularizer (ER).
ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities.
The experimental results indicate a clear and substantial improvement over state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-24T08:18:05Z)
- Soft-margin classification of object manifolds [0.0]
A neural population responding to multiple appearances of a single object defines a manifold in the neural response space.
The ability to classify such manifolds is of interest, as object recognition and other computational tasks require a response that is insensitive to variability within a manifold.
Soft-margin classifiers are a larger class of algorithms than their hard-margin counterparts and provide an additional regularization parameter that is used in applications to optimize performance outside the training set.
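The regularization parameter in question is conventionally called C in soft-margin SVMs: it trades margin width against training-set violations. A generic scikit-learn sketch for illustration, unrelated to the paper's neural-manifold setting:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic binary classification problem for illustration only.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Small C tolerates more margin violations (stronger regularization);
# large C fits the training data more tightly. C is typically tuned on
# held-out data to optimize performance outside the training set.
for C in (0.01, 1.0, 100.0):
    scores = cross_val_score(SVC(kernel="linear", C=C), X, y, cv=5)
    print(f"C={C}: mean CV accuracy {scores.mean():.3f}")
```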
arXiv Detail & Related papers (2022-03-14T12:23:36Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
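To make the flavor of posterior fusion concrete: under a fully Gaussian mean-field assumption, fusing posteriors learned on disjoint datasets reduces to a precision-weighted combination that subtracts the shared prior so it is not counted multiple times. The sketch below assumes scalar Gaussian posteriors and illustrates generic Bayesian posterior fusion, not the paper's algorithm:

```python
import numpy as np

def fuse_gaussian_posteriors(means, variances, prior_mean, prior_var):
    """Fuse K Gaussian posteriors p(theta | D_k) that share one Gaussian
    prior: p(theta | D_1..D_K) is proportional to prod_k p(theta | D_k)
    divided by the prior raised to (K - 1), so the shared prior is not
    double-counted. Illustrative sketch only.
    """
    means = np.asarray(means, dtype=float)
    precs = 1.0 / np.asarray(variances, dtype=float)
    prior_prec = 1.0 / prior_var
    K = len(means)
    fused_prec = precs.sum() - (K - 1) * prior_prec
    fused_mean = (precs @ means - (K - 1) * prior_prec * prior_mean) / fused_prec
    return fused_mean, 1.0 / fused_prec

# Two posteriors over the same scalar parameter, broad N(0, 10) prior.
mean, var = fuse_gaussian_posteriors([1.0, 1.4], [0.5, 0.8], 0.0, 10.0)
print(mean, var)  # the fused estimate is tighter than either posterior
```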
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.