The Infinitesimal Jackknife and Combinations of Models
- URL: http://arxiv.org/abs/2209.00147v1
- Date: Wed, 31 Aug 2022 22:37:44 GMT
- Title: The Infinitesimal Jackknife and Combinations of Models
- Authors: Indrayudh Ghosal, Yunzhe Zhou and Giles Hooker
- Abstract summary: We extend the Infinitesimal Jackknife to estimate the covariance between any two models.
This can be used to quantify uncertainty for combinations of models, or to construct test statistics for comparing different models.
- Score: 2.457924087844968
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Infinitesimal Jackknife is a general method for estimating variances of
parametric models, and more recently also for some ensemble methods. In this
paper we extend the Infinitesimal Jackknife to estimate the covariance between
any two models. This can be used to quantify uncertainty for combinations of
models, or to construct test statistics for comparing different models or
ensembles of models fitted using the same training dataset. Specific examples
in this paper use boosted combinations of models like random forests and
M-estimators. We also investigate its application to neural networks and
ensembles of XGBoost models. We illustrate the efficacy of these variance
estimates through extensive simulations and an application to the Beijing
Housing data, and we demonstrate the theoretical consistency of the
Infinitesimal Jackknife covariance estimate.
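As a concrete illustration of the idea (not the authors' implementation, which targets ensembles such as random forests and boosted models), the minimal Python sketch below treats each fitted model as a smooth function of observation weights, approximates the Infinitesimal Jackknife directional derivatives by finite differences, and forms the covariance as a scaled sum of products of those derivatives. The function names, the toy models, the query point x0, and the 1/n^2 scaling convention are all assumptions of this sketch.

```python
import numpy as np

def ij_cov(stat1, stat2, data, eps=1e-6):
    """Numerical Infinitesimal Jackknife covariance between two statistics.

    stat1/stat2 map (data, w) -> float, where w are observation weights
    summing to one. Directional derivatives with respect to each weight
    are approximated by finite differences; the covariance estimate is
    the sum of their products, scaled by 1/n^2 under the w0 = 1/n
    convention used here (other conventions rescale).
    """
    n = len(data[0])
    w0 = np.full(n, 1.0 / n)
    b1, b2 = stat1(data, w0), stat2(data, w0)
    d1, d2 = np.empty(n), np.empty(n)
    for i in range(n):
        w = w0.copy()
        w[i] += eps                        # infinitesimally up-weight obs. i
        d1[i] = (stat1(data, w) - b1) / eps
        d2[i] = (stat2(data, w) - b2) / eps
    return float(d1 @ d2) / n**2

# Two models fitted on the same sample: an intercept-only model and a
# weighted least-squares line, both predicting at the same query point.
def mean_model(data, w):
    _, y = data
    return np.sum(w * y) / np.sum(w)

def ols_model(data, w, x0=0.9):            # x0 is an arbitrary query point
    x, y = data
    X = np.column_stack([np.ones_like(x), x])
    XtW = X.T * w                           # X^T diag(w)
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta[0] + beta[1] * x0

rng = np.random.default_rng(0)
x = rng.uniform(size=300)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=300)
data = (x, y)

v1  = ij_cov(mean_model, mean_model, data)  # IJ variance, model 1
v2  = ij_cov(ols_model, ols_model, data)    # IJ variance, model 2
c12 = ij_cov(mean_model, ols_model, data)   # IJ covariance between models

# z-type statistic for the difference of the two predictions, using
# Var(T1 - T2) = Var(T1) + Var(T2) - 2 Cov(T1, T2).
w_hat = np.full(300, 1.0 / 300)
diff = mean_model(data, w_hat) - ols_model(data, w_hat)
print("z =", diff / np.sqrt(v1 + v2 - 2.0 * c12))
```

For non-smooth ensembles such as random forests, closed-form directional derivatives based on bootstrap inclusion counts are typically used in place of finite differences, as in earlier Infinitesimal Jackknife work.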
Related papers
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM), which can be viewed as a gradient boosting algorithm combined with score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z)
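For context on the score-matching ingredient named above, a generic denoising score matching objective reads as follows; this is the standard formulation, not necessarily the paper's exact SSM loss:

```latex
% Denoising score matching: s_theta estimates the score of the noised data
% distribution, with noise kernel q_sigma(\tilde{x} | x).
\[
  \mathcal{L}(\theta) \;=\;
  \mathbb{E}_{x \sim p_{\mathrm{data}},\; \tilde{x} \sim q_\sigma(\tilde{x} \mid x)}
  \Big[ \big\| s_\theta(\tilde{x}) - \nabla_{\tilde{x}} \log q_\sigma(\tilde{x} \mid x) \big\|_2^2 \Big].
\]
% For Gaussian noise, the target gradient is (x - \tilde{x}) / \sigma^2.
```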
- Fusion of Gaussian Processes Predictions with Monte Carlo Sampling [61.31380086717422]
In science and engineering, we often work with models designed for accurate prediction of variables of interest.
Recognizing that these models are approximations of reality, it becomes desirable to apply multiple models to the same data and integrate their outcomes.
arXiv Detail & Related papers (2024-03-03T04:21:21Z)
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
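The kernel density setting above admits a compact simulation. The sketch below is an illustration under stated assumptions (1-D Gaussian ground truth, SciPy's gaussian_kde, a fixed 50/50 real/synthetic mix), not the paper's experimental setup: each generation refits a KDE on a mix of real data and samples from the previous generation, and the printed spread shows how error can propagate or be damped by mixed-data training.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
real = rng.normal(size=500)              # ground-truth sample

data = real
for gen in range(5):
    kde = gaussian_kde(data)             # fit generation `gen`
    synth = kde.resample(500)[0]         # sample from the fitted model
    # Mixed-data training: half real, half self-generated (arbitrary split).
    data = np.concatenate([rng.choice(real, 250), synth[:250]])
    print(f"generation {gen}: std = {data.std():.3f}")
```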
- Statistical inference for pairwise comparison models [5.487882744996216]
This paper establishes near-optimal asymptotic normality for the maximum likelihood estimator in a broad class of pairwise comparison models.
The key idea lies in identifying the Fisher information matrix as a weighted graph Laplacian, which can be studied via a meticulous spectral analysis.
arXiv Detail & Related papers (2024-01-16T16:14:09Z)
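The Fisher-information-as-Laplacian identity can be written out for the Bradley-Terry model, one standard pairwise comparison model (the paper's class is broader; choosing this instance is an assumption of the sketch):

```latex
% Bradley-Terry: p_{ij} = P(i beats j) = e^{\theta_i} / (e^{\theta_i} + e^{\theta_j}).
% With n_{ij} comparisons between items i and j, the Fisher information is
\[
  I(\theta) \;=\; \sum_{i<j} n_{ij}\, p_{ij} (1 - p_{ij})\,
                  (e_i - e_j)(e_i - e_j)^{\top},
\]
% i.e. the Laplacian of the comparison graph with edge weights
% w_{ij} = n_{ij} p_{ij} (1 - p_{ij}).
```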
- Local Bayesian Dirichlet mixing of imperfect models [0.0]
We study the ability of Bayesian model averaging and mixing techniques to mine nuclear mass data.
We show that the global and local mixtures of models reach excellent performance on both prediction accuracy and uncertainty quantification.
arXiv Detail & Related papers (2023-11-02T21:02:40Z)
- Bayesian Additive Main Effects and Multiplicative Interaction Models using Tensor Regression for Multi-environmental Trials [0.0]
We propose a Bayesian tensor regression model to accommodate the effect of multiple factors on phenotype prediction.
We adopt a set of prior distributions that resolve identifiability issues that may arise between the parameters in the model.
We explore the applicability of our model by analysing real-world data related to wheat production across Ireland from 2010 to 2019.
arXiv Detail & Related papers (2023-01-09T19:54:50Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
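The mutual-information formulation mentioned above rests on a standard identity, written here generically rather than in the paper's notation: the likelihood-to-evidence ratio equals the joint-to-marginals density ratio, and its expected log under the joint is the mutual information between parameters and data.

```latex
\[
  r(x, \theta) \;=\; \frac{p(x \mid \theta)}{p(x)}
             \;=\; \frac{p(x, \theta)}{p(x)\, p(\theta)},
  \qquad
  I(\theta; x) \;=\; \mathbb{E}_{p(x, \theta)}\big[\log r(x, \theta)\big].
\]
% A classifier trained to distinguish joint draws (x, theta) from products
% of the marginals recovers r through its odds, giving an amortized
% likelihood-to-evidence ratio estimator.
```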
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Semi-nonparametric Latent Class Choice Model with a Flexible Class Membership Component: A Mixture Model Approach [6.509758931804479]
The proposed model formulates the latent classes using mixture models as an alternative approach to the traditional random utility specification.
Results show that mixture models improve the overall performance of latent class choice models.
arXiv Detail & Related papers (2020-07-06T13:19:26Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated using a layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
- Amortized Bayesian model comparison with evidential deep learning [0.12314765641075436]
We propose a novel method for performing Bayesian model comparison using specialized deep learning architectures.
Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset.
We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work.
arXiv Detail & Related papers (2020-04-22T15:15:46Z)
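In outline, evidential model comparison parameterizes a Dirichlet distribution over model probabilities; the sketch below is a generic formulation consistent with the summary above, with the network architecture and training loss left unspecified:

```latex
% A network maps an observed dataset x to Dirichlet concentrations over
% the J candidate models; the Dirichlet mean is the model posterior:
\[
  \alpha(x) \in \mathbb{R}_{>0}^{J},
  \qquad
  \widehat{p}(M_j \mid x) \;=\; \frac{\alpha_j(x)}{\sum_{k=1}^{J} \alpha_k(x)},
\]
% with the total concentration \sum_k \alpha_k(x) acting as a measure of
% evidence, so low totals flag uncertain comparisons.
```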
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.