On Quantification of Borrowing of Information in Hierarchical Bayesian Models
- URL: http://arxiv.org/abs/2509.17301v1
- Date: Mon, 22 Sep 2025 00:48:27 GMT
- Title: On Quantification of Borrowing of Information in Hierarchical Bayesian Models
- Authors: Prasenjit Ghosh, Anirban Bhattacharya, Debdeep Pati
- Abstract summary: We investigate the role of shared hyperparameters in a hierarchical Bayesian model. We show that the model with a deeper hierarchy tends to outperform the nested model. These findings have significant implications for Bayesian modeling.
- Score: 2.842028685390757
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we offer a thorough analytical investigation into the role of shared hyperparameters in a hierarchical Bayesian model, examining their impact on information borrowing and posterior inference. Our approach is rooted in a non-asymptotic framework, where observations are drawn from a mixed-effects model, and a Gaussian distribution is assumed for the true effect generator. We consider a nested hierarchical prior distribution model to capture these effects and use the posterior means for Bayesian estimation. To quantify the effect of information borrowing, we propose an integrated risk measure relative to the true data-generating distribution. Our analysis reveals that the Bayes estimator for the model with a deeper hierarchy performs better, provided that the unknown random effects are correlated through a compound symmetric structure. Our work also identifies necessary and sufficient conditions for this model to outperform the one nested within it. We further obtain sufficient conditions when the correlation is perturbed. Our study suggests that the model with a deeper hierarchy tends to outperform the nested model unless the true data-generating distribution favors sufficiently independent groups. These findings have significant implications for Bayesian modeling, and we believe they will be of interest to researchers across a wide range of fields.
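The borrowing mechanism the abstract describes can be illustrated with a small simulation: group effects drawn from a compound-symmetric covariance, then estimated either independently (no borrowing) or with hierarchical shrinkage toward the grand mean. This is only a sketch under assumed Gaussian forms; the constants, the shrinkage weight, and the Monte Carlo risk proxy are illustrative, not the paper's actual estimators or integrated risk measure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: k groups, n observations each. True effects theta are
# drawn with compound-symmetric covariance Sigma = tau2*((1-rho)*I + rho*J).
k, n = 5, 20
tau2, rho, sigma2 = 1.0, 0.5, 1.0
Sigma = tau2 * ((1 - rho) * np.eye(k) + rho * np.ones((k, k)))
theta = rng.multivariate_normal(np.zeros(k), Sigma)
y = theta[:, None] + rng.normal(0.0, np.sqrt(sigma2), size=(k, n))
ybar = y.mean(axis=1)

# Estimator 1: no borrowing -- each group's own sample mean.
est_indep = ybar

# Estimator 2: partial pooling -- posterior mean under an exchangeable
# Gaussian prior, shrinking each group mean toward the grand mean.
w = tau2 / (tau2 + sigma2 / n)  # shrinkage weight in (0, 1)
est_hier = w * ybar + (1 - w) * ybar.mean()

# Monte Carlo proxy for risk: average squared error against the true effects.
risk_indep = np.mean((est_indep - theta) ** 2)
risk_hier = np.mean((est_hier - theta) ** 2)
```

Whether the pooled estimator wins depends on how correlated the true effects are, which mirrors the paper's finding that a deeper hierarchy helps unless the data-generating distribution favors sufficiently independent groups.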
Related papers
- A Likelihood Based Approach to Distribution Regression Using Conditional Deep Generative Models [6.647819824559201]
We study the large-sample properties of a likelihood-based approach for estimating conditional deep generative models.
Our results lead to the convergence rate of a sieve maximum likelihood estimator for estimating the conditional distribution.
arXiv Detail & Related papers (2024-10-02T20:46:21Z)
- Estimating Causal Effects from Learned Causal Networks [56.14597641617531]
We propose an alternative paradigm for answering causal-effect queries over discrete observable variables.
We learn the causal Bayesian network and its confounding latent variables directly from the observational data.
We show that this "model completion" learning approach can be more effective than estimand approaches.
arXiv Detail & Related papers (2024-08-26T08:39:09Z)
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Robust Estimation of Causal Heteroscedastic Noise Models [7.568978862189266]
Student's $t$-distribution is known for its robustness in accounting for sampling variability with smaller sample sizes and extreme values without significantly altering the overall distribution shape.
Our empirical evaluations demonstrate that our estimators are more robust and achieve better overall performance across synthetic and real benchmarks.
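The robustness property claimed for the Student's $t$-distribution can be seen in a toy comparison of location estimates; this is a generic illustration (not the paper's estimator), using a standard iteratively reweighted scheme for the $t$ likelihood with illustrative data and a fixed scale.

```python
import numpy as np

# Toy data: a tight bulk plus two gross outliers.
data = np.array([0.1, -0.2, 0.05, 0.3, -0.1, 8.0, 9.5])

mean_est = data.mean()  # Gaussian MLE for location: pulled by the outliers

def t_location(x, nu=3.0, scale=1.0, iters=50):
    """Location estimate under a Student-t likelihood via IRLS:
    points far from the current estimate receive small weights."""
    mu = np.median(x)
    for _ in range(iters):
        w = (nu + 1) / (nu + ((x - mu) / scale) ** 2)  # t-likelihood weights
        mu = np.sum(w * x) / np.sum(w)
    return mu

t_est = t_location(data)
# t_est stays near the bulk of the data, while mean_est is dragged
# toward the extreme values -- the downweighting the summary alludes to.
```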
arXiv Detail & Related papers (2023-12-15T02:26:35Z)
- Bivariate Causal Discovery using Bayesian Model Selection [11.726586969589]
We show how to incorporate causal assumptions within the Bayesian framework.
This enables us to construct models with realistic assumptions.
We then outperform previous methods on a wide range of benchmark datasets.
arXiv Detail & Related papers (2023-06-05T14:51:05Z)
- Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z)
- Bayesian Additive Main Effects and Multiplicative Interaction Models using Tensor Regression for Multi-environmental Trials [0.0]
We propose a Bayesian tensor regression model to accommodate the effect of multiple factors on phenotype prediction. We adopt a set of prior distributions that resolve identifiability issues that may arise between the parameters in the model. We also incorporate a spike-and-slab structure that identifies which interactions are relevant for inclusion in the linear predictor.
arXiv Detail & Related papers (2023-01-09T19:54:50Z)
- Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization [89.73665256847858]
We show that out-of-distribution performance is strongly correlated with in-distribution performance for a wide range of models and distribution shifts.
Specifically, we demonstrate strong correlations between in-distribution and out-of-distribution performance on variants of CIFAR-10 & ImageNet.
We also investigate cases where the correlation is weaker, for instance some synthetic distribution shifts from CIFAR-10-C and the tissue classification dataset Camelyon17-WILDS.
arXiv Detail & Related papers (2021-07-09T19:48:23Z)
- A Twin Neural Model for Uplift [59.38563723706796]
Uplift is a particular case of conditional treatment effect modeling.
We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk.
We show our proposed method is competitive with the state-of-the-art in simulation setting and on real data from large scale randomized experiments.
arXiv Detail & Related papers (2021-05-11T16:02:39Z)
- Bayesian Model Averaging for Causality Estimation and its Approximation based on Gaussian Scale Mixture Distributions [0.0]
We first show from a Bayesian perspective that it is Bayes optimal to weight (average) the causal effects estimated under each model.
We develop an approximation to the Bayes optimal estimator by using Gaussian scale mixture distributions.
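The model-averaging idea summarized here, weighting per-model effect estimates by posterior model probabilities, can be sketched in a few lines; the log marginal likelihoods, prior, and effect values below are made-up numbers for illustration only.

```python
import numpy as np

# Illustrative inputs for three candidate causal models M_k.
log_marglik = np.array([-105.2, -103.8, -107.1])  # log p(data | M_k), made up
prior = np.array([1 / 3, 1 / 3, 1 / 3])           # uniform model prior
effects = np.array([0.40, 0.55, 0.10])            # per-model effect estimates

# Posterior model probabilities p(M_k | data) ∝ p(data | M_k) p(M_k),
# normalized in log space for numerical stability.
logw = log_marglik + np.log(prior)
w = np.exp(logw - logw.max())
w /= w.sum()

# The model-averaged effect is the probability-weighted mean.
bma_effect = np.sum(w * effects)
```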
arXiv Detail & Related papers (2021-03-15T08:07:58Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.