Bayesian Prognostic Covariate Adjustment With Additive Mixture Priors
- URL: http://arxiv.org/abs/2310.18027v4
- Date: Wed, 28 Feb 2024 18:57:15 GMT
- Title: Bayesian Prognostic Covariate Adjustment With Additive Mixture Priors
- Authors: Alyssa M. Vanderbeek, Arman Sabbaghi, Jon R. Walsh, and Charles K. Fisher
- Abstract summary: We propose a new Bayesian prognostic covariate adjustment methodology, referred to as Bayesian PROCOVA.
It is based on generative artificial intelligence (AI) algorithms that construct a digital twin generator (DTG) for RCT participants.
The DTG is trained on historical control data and yields a digital twin (DT) probability distribution for each RCT participant's outcome under the control treatment.
We establish an efficient Gibbs algorithm for sampling from the posterior distribution, and derive closed-form expressions for the posterior mean and variance of the treatment effect parameter conditional on the weight.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective and rapid decision-making from randomized controlled trials (RCTs)
requires unbiased and precise treatment effect inferences. Two strategies to
address this requirement are to adjust for covariates that are highly
correlated with the outcome, and to leverage historical control information via
Bayes' theorem. We propose a new Bayesian prognostic covariate adjustment
methodology, referred to as Bayesian PROCOVA, that combines these two
strategies. Covariate adjustment in Bayesian PROCOVA is based on generative
artificial intelligence (AI) algorithms that construct a digital twin generator
(DTG) for RCT participants. The DTG is trained on historical control data and
yields a digital twin (DT) probability distribution for each RCT participant's
outcome under the control treatment. The expectation of the DT distribution,
referred to as the prognostic score, defines the covariate for adjustment.
Historical control information is leveraged via an additive mixture prior with
two components: an informative prior probability distribution specified based
on historical control data, and a weakly informative prior distribution. The
mixture weight determines the extent to which posterior inferences are drawn
from the informative component, versus the weakly informative component. This
weight has a prior distribution as well, and so the entire additive mixture
prior is completely pre-specifiable without involving any RCT information. We
establish an efficient Gibbs algorithm for sampling from the posterior
distribution, and derive closed-form expressions for the posterior mean and
variance of the treatment effect parameter conditional on the weight, in
Bayesian PROCOVA. We evaluate efficiency gains of Bayesian PROCOVA via its bias
control and variance reduction compared to frequentist PROCOVA in simulation
studies that encompass different discrepancies between the historical control
data and the RCT data. These gains translate to smaller RCTs.
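To make the prior construction concrete, here is a minimal, self-contained sketch of the kind of blocked Gibbs sampler the abstract describes, for a toy linear outcome model with a two-component additive mixture prior on the regression coefficients. The simulated data, the prior hyperparameters, the known noise variance, and all variable names are illustrative assumptions, not the paper's exact specification.

```python
# Toy Bayesian PROCOVA-style sketch (illustrative assumptions throughout):
# linear outcome model, known noise variance, conjugate normal components.
import numpy as np

rng = np.random.default_rng(0)

# Simulated RCT: outcome = intercept + score effect + treatment effect + noise
n = 200
m = rng.normal(size=n)                         # prognostic scores (from a DTG)
w = rng.integers(0, 2, size=n).astype(float)   # randomized treatment indicator
X = np.column_stack([np.ones(n), m, w])        # design matrix [1, m, w]
sigma2 = 1.0                                   # noise variance, assumed known
y = X @ np.array([1.0, 0.8, 0.5]) + rng.normal(scale=np.sqrt(sigma2), size=n)

# Additive mixture prior on the coefficients: an informative component
# (standing in for one derived from historical controls) plus a weakly
# informative component, with a Beta prior on the mixture weight.
mu_inf, V_inf = np.array([1.0, 1.0, 0.0]), 0.1 * np.eye(3)
mu_wk, V_wk = np.zeros(3), 100.0 * np.eye(3)
a_w, b_w = 1.0, 1.0                            # Beta(a_w, b_w) prior on weight

def draw_beta(mu0, V0):
    # Conjugate normal posterior for the coefficients under one component.
    V0_inv = np.linalg.inv(V0)
    Vn = np.linalg.inv(V0_inv + X.T @ X / sigma2)
    return rng.multivariate_normal(Vn @ (V0_inv @ mu0 + X.T @ y / sigma2), Vn)

def log_evidence(mu0, V0):
    # Log marginal likelihood of y under one component (beta integrated out).
    S = X @ V0 @ X.T + sigma2 * np.eye(n)
    r = y - X @ mu0
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (logdet + r @ np.linalg.solve(S, r) + n * np.log(2 * np.pi))

le_inf, le_wk = log_evidence(mu_inf, V_inf), log_evidence(mu_wk, V_wk)

lam, taus = 0.5, []
for _ in range(2000):
    # z | lam, y: pick the component via its prior weight times its evidence.
    z = rng.random() < 1.0 / (1.0 + np.exp(np.log(1 - lam) + le_wk
                                           - np.log(lam) - le_inf))
    # beta | z, y: conjugate draw from the selected component's posterior.
    beta = draw_beta(mu_inf, V_inf) if z else draw_beta(mu_wk, V_wk)
    # lam | z: Beta-Bernoulli update of the mixture weight.
    lam = rng.beta(a_w + z, b_w + (1 - z))
    taus.append(beta[2])                       # treatment effect draws

taus = np.array(taus[500:])                    # drop burn-in
print(f"posterior treatment effect: {taus.mean():.3f} +/- {taus.std():.3f}")
```

Conditional on the selected component, the coefficient posterior is conjugate normal, which is what makes closed-form expressions for the posterior mean and variance of the treatment effect (given the weight) available.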
Related papers
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over meta-analysis-based methods as heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
- TIC-TAC: A Framework for Improved Covariance Estimation in Deep Heteroscedastic Regression [109.69084997173196]
Deep heteroscedastic regression involves jointly optimizing the mean and covariance of the predicted distribution using the negative log-likelihood.
Recent works show that this may result in sub-optimal convergence due to the challenges associated with covariance estimation.
We study two questions: (1) Does the predicted covariance truly capture the randomness of the predicted mean? (2) In the absence of supervision, how can we quantify the accuracy of covariance estimation?
Our results show that not only does TIC accurately learn the covariance, it additionally facilitates an improved convergence of the negative log-likelihood.
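As background for this entry, the generic heteroscedastic setup it refers to trains a network to output both a mean and a (log-)variance and minimizes the Gaussian negative log-likelihood. The sketch below shows that baseline, not the paper's TIC estimator; the architecture and all names are illustrative.

```python
import torch

def gaussian_nll(mu, log_var, y):
    # Per-batch Gaussian NLL (up to a constant); predicting the
    # log-variance keeps the variance positive and training stable.
    return 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()

class HeteroscedasticNet(torch.nn.Module):
    # Two heads on a shared body: one for the mean, one for the log-variance.
    def __init__(self, d_in, d_hidden=64):
        super().__init__()
        self.body = torch.nn.Sequential(
            torch.nn.Linear(d_in, d_hidden), torch.nn.ReLU())
        self.mu_head = torch.nn.Linear(d_hidden, 1)
        self.log_var_head = torch.nn.Linear(d_hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mu_head(h), self.log_var_head(h)

net = HeteroscedasticNet(d_in=3)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x, y = torch.randn(256, 3), torch.randn(256, 1)   # dummy batch
mu, log_var = net(x)
loss = gaussian_nll(mu, log_var, y)
opt.zero_grad(); loss.backward(); opt.step()
```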
arXiv Detail & Related papers (2023-10-29T09:54:03Z)
- A Weighted Prognostic Covariate Adjustment Method for Efficient and Powerful Treatment Effect Inferences in Randomized Controlled Trials [0.28087862620958753]
A crucial task for a randomized controlled trial (RCT) is to specify a statistical method that can yield an efficient estimator and powerful test for the treatment effect.
Training a generative AI algorithm on historical control data enables one to construct a digital twin generator (DTG) for RCT participants.
The DTG generates a probability distribution for each RCT participant's potential control outcome.
arXiv Detail & Related papers (2023-09-25T16:14:13Z)
- A Bayesian Semiparametric Method For Estimating Causal Quantile Effects [1.1118668841431563]
We propose a semiparametric conditional distribution regression model that allows inference on any functionals of counterfactual distributions.
We show via simulations that the use of double balancing score for confounding adjustment improves performance over adjusting for any single score alone.
We apply the proposed method to the North Carolina birth weight dataset to analyze the effect of maternal smoking on infant's birth weight.
arXiv Detail & Related papers (2022-11-03T05:15:18Z)
- Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification [74.62203971625173]
Imbalanced data pose challenges for deep learning-based classification models.
One of the most widely-used approaches for tackling imbalanced data is re-weighting.
We propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view.
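For contrast with the paper's OT-based scheme, the most common re-weighting baseline simply scales the loss by inverse class frequency. A minimal sketch of that baseline (with made-up class counts) follows.

```python
import torch

counts = torch.tensor([900.0, 90.0, 10.0])      # made-up class frequencies

# Inverse-frequency weights, normalized so they average to one.
weights = counts.sum() / (len(counts) * counts)

# PyTorch's cross-entropy accepts per-class weights directly.
criterion = torch.nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 3, requires_grad=True)  # dummy model outputs
labels = torch.randint(0, 3, (8,))
loss = criterion(logits, labels)
loss.backward()                                 # rare classes get larger gradients
```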
arXiv Detail & Related papers (2022-08-05T01:23:54Z)
- Sample-Efficient Optimisation with Probabilistic Transformer Surrogates [66.98962321504085]
This paper investigates the feasibility of employing state-of-the-art probabilistic transformers in Bayesian optimisation.
We observe two drawbacks stemming from their training procedure and loss definition, hindering their direct deployment as proxies in black-box optimisation.
We introduce two components: 1) a BO-tailored training prior supporting non-uniformly distributed points, and 2) a novel approximate posterior regulariser that trades off accuracy and input sensitivity to filter favourable stationary points for improved predictive performance.
arXiv Detail & Related papers (2022-05-27T11:13:17Z)
- Bayesian Active Learning with Fully Bayesian Gaussian Processes [0.0]
In active learning, where labeled data is scarce or difficult to obtain, neglecting the bias-variance trade-off can cause inefficient querying.
We show that incorporating the bias-variance trade-off in the acquisition functions mitigates unnecessary and expensive data labeling.
arXiv Detail & Related papers (2022-05-20T13:52:04Z)
- Bayes in Wonderland! Predictive supervised classification inference hits unpredictability [1.8814209805277506]
We show the convergence of the sBpc and mBpc under de Finetti-type exchangeability.
We also provide a parameter estimation of the generative model giving rise to the partition exchangeable sequence.
arXiv Detail & Related papers (2021-12-03T12:34:52Z)
- AdaPT-GMM: Powerful and robust covariate-assisted multiple testing [0.7614628596146599]
We propose a new empirical Bayes method for covariate-assisted multiple testing with false discovery rate (FDR) control.
Our method refines the adaptive p-value thresholding (AdaPT) procedure by generalizing its masking scheme.
We show in extensive simulations and real data examples that our new method, which we call AdaPT-GMM, consistently delivers high power.
arXiv Detail & Related papers (2021-06-30T05:06:18Z)
- Increasing the efficiency of randomized trial estimates via linear adjustment for a prognostic score [59.75318183140857]
Estimating causal effects from randomized experiments is central to clinical research.
Most methods for historical borrowing achieve reductions in variance by sacrificing strict type-I error rate control.
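The entry's core idea, adjusting a randomized-trial estimate linearly for a prognostic score, can be sketched in a few lines; the data-generating process and all numbers below are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated trial with a prognostic score correlated with the outcome.
n = 500
score = rng.normal(size=n)                       # prognostic score (e.g. from a model)
treat = rng.integers(0, 2, size=n).astype(float)
y = 0.3 * treat + 0.9 * score + rng.normal(scale=0.5, size=n)

# Unadjusted: regress the outcome on treatment alone.
unadj = sm.OLS(y, sm.add_constant(treat)).fit()

# Adjusted: add the prognostic score as a covariate.
adj = sm.OLS(y, sm.add_constant(np.column_stack([treat, score]))).fit()

print("unadjusted treatment-effect SE:", unadj.bse[1])
print("adjusted treatment-effect SE:  ", adj.bse[1])  # typically much smaller
```

Because assignment is randomized, adding the score leaves the estimate unbiased while shrinking its variance, which is the efficiency gain the entry refers to.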
arXiv Detail & Related papers (2020-12-17T21:10:10Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We propose a method, called prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
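A common PyTorch recipe for prediction-time batch normalization is to flip only the BatchNorm layers back into training mode at test time, so each test batch is normalized with its own statistics rather than the stored running averages. The sketch below assumes that recipe matches the entry's method in spirit.

```python
import torch

def enable_prediction_time_bn(model):
    # Keep the model in eval mode except for BatchNorm layers, which are
    # switched to train mode so they normalize with current-batch statistics.
    # Note: train-mode BN also updates running stats as a side effect.
    model.eval()
    for module in model.modules():
        if isinstance(module, (torch.nn.BatchNorm1d,
                               torch.nn.BatchNorm2d,
                               torch.nn.BatchNorm3d)):
            module.train()
    return model

model = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.BatchNorm1d(32),
    torch.nn.ReLU(), torch.nn.Linear(32, 2))

enable_prediction_time_bn(model)
with torch.no_grad():                          # inference only
    shifted_batch = torch.randn(64, 16) + 2.0  # a covariate-shifted test batch
    logits = model(shifted_batch)
```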
arXiv Detail & Related papers (2020-06-19T05:08:43Z)