Machine Learning Assisted Adjustment Boosts Efficiency of Exact Inference in Randomized Controlled Trials
- URL: http://arxiv.org/abs/2403.03058v2
- Date: Mon, 22 Jul 2024 17:57:56 GMT
- Title: Machine Learning Assisted Adjustment Boosts Efficiency of Exact Inference in Randomized Controlled Trials
- Authors: Han Yu, Alan D. Hutson, Xiaoyi Ma
- Abstract summary: We show the proposed method can robustly control the type I error and can boost the statistical efficiency for a randomized controlled trial (RCT).
Its application may remarkably reduce the required sample size and cost of RCTs, such as phase III clinical trials.
- Score: 12.682443719767763
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose a novel inferential procedure assisted by machine-learning-based adjustment for randomized controlled trials. The method is developed under Rosenbaum's framework of exact tests in randomized experiments with covariate adjustment. Through extensive simulation experiments, we show that the proposed method robustly controls the type I error and boosts the statistical efficiency of a randomized controlled trial (RCT). This advantage is further demonstrated in a real-world example. The simplicity, flexibility, and robustness of the proposed method make it a competitive candidate as a routine inference procedure for RCTs, especially when nonlinear associations or interactions among covariates are expected. Its application may remarkably reduce the required sample size and cost of RCTs, such as phase III clinical trials.
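The abstract describes an exact (randomization-based) test with a machine-learning adjustment step in Rosenbaum's style. The sketch below is illustrative only, not the authors' exact algorithm: a model (here an arbitrary choice, a random forest) is fit to predict the outcome from covariates alone, and a permutation test is then run on the residuals. The function name, model choice, and test statistic are all assumptions for illustration.

```python
# Illustrative sketch (NOT the paper's exact procedure): a Rosenbaum-style
# exact test where an ML model removes covariate-explained variation
# before a permutation test on the residuals.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ml_adjusted_permutation_test(y, X, z, n_perm=2000, seed=0):
    """y: outcomes, X: covariate matrix, z: 0/1 treatment labels."""
    rng = np.random.default_rng(seed)
    # Fit the adjustment model on covariates only; the treatment label is
    # never shown to the model, so residualization ignores the randomization.
    model = RandomForestRegressor(n_estimators=200, random_state=seed)
    model.fit(X, y)
    resid = y - model.predict(X)

    # Test statistic: difference in mean residuals between arms.
    def stat(labels):
        return resid[labels == 1].mean() - resid[labels == 0].mean()

    observed = stat(z)
    # Randomization distribution under the sharp null of no treatment effect:
    # permuting labels preserves the number of treated and control units.
    perm_stats = np.array([stat(rng.permutation(z)) for _ in range(n_perm)])
    p_value = (1 + np.sum(np.abs(perm_stats) >= abs(observed))) / (1 + n_perm)
    return observed, p_value
```

Because the reference distribution comes from the randomization itself, the p-value is exact regardless of how well (or badly) the ML model fits; a good fit shrinks the residual variance and so increases power.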
Related papers
- Adaptive Experimentation When You Can't Experiment [55.86593195947978]
This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem.
Online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment.
arXiv Detail & Related papers (2024-06-15T20:54:48Z) - Adaptive Instrument Design for Indirect Experiments [48.815194906471405]
Unlike RCTs, indirect experiments estimate treatment effects by leveraging conditional instrumental variables.
In this paper we take the initial steps towards enhancing sample efficiency for indirect experiments by adaptively designing a data collection policy.
Our main contribution is a practical computational procedure that utilizes influence functions to search for an optimal data collection policy.
arXiv Detail & Related papers (2023-12-05T02:38:04Z) - A Weighted Prognostic Covariate Adjustment Method for Efficient and Powerful Treatment Effect Inferences in Randomized Controlled Trials [0.28087862620958753]
A crucial task for a randomized controlled trial (RCT) is to specify a statistical method that can yield an efficient estimator and powerful test for the treatment effect.
Training a generative AI algorithm on historical control data enables one to construct a digital twin generator (DTG) for RCT participants.
The DTG generates a probability distribution for each RCT participant's potential control outcome.
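The summary above describes prognostic covariate adjustment in general terms. The following is a minimal sketch of that general idea only, not the paper's weighted method or its generative digital twin generator: a prognostic model is trained on historical controls, and its prediction is used as a single covariate in an ANCOVA-style trial analysis. The function name and model choices are assumptions for illustration.

```python
# Minimal sketch of prognostic covariate adjustment (the general idea,
# NOT the paper's specific weighted method or its generative DTG).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

def prognostic_adjusted_estimate(X_hist, y_hist, X_trial, y_trial, z_trial):
    # 1. Train a prognostic model on historical control data only.
    prog = GradientBoostingRegressor(random_state=0).fit(X_hist, y_hist)
    # 2. Predicted control outcome ("prognostic score") for trial subjects.
    m = prog.predict(X_trial)
    # 3. ANCOVA-style model: outcome ~ treatment + prognostic score.
    design = np.column_stack([z_trial, m])
    fit = LinearRegression().fit(design, y_trial)
    return float(fit.coef_[0])  # adjusted treatment-effect estimate
```

Because the prognostic model is trained entirely outside the trial, it soaks up outcome variance without introducing dependence on the trial's randomization.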
arXiv Detail & Related papers (2023-09-25T16:14:13Z) - Task-specific experimental design for treatment effect estimation [59.879567967089145]
Large randomised controlled trials (RCTs) are the standard for causal inference.
Recent work has proposed more sample-efficient alternatives to RCTs, but these are not adaptable to the downstream application for which the causal effect is sought.
We develop a task-specific approach to experimental design and derive sampling strategies customised to particular downstream applications.
arXiv Detail & Related papers (2023-06-08T18:10:37Z) - A Causal Inference Framework for Leveraging External Controls in Hybrid Trials [1.7942265700058988]
We consider the challenges associated with causal inference in settings where data from a randomized trial is augmented with control data from an external source.
We propose estimators, review efficiency bounds, and an approach for efficient doubly-robust estimation.
We apply the framework to a trial investigating the effect of risdiplam on motor function in patients with spinal muscular atrophy.
arXiv Detail & Related papers (2023-05-15T19:15:32Z) - Improved Policy Evaluation for Randomized Trials of Algorithmic Resource Allocation [54.72195809248172]
We present a new estimator based on a novel concept: retrospective reshuffling of participants across experimental arms at the end of an RCT.
We prove theoretically that such an estimator is more accurate than common estimators based on sample means.
arXiv Detail & Related papers (2023-02-06T05:17:22Z) - Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with distributionally robust optimization (DRO) using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z) - Synthetic Design: An Optimization Approach to Experimental Design with Synthetic Controls [5.3063411515511065]
We investigate the optimal design of experimental studies that have pre-treatment outcome data available.
The average treatment effect is estimated as the difference between the weighted average outcomes of the treated and control units.
We propose several methods for choosing the set of treated units in conjunction with the weights.
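The estimator described above is a simple weighted contrast; a hedged sketch follows, where the weights themselves are assumed given (in the paper they come from an optimization problem not reproduced here), and the function name is an assumption.

```python
# Sketch of the estimator described above: the average treatment effect as
# the difference between weighted average outcomes of treated and control
# units. The weights are taken as given (in the paper they are optimized).
import numpy as np

def weighted_ate(y_treated, w_treated, y_control, w_control):
    # Weights are assumed nonnegative and to sum to one within each group.
    return float(np.dot(w_treated, y_treated) - np.dot(w_control, y_control))
```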
arXiv Detail & Related papers (2021-12-01T05:05:26Z) - Predictive machine learning for prescriptive applications: a coupled training-validating approach [77.34726150561087]
We propose a new method for training predictive machine learning models for prescriptive applications.
This approach is based on tweaking the validation step in the standard training-validating-testing scheme.
Several experiments with synthetic data demonstrate promising results in reducing the prescription costs in both deterministic and real models.
arXiv Detail & Related papers (2021-10-22T15:03:20Z) - AdaPT-GMM: Powerful and robust covariate-assisted multiple testing [0.7614628596146599]
We propose a new empirical Bayes method for covariate-assisted multiple testing with false discovery rate (FDR) control.
Our method refines the adaptive p-value thresholding (AdaPT) procedure by generalizing its masking scheme.
We show in extensive simulations and real data examples that our new method, which we call AdaPT-GMM, consistently delivers high power.
arXiv Detail & Related papers (2021-06-30T05:06:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.