G-Transformer: Counterfactual Outcome Prediction under Dynamic and Time-varying Treatment Regimes
- URL: http://arxiv.org/abs/2406.05504v4
- Date: Thu, 26 Sep 2024 00:00:34 GMT
- Title: G-Transformer: Counterfactual Outcome Prediction under Dynamic and Time-varying Treatment Regimes
- Authors: Hong Xiong, Feng Wu, Leon Deng, Megan Su, Li-wei H. Lehman
- Abstract summary: We present G-Transformer for counterfactual outcome prediction under dynamic and time-varying treatment strategies.
Our approach leverages a Transformer architecture to capture complex, long-range dependencies in time-varying covariates.
G-Transformer outperforms both classical and state-of-the-art counterfactual prediction models in these settings.
- Score: 29.250837221920925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the context of medical decision making, counterfactual prediction enables clinicians to predict treatment outcomes of interest under alternative courses of therapeutic action given observed patient history. In this work, we present G-Transformer for counterfactual outcome prediction under dynamic and time-varying treatment strategies. Our approach leverages a Transformer architecture to capture complex, long-range dependencies in time-varying covariates while enabling g-computation, a causal inference method for estimating the effects of dynamic treatment regimes. Specifically, we use a Transformer-based encoder architecture to estimate the conditional distribution of relevant covariates given covariate and treatment history at each time point, and then produce Monte Carlo estimates of counterfactual outcomes by simulating patient trajectories forward under the treatment strategies of interest. We evaluate G-Transformer extensively using two simulated longitudinal datasets from mechanistic models and a real-world sepsis ICU dataset from MIMIC-IV. G-Transformer outperforms both classical and state-of-the-art counterfactual prediction models in these settings. To the best of our knowledge, this is the first Transformer-based architecture that supports g-computation for counterfactual outcome prediction under dynamic and time-varying treatment strategies.
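To make the g-computation procedure described in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' implementation): a causal Transformer encoder models the next covariate vector given covariate and treatment history, and counterfactual outcomes are estimated by Monte Carlo simulation of forward trajectories under a treatment policy of interest. The names (`CovariateModel`, `g_computation`, `policy`), the Gaussian noise model, and the choice of the first covariate as the outcome are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of Transformer-based g-computation (illustrative only).
import torch
import torch.nn as nn

class CovariateModel(nn.Module):
    """Predicts the next covariate vector from covariate/treatment history."""
    def __init__(self, cov_dim, treat_dim, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(cov_dim + treat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, cov_dim)  # assumed: mean of a Gaussian

    def forward(self, covs, treats):
        # covs: (batch, T, cov_dim), treats: (batch, T, treat_dim)
        x = self.embed(torch.cat([covs, treats], dim=-1))
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.encoder(x, mask=mask)   # causal attention over the history
        return self.head(h)              # prediction for each next time step

@torch.no_grad()
def g_computation(model, cov_hist, treat_hist, policy, horizon, n_mc=100, noise_std=0.1):
    """Monte Carlo estimate of the counterfactual outcome under `policy`.

    `policy(covs, t)` returns the treatment assigned at time t given the
    simulated covariates so far (a dynamic, time-varying regime).
    """
    outcomes = []
    for _ in range(n_mc):
        covs, treats = cov_hist.clone(), treat_hist.clone()
        for _t in range(horizon):
            mean_next = model(covs, treats)[:, -1]              # predicted next covariates
            next_cov = mean_next + noise_std * torch.randn_like(mean_next)
            next_treat = policy(covs, covs.size(1))             # treatment chosen by the regime
            covs = torch.cat([covs, next_cov.unsqueeze(1)], dim=1)
            treats = torch.cat([treats, next_treat.unsqueeze(1)], dim=1)
        outcomes.append(covs[:, -1, 0])   # assumed: outcome is the first covariate
    return torch.stack(outcomes).mean(0)  # average over simulated trajectories
```

In practice the conditional model would first be fit to observed trajectories (e.g., by teacher forcing); the sketch shows only the forward-simulation step of g-computation.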
Related papers
- Interpreting Affine Recurrence Learning in GPT-style Transformers [54.01174470722201]
In-context learning allows GPT-style transformers to generalize during inference without modifying their weights.
This paper focuses specifically on their ability to learn and predict affine recurrences as an ICL task.
We analyze the model's internal operations using both empirical and theoretical approaches.
arXiv Detail & Related papers (2024-10-22T21:30:01Z)
- G-Transformer for Conditional Average Potential Outcome Estimation over Time [25.068617118126824]
The G-transformer (GT) is a novel neural end-to-end model that adjusts for time-varying confounders.
Our GT is the first neural model to perform regression-based iterative G-computation for conditional average potential outcomes (CAPOs) in the time-varying setting.
arXiv Detail & Related papers (2024-05-31T16:52:51Z)
- Longitudinal Targeted Minimum Loss-based Estimation with Temporal-Difference Heterogeneous Transformer [7.451436112917229]
We propose a novel approach to estimate the counterfactual mean of outcome under dynamic treatment policies in longitudinal problem settings.
Our approach utilizes a transformer architecture with heterogeneous type embedding trained using temporal-difference learning.
Our method also facilitates statistical inference by providing 95% confidence intervals grounded in statistical theory.
arXiv Detail & Related papers (2024-04-05T20:56:15Z)
- Causal Dynamic Variational Autoencoder for Counterfactual Regression in Longitudinal Data [3.662229789022107]
Estimating treatment effects over time is relevant in many real-world applications, such as precision medicine, epidemiology, economics, and marketing.
We take a different perspective by assuming unobserved risk factors, i.e., adjustment variables that affect only the sequence of outcomes.
We address the challenges posed by time-varying effects and unobserved adjustment variables.
arXiv Detail & Related papers (2023-10-16T16:32:35Z)
- Prediction of Post-Operative Renal and Pulmonary Complications Using Transformers [69.81176740997175]
We evaluate the performance of transformer-based models in predicting postoperative acute renal failure, pulmonary complications, and postoperative in-hospital mortality.
Our results demonstrate that transformer-based models can achieve superior performance in predicting postoperative complications and outperform traditional machine learning models.
arXiv Detail & Related papers (2023-06-01T14:08:05Z)
- Causal Transformer for Estimating Counterfactual Outcomes [18.640006398066188]
Estimating counterfactual outcomes over time from observational data is relevant for many applications.
We develop a novel Causal Transformer for estimating counterfactual outcomes over time.
Our model is specifically designed to capture complex, long-range dependencies among time-varying confounders.
arXiv Detail & Related papers (2022-04-14T22:40:09Z)
- Disentangled Counterfactual Recurrent Networks for Treatment Effect Inference over Time [71.30985926640659]
We introduce the Disentangled Counterfactual Recurrent Network (DCRN), a sequence-to-sequence architecture that estimates treatment outcomes over time.
With an architecture designed around the causal structure of treatment influence over time, DCRN advances forecast accuracy and disease understanding.
We demonstrate that DCRN outperforms current state-of-the-art methods in forecasting treatment responses, on both real and simulated data.
arXiv Detail & Related papers (2021-12-07T16:40:28Z)
- Transformers for prompt-level EMA non-response prediction [62.41658786277712]
Ecological Momentary Assessments (EMAs) are an important psychological data source for measuring cognitive states, affect, behavior, and environmental factors.
Non-response, in which participants fail to respond to EMA prompts, is an endemic problem.
The ability to accurately predict non-response could be utilized to improve EMA delivery and develop compliance interventions.
arXiv Detail & Related papers (2021-11-01T18:38:47Z)
- STELAR: Spatio-temporal Tensor Factorization with Latent Epidemiological Regularization [76.57716281104938]
We develop a tensor method to predict the evolution of epidemic trends for many regions simultaneously.
STELAR enables long-term prediction by incorporating latent temporal regularization through a system of discrete-time difference equations.
We conduct experiments using both county- and state-level COVID-19 data and show that our model can identify interesting latent patterns of the epidemic.
arXiv Detail & Related papers (2020-12-08T21:21:47Z)
- DeepRite: Deep Recurrent Inverse TreatmEnt Weighting for Adjusting Time-varying Confounding in Modern Longitudinal Observational Data [68.29870617697532]
We propose Deep Recurrent Inverse TreatmEnt weighting (DeepRite) for time-varying confounding in longitudinal data.
DeepRite is shown to recover the ground truth from synthetic data and to estimate unbiased treatment effects from real data.
arXiv Detail & Related papers (2020-10-28T15:05:08Z)
- G-Net: A Deep Learning Approach to G-computation for Counterfactual Outcome Prediction Under Dynamic Treatment Regimes [11.361895456942374]
G-computation is a method for estimating expected counterfactual outcomes under dynamic, time-varying treatment strategies; an illustrative example of such a regime is sketched after this list.
This paper introduces G-Net, a novel sequential deep learning framework for G-computation.
We evaluate alternative G-Net implementations using realistically complex temporal simulated data obtained from CVSim.
arXiv Detail & Related papers (2020-03-23T21:08:51Z)
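As a concrete illustration of the dynamic, time-varying treatment regimes targeted by G-Net and G-Transformer, a regime can be written as a policy function that chooses treatment from the covariates simulated so far. The function below is hypothetical and compatible with the `g_computation` sketch given after the abstract; the feature index and threshold are illustrative, not values from any of the papers above.

```python
import torch

def threshold_policy(covs, t, feature=0, threshold=0.5):
    """Hypothetical dynamic regime: treat at time t whenever the chosen
    covariate currently exceeds a threshold. Because the decision depends on
    the evolving (simulated) history, the regime is dynamic and time-varying."""
    current = covs[:, -1, feature]                        # latest value of the covariate
    treat = (current > threshold).float().unsqueeze(-1)   # (batch, 1) binary treatment
    return treat
```

Passing `threshold_policy` as the `policy` argument of the earlier `g_computation` sketch yields Monte Carlo counterfactual outcome estimates under this regime.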
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.