Estimating Causal Effects using a Multi-task Deep Ensemble
- URL: http://arxiv.org/abs/2301.11351v3
- Date: Sat, 27 May 2023 11:31:52 GMT
- Title: Estimating Causal Effects using a Multi-task Deep Ensemble
- Authors: Ziyang Jiang, Zhuoran Hou, Yiling Liu, Yiman Ren, Keyu Li, David
Carlson
- Abstract summary: Causal Multi-task Deep Ensemble (CMDE) is a novel framework that learns both shared and group-specific information from the study population.
We evaluate our method across various types of datasets and tasks and find that CMDE outperforms state-of-the-art methods on a majority of these tasks.
- Score: 4.268861137988059
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A number of methods have been proposed for causal effect estimation, yet few
have demonstrated efficacy in handling data with complex structures, such as
images. To fill this gap, we propose Causal Multi-task Deep Ensemble (CMDE), a
novel framework that learns both shared and group-specific information from the
study population. We provide proofs demonstrating equivalency of CDME to a
multi-task Gaussian process (GP) with a coregionalization kernel a priori.
Compared to multi-task GP, CMDE efficiently handles high-dimensional and
multi-modal covariates and provides pointwise uncertainty estimates of causal
effects. We evaluate our method across various types of datasets and tasks and
find that CMDE outperforms state-of-the-art methods on a majority of these
tasks.
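For intuition, below is a minimal sketch of the kind of estimator the abstract describes: a deep ensemble whose members share a trunk and carry group-specific (treated/control) heads, with pointwise uncertainty read off as the spread across members. The architecture, names, and dimensions are illustrative assumptions, not the authors' CMDE implementation.

```python
# Illustrative sketch only; not the authors' CMDE implementation.
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    """Shared trunk with group-specific heads for treated / control outcomes."""
    def __init__(self, d_in, d_hidden=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.head_treated = nn.Linear(d_hidden, 1)   # mu_1(x)
        self.head_control = nn.Linear(d_hidden, 1)   # mu_0(x)

    def forward(self, x):
        h = self.shared(x)
        return self.head_treated(h), self.head_control(h)

def cate_with_uncertainty(ensemble, x):
    """Pointwise CATE mean and spread across independently trained members."""
    taus = []
    for m in ensemble:
        mu_treated, mu_control = m(x)
        taus.append(mu_treated - mu_control)
    taus = torch.stack(taus, dim=0)
    return taus.mean(dim=0), taus.std(dim=0)

# Usage: train each member (e.g., different random init / bootstrap), then query.
ensemble = [TwoHeadNet(d_in=10) for _ in range(5)]
tau_mean, tau_std = cate_with_uncertainty(ensemble, torch.randn(3, 10))
```

In spirit, the spread across ensemble members stands in for the posterior uncertainty of the equivalent multi-task GP referenced in the abstract.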
Related papers
- Provable Benefits of Multi-task RL under Non-Markovian Decision Making Processes [56.714690083118406]
In multi-task reinforcement learning (RL) under Markov decision processes (MDPs), the presence of shared latent structures has been shown to yield significant gains in sample efficiency compared to single-task RL.
We investigate whether such a benefit can extend to more general sequential decision-making problems, such as partially observable MDPs (POMDPs) and more general predictive state representations (PSRs).
We propose a provably efficient algorithm, UMT-PSR, for finding near-optimal policies for all PSRs, and demonstrate that the advantage of multi-task learning manifests if the joint model class of PSRs…
arXiv Detail & Related papers (2023-10-20T14:50:28Z)
- Heterogeneous Multi-Task Gaussian Cox Processes [61.67344039414193]
We present a novel extension of multi-task Gaussian Cox processes for modeling heterogeneous correlated tasks jointly.
A multi-output GP (MOGP) prior over the parameters of the dedicated likelihoods for classification, regression and point process tasks can facilitate sharing of information between heterogeneous tasks.
We derive a mean-field approximation to realize closed-form iterative updates for estimating model parameters.
arXiv Detail & Related papers (2023-08-29T15:01:01Z)
- Quantifying & Modeling Multimodal Interactions: An Information Decomposition Framework [89.8609061423685]
We propose an information-theoretic approach to quantify the degree of redundancy, uniqueness, and synergy relating input modalities with an output task.
To validate the estimation of PID (partial information decomposition), we conduct extensive experiments on synthetic datasets where the PID is known and on large-scale multimodal benchmarks.
We demonstrate their usefulness in (1) quantifying interactions within multimodal datasets, (2) quantifying interactions captured by multimodal models, (3) principled approaches for model selection, and (4) three real-world case studies.
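The paper's PID estimators are not reproduced here; as a hedged point of contrast, the toy computation below compares per-modality mutual information with joint mutual information on an XOR example, where neither input alone is informative but the pair is (pure synergy). All data and names are made up for illustration.

```python
# Crude information-theoretic baseline for intuition only; PID goes further by
# separating redundancy, uniqueness, and synergy, which this proxy cannot do.
import numpy as np
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in mutual information estimate (in bits) for discrete samples."""
    n = len(xs)
    p_xy, p_x, p_y = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * np.log2((c / n) / ((p_x[x] / n) * (p_y[y] / n)))
               for (x, y), c in p_xy.items())

# XOR: each modality alone carries ~0 bits about y; jointly they carry ~1 bit.
x1 = np.random.randint(0, 2, 10000)
x2 = np.random.randint(0, 2, 10000)
y = x1 ^ x2
print(mutual_information(x1.tolist(), y.tolist()))        # ~0
print(mutual_information(x2.tolist(), y.tolist()))        # ~0
print(mutual_information(list(zip(x1, x2)), y.tolist()))  # ~1 bit (synergistic)
```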
arXiv Detail & Related papers (2023-02-23T18:59:05Z)
- Scalable Batch Acquisition for Deep Bayesian Active Learning [70.68403899432198]
In deep active learning, it is important to choose multiple examples to label at each step.
Existing solutions to this problem, such as BatchBALD, have significant limitations in selecting a large number of examples.
We present the Large BatchBALD algorithm, which aims to achieve comparable quality while being more computationally efficient.
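This is not the Large BatchBALD algorithm itself: the sketch below shows the standard BALD acquisition score (mutual information between predictions and model parameters, estimated from Monte Carlo samples) with naive top-k batch selection, i.e., the baseline such batch methods improve upon. Array shapes and names are assumptions.

```python
# Standard BALD score with naive top-k selection; illustrative only.
import numpy as np

def bald_scores(probs, eps=1e-12):
    """probs: (n_mc, n_pool, n_classes) class probabilities from MC samples."""
    mean_p = probs.mean(axis=0)
    entropy_of_mean = -(mean_p * np.log(mean_p + eps)).sum(-1)      # predictive entropy
    mean_entropy = -(probs * np.log(probs + eps)).sum(-1).mean(0)   # expected entropy
    return entropy_of_mean - mean_entropy                           # mutual information

def select_batch(probs, batch_size):
    return np.argsort(-bald_scores(probs))[:batch_size]

probs = np.random.dirichlet(np.ones(10), size=(20, 1000))  # fake (n_mc, n_pool, n_classes)
query_idx = select_batch(probs, batch_size=100)
```

Top-k BALD ignores redundancy within the selected batch, which is exactly the issue BatchBALD-style methods address at extra computational cost.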
arXiv Detail & Related papers (2023-01-13T11:45:17Z)
- Counterfactual Learning with Multioutput Deep Kernels [0.0]
In this paper, we address the challenge of performing counterfactual inference with observational data.
We present a general class of counterfactual multi-task deep kernel models that estimate causal effects and learn policies proficiently.
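For reference, a minimal sketch of the "deep kernel" idea mentioned here: a neural feature map composed with a base kernel. The paper's multioutput and counterfactual construction is not reproduced, and the layer sizes below are arbitrary.

```python
# Deep kernel sketch: k(x, x') = RBF(phi(x), phi(x')); illustrative only.
import torch
import torch.nn as nn

phi = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 8))  # learned feature map

def deep_rbf_kernel(x1, x2, lengthscale=1.0):
    z1, z2 = phi(x1), phi(x2)
    d2 = torch.cdist(z1, z2) ** 2
    return torch.exp(-0.5 * d2 / lengthscale ** 2)

K = deep_rbf_kernel(torch.randn(5, 10), torch.randn(7, 10))  # (5, 7) Gram matrix
```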
arXiv Detail & Related papers (2022-11-20T23:28:41Z)
- Scalable Multi-Task Gaussian Processes with Neural Embedding of Coregionalization [9.873139480223367]
Multi-task regression exploits task similarity to transfer knowledge across related tasks and improve performance.
The linear model of coregionalization (LMC) is a well-known multi-task GP (MTGP) paradigm that models task dependencies through a linear combination of several independent and diverse GPs.
We develop the neural embedding of coregionalization that transforms the latent GPs into a high-dimensional latent space to induce rich yet diverse behaviors.
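For reference, the LMC covariance mentioned above can be written as K[(x, t), (x', t')] = sum_q B_q[t, t'] * k_q(x, x') with B_q = w_q w_q^T. The sketch below builds that covariance from the standard definition (not the paper's code), with all sizes arbitrary.

```python
# LMC covariance from its textbook definition; illustrative only.
import numpy as np

def rbf(X, lengthscale):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def lmc_covariance(X, W, lengthscales):
    """X: (n, d) inputs; W: (Q, T) mixing weights for Q latent GPs and T tasks."""
    Q, T = W.shape
    n = X.shape[0]
    K = np.zeros((n * T, n * T))
    for q in range(Q):
        B_q = np.outer(W[q], W[q])                  # rank-1 coregionalization matrix
        K += np.kron(B_q, rbf(X, lengthscales[q]))  # task-block covariance structure
    return K

K = lmc_covariance(np.random.randn(50, 3), W=np.random.randn(2, 4), lengthscales=[1.0, 0.3])
```

The neural embedding of coregionalization described in the paper replaces this fixed linear mixing with a learned transformation of the latent GPs; that part is not sketched here.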
arXiv Detail & Related papers (2021-09-20T01:28:14Z)
- Multi-task Causal Learning with Gaussian Processes [17.205106391379026]
This paper studies the problem of learning the correlation structure of a set of intervention functions defined on the directed acyclic graph (DAG) of a causal model.
We propose the first multi-task causal Gaussian process (GP) model, which allows for information sharing across continuous interventions and experiments on different variables.
arXiv Detail & Related papers (2020-09-27T11:33:40Z)
- Learning Robust State Abstractions for Hidden-Parameter Block MDPs [55.31018404591743]
We leverage ideas of common structure from the HiP-MDP setting to enable robust state abstractions inspired by Block MDPs.
We derive instantiations of this new framework for both multi-task reinforcement learning (MTRL) and meta-reinforcement learning (Meta-RL) settings.
arXiv Detail & Related papers (2020-07-14T17:25:27Z)
- Dynamic Value Estimation for Single-Task Multi-Scene Reinforcement Learning [22.889059874754242]
Training deep reinforcement learning agents on environments with multiple levels/scenes/conditions from the same task has become essential for many applications.
We propose a dynamic value estimation (DVE) technique for these multiple-MDP environments, motivated by the clustering effect observed in the value function distribution across different scenes.
arXiv Detail & Related papers (2020-05-25T17:56:08Z)
- Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838]
Multi-task learning (MTL) techniques have shown promising results with respect to performance, computation, and/or memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
arXiv Detail & Related papers (2020-04-28T09:15:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.