Matching in Selective and Balanced Representation Space for Treatment
Effects Estimation
- URL: http://arxiv.org/abs/2009.06828v2
- Date: Sat, 5 Jun 2021 09:12:23 GMT
- Title: Matching in Selective and Balanced Representation Space for Treatment
Effects Estimation
- Authors: Zhixuan Chu, Stephen L. Rathbun, and Sheng Li
- Abstract summary: We propose a feature selection representation matching (FSRM) method based on deep representation learning and matching.
We evaluate the performance of our FSRM method on three datasets, and the results demonstrate superiority over the state-of-the-art methods.
- Score: 10.913802831701082
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Observational data are becoming dramatically more available across many
domains of science and technology, which facilitates the study of causal
inference. However, estimating treatment effects from observational data faces
two major challenges: missing counterfactual outcomes and treatment selection
bias. Matching methods are among the most widely used and fundamental
approaches to estimating treatment effects, but existing matching methods
perform poorly on data with high-dimensional and complex covariates. We propose
a feature selection representation matching (FSRM) method based on deep
representation learning and matching, which maps the original covariate space
into a selective, nonlinear, and balanced representation space, and then
conducts matching in the learned representation space. FSRM adopts deep feature
selection to minimize the influence of irrelevant variables on treatment effect
estimation and incorporates a regularizer based on the Wasserstein distance to
learn balanced representations. We evaluate the performance of our FSRM method
on three datasets, and the results demonstrate its superiority over
state-of-the-art methods.
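As a reading aid, here is a minimal sketch of the pipeline the abstract describes, assuming PyTorch and toy synthetic data; it is not the authors' implementation. A sigmoid gating layer stands in for deep feature selection, an entropy-regularized Sinkhorn iteration approximates the Wasserstein balancing regularizer, and treated units are matched to their nearest control units in the learned representation space. All names (FeatureSelector, RepresentationNet, lambda_wass, lambda_gate), layer sizes, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only (assumed names/hyperparameters), not the authors' FSRM code.
import torch
import torch.nn as nn

class FeatureSelector(nn.Module):
    """Soft elementwise gates; a simple stand-in for deep feature selection."""
    def __init__(self, dim):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        gates = torch.sigmoid(self.logits)  # gates near 0 suppress irrelevant covariates
        return x * gates, gates

class RepresentationNet(nn.Module):
    """Nonlinear map from selected covariates to the representation space."""
    def __init__(self, dim, rep_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ELU(), nn.Linear(64, rep_dim))

    def forward(self, x):
        return self.net(x)

def sinkhorn_distance(a, b, eps=0.1, n_iter=50):
    """Entropy-regularized (Sinkhorn) approximation of the Wasserstein distance
    between two point clouds with uniform weights."""
    cost = torch.cdist(a, b) ** 2
    mu = torch.full((a.size(0),), 1.0 / a.size(0))
    nu = torch.full((b.size(0),), 1.0 / b.size(0))
    u, v = torch.zeros_like(mu), torch.zeros_like(nu)
    for _ in range(n_iter):
        u = eps * (torch.log(mu) - torch.logsumexp((v[None, :] - cost) / eps, dim=1))
        v = eps * (torch.log(nu) - torch.logsumexp((u[:, None] - cost) / eps, dim=0))
    plan = torch.exp((u[:, None] + v[None, :] - cost) / eps)
    return (plan * cost).sum()

# Toy data: covariates x, binary treatment t, outcome y (all synthetic).
torch.manual_seed(0)
n, d, rep_dim = 200, 25, 32
x = torch.randn(n, d)
t = (torch.rand(n) < 0.5).float()
y = x[:, 0] + 2.0 * t + 0.1 * torch.randn(n)

selector, rep = FeatureSelector(d), RepresentationNet(d, rep_dim)
head = nn.Linear(rep_dim + 1, 1)  # outcome head on [representation, treatment]
params = list(selector.parameters()) + list(rep.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
lambda_wass, lambda_gate = 1.0, 1e-3  # assumed regularization weights

for step in range(200):
    xs, gates = selector(x)
    phi = rep(xs)
    y_hat = head(torch.cat([phi, t[:, None]], dim=1)).squeeze(1)
    factual_loss = ((y_hat - y) ** 2).mean()                  # fit observed outcomes
    imbalance = sinkhorn_distance(phi[t == 1], phi[t == 0])   # balance treated vs. control reps
    loss = factual_loss + lambda_wass * imbalance + lambda_gate * gates.abs().sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Matching in the learned representation space: 1-NN control match per treated unit.
with torch.no_grad():
    phi = rep(selector(x)[0])
    nn_idx = torch.cdist(phi[t == 1], phi[t == 0]).argmin(dim=1)
    att = (y[t == 1] - y[t == 0][nn_idx]).mean()
    print("Estimated ATT (toy data):", att.item())
```

On this toy data, where the simulated effect is 2.0, the matched estimate should land near that value; with real observational data, the gate sparsity and Wasserstein weights would need tuning, and the paper's actual feature selection and matching procedures may differ from this simplification.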
Related papers
- Counterfactual Data Augmentation with Contrastive Learning [27.28511396131235]
We introduce a model-agnostic data augmentation method that imputes the counterfactual outcomes for a selected subset of individuals.
We use contrastive learning to learn a representation space and a similarity measure such that individuals that are close under the learned measure have similar potential outcomes.
This property ensures reliable imputation of counterfactual outcomes for the individuals with close neighbors from the alternative treatment group.
arXiv Detail & Related papers (2023-11-07T00:36:51Z)
- Linking data separation, visual separation, and classifier performance using pseudo-labeling by contrastive learning [125.99533416395765]
We argue that the performance of the final classifier depends on the data separation present in the latent space and the visual separation present in the projection.
We demonstrate our results on the classification of five challenging real-world image datasets of human intestinal parasites using only 1% supervised samples.
arXiv Detail & Related papers (2023-02-06T10:01:38Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Moderately-Balanced Representation Learning for Treatment Effects with Orthogonality Information [14.040918087553177]
Estimating the average treatment effect (ATE) from observational data is challenging due to selection bias.
We propose a moderately-balanced representation learning framework.
This framework protects the representation from being over-balanced via multi-task learning.
arXiv Detail & Related papers (2022-09-05T13:20:12Z)
- Causal Inference from Small High-dimensional Datasets [7.1894784995284144]
Causal-Batle is a methodology to estimate treatment effects in small high-dimensional datasets.
We adopt an approach that brings transfer learning techniques into causal inference.
arXiv Detail & Related papers (2022-05-19T02:04:01Z)
- Learning Infomax and Domain-Independent Representations for Causal Effect Inference with Real-World Data [9.601837205635686]
We learn Infomax and Domain-Independent Representations to address these challenges.
We show that our method achieves state-of-the-art performance on causal effect inference.
arXiv Detail & Related papers (2022-02-22T13:35:15Z)
- SurvITE: Learning Heterogeneous Treatment Effects from Time-to-Event Data [83.50281440043241]
We study the problem of inferring heterogeneous treatment effects from time-to-event data.
We propose a novel deep learning method for treatment-specific hazard estimation based on balancing representations.
arXiv Detail & Related papers (2021-10-26T20:13:17Z)
- CETransformer: Casual Effect Estimation via Transformer Based Representation Learning [17.622007687796756]
Data-driven causal effect estimation faces two main challenges, i.e., selection bias and missing counterfactuals.
To address these two issues, most existing approaches reduce selection bias by learning a balanced representation.
We propose the CETransformer model for causal effect estimation via transformer-based representation learning.
arXiv Detail & Related papers (2021-07-19T09:39:57Z)
- Loss Bounds for Approximate Influence-Based Abstraction [81.13024471616417]
Influence-based abstraction aims to gain leverage by modeling local subproblems together with the 'influence' that the rest of the system exerts on them.
This paper investigates the performance of such approaches from a theoretical perspective.
We show that neural networks trained with cross entropy are well suited to learn approximate influence representations.
arXiv Detail & Related papers (2020-11-03T15:33:10Z)
- Almost-Matching-Exactly for Treatment Effect Estimation under Network Interference [73.23326654892963]
We propose a matching method that recovers direct treatment effects from randomized experiments where units are connected in an observed network.
Our method matches units almost exactly on counts of unique subgraphs within their neighborhood graphs (see the sketch after this list).
arXiv Detail & Related papers (2020-03-02T15:21:20Z)
- Generalization Bounds and Representation Learning for Estimation of Potential Outcomes and Causal Effects [61.03579766573421]
We study estimation of individual-level causal effects, such as a single patient's response to alternative medication.
We devise representation learning algorithms that minimize our bound by regularizing the representation's induced treatment group distance.
We extend these algorithms to simultaneously learn a weighted representation to further reduce treatment group distances.
arXiv Detail & Related papers (2020-01-21T10:16:33Z)
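The Almost-Matching-Exactly entry above mentions matching units on counts of unique subgraphs within their neighborhood graphs. The toy sketch below illustrates that general idea only, assuming networkx, a synthetic random graph, and a small hand-picked motif set (neighbor, edge, and triangle counts); the paper's actual subgraph enumeration and matching criteria are not reproduced here.

```python
# Toy illustration (assumed motif set, not the paper's exact algorithm): match units
# on simple subgraph counts computed within their 1-hop neighborhood graphs.
import networkx as nx
import numpy as np

def neighborhood_counts(G, node):
    """Count a few simple motifs in the node's ego-network: neighbors, edges, triangles."""
    ego = nx.ego_graph(G, node, radius=1)
    n_neighbors = ego.number_of_nodes() - 1
    n_edges = ego.number_of_edges()
    n_triangles = sum(nx.triangles(ego).values()) // 3
    return np.array([n_neighbors, n_edges, n_triangles], dtype=float)

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(n=100, p=0.05, seed=0)                 # synthetic network
treat = {v: bool(rng.integers(0, 2)) for v in G.nodes}          # random treatment assignment
y = {v: 1.0 * treat[v] + 0.1 * rng.normal() for v in G.nodes}   # toy outcomes

counts = {v: neighborhood_counts(G, v) for v in G.nodes}
treated = [v for v in G.nodes if treat[v]]
controls = [v for v in G.nodes if not treat[v]]

# For each treated unit, pick the control whose subgraph-count vector is closest
# (exact ties on the counts correspond to "almost exact" matches).
effects = []
for v in treated:
    match = min(controls, key=lambda c: np.linalg.norm(counts[v] - counts[c]))
    effects.append(y[v] - y[match])

print("Estimated direct treatment effect (toy):", float(np.mean(effects)))
```

Matching on count vectors rather than raw adjacency keeps the comparison interpretable: units with identical counts are matched exactly on their summarized local network structure, while small distances give near-exact matches.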
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.