Federated Learning for Estimating Heterogeneous Treatment Effects
- URL: http://arxiv.org/abs/2402.17705v2
- Date: Mon, 24 Jun 2024 04:21:33 GMT
- Title: Federated Learning for Estimating Heterogeneous Treatment Effects
- Authors: Disha Makhija, Joydeep Ghosh, Yejin Kim
- Abstract summary: Current machine learning approaches for estimating heterogeneous treatment effects (HTE) require access to substantial amounts of data per treatment.
We propose a novel framework for collaborative learning of HTE estimators across institutions via Federated Learning.
- Score: 7.967701699385625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning methods for estimating heterogeneous treatment effects (HTE) facilitate large-scale personalized decision-making across various domains such as healthcare, policy making, education, and more. Current machine learning approaches for HTE require access to substantial amounts of data per treatment, and the high costs associated with interventions make centrally collecting so much data for each intervention a formidable challenge. To overcome this obstacle, in this work, we propose a novel framework for collaborative learning of HTE estimators across institutions via Federated Learning. We show that even under a diversity of interventions and subject populations across clients, one can jointly learn a common feature representation, while concurrently and privately learning the specific predictive functions for outcomes under distinct interventions across institutions. Our framework and the associated algorithm are based on this insight, and leverage tabular transformers to map multiple input data to feature representations which are then used for outcome prediction via multi-task learning. We also propose a novel way of federated training of personalised transformers that can work with heterogeneous input feature spaces. Experimental results on real-world clinical trial data demonstrate the effectiveness of our method.
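The abstract sketches an architecture with a jointly learned feature representation and treatment-specific outcome heads trained via multi-task learning, where only the shared part is federated. Below is a minimal, hedged sketch of that idea; a plain MLP stands in for the paper's tabular transformer, FedAvg stands in for its aggregation scheme, and every class and function name is illustrative rather than taken from the authors' code.

```python
# Hedged sketch: a shared encoder (MLP stand-in for the tabular transformer) is
# federated across institutions, while one outcome head per treatment arm is
# trained locally via multi-task learning. Names and details are illustrative.
import copy
import torch
import torch.nn as nn

class HTEModel(nn.Module):
    def __init__(self, in_dim, rep_dim, n_treatments):
        super().__init__()
        # Shared representation to be aggregated across clients.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, rep_dim), nn.ReLU(),
        )
        # One private outcome head per treatment (multi-task heads).
        self.heads = nn.ModuleList(nn.Linear(rep_dim, 1) for _ in range(n_treatments))

    def forward(self, x, t):
        z = self.encoder(x)
        # Predict the outcome under the treatment each subject actually received.
        preds = [self.heads[ti](z[i]) for i, ti in enumerate(t.tolist())]
        return torch.stack(preds).squeeze(-1)

def local_update(model, x, y, t, epochs=1, lr=1e-3):
    # One client's local training pass on its private (x, y, t) data.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(x, t), y).backward()
        opt.step()
    return model

def aggregate_encoders(global_model, client_models):
    # FedAvg on the shared encoder only; treatment heads never leave a client.
    avg = copy.deepcopy(global_model.encoder.state_dict())
    for k in avg:
        avg[k] = torch.stack([m.encoder.state_dict()[k] for m in client_models]).mean(0)
    global_model.encoder.load_state_dict(avg)
    return global_model

def estimate_cate(model, x, treated=1, control=0):
    # Treatment effect estimate: difference of heads on the shared representation.
    with torch.no_grad():
        z = model.encoder(x)
        return (model.heads[treated](z) - model.heads[control](z)).squeeze(-1)
```

Keeping the heads local is what lets institutions with different intervention sets collaborate without exchanging treatment-specific parameters; the paper's actual aggregation scheme, transformer architecture, and handling of heterogeneous feature spaces are more involved than this simplification.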
Related papers
- Task-Agnostic Federated Learning [4.041327615026293]
This study addresses the task-agnostic and generalization problem on unseen tasks by adapting a self-supervised FL framework.
Utilizing a Vision Transformer (ViT) as the consensus feature encoder for self-supervised pre-training, with no initial labels required, the framework enables effective representation learning across diverse datasets and tasks.
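As a rough illustration of label-free federated pre-training of a consensus encoder, here is a hedged sketch; a tiny masked autoencoder and MSE reconstruction loss stand in for the ViT and for whatever self-supervised objective the paper actually adapts.

```python
# Hedged sketch: label-free federated pre-training of a consensus encoder.
import copy
import torch
import torch.nn as nn

def ssl_loss(encoder, decoder, x, mask_ratio=0.5):
    mask = (torch.rand_like(x) > mask_ratio).float()
    recon = decoder(encoder(x * mask))        # reconstruct from a masked view
    return nn.functional.mse_loss(recon, x)   # no labels required

def federated_ssl_round(global_enc, global_dec, client_batches, lr=1e-3):
    client_states = []
    for x in client_batches:                  # one unlabeled batch per client
        enc, dec = copy.deepcopy(global_enc), copy.deepcopy(global_dec)
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
        opt.zero_grad()
        ssl_loss(enc, dec, x).backward()
        opt.step()
        client_states.append(enc.state_dict())
    # Only the encoder is aggregated into the consensus representation;
    # decoders (or projection heads) could stay local or be averaged similarly.
    avg = {k: torch.stack([s[k] for s in client_states]).mean(0) for k in client_states[0]}
    global_enc.load_state_dict(avg)
    return global_enc
```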
arXiv Detail & Related papers (2024-06-25T02:53:37Z)
- Leveraging Federated Learning for Automatic Detection of Clopidogrel Treatment Failures [0.8132630541462695]
In this study, we leverage federated learning strategies to address clopidogrel treatment failure detection.
We partitioned the data based on geographic centers and evaluated the performance of federated learning.
Our findings underscore the potential of federated learning in addressing clopidogrel treatment failure detection.
arXiv Detail & Related papers (2024-03-05T23:31:07Z)
- Multi-Task Model Personalization for Federated Supervised SVM in Heterogeneous Networks [10.169907307499916]
Federated systems enable collaborative training on highly heterogeneous data through model personalization.
To accelerate the learning procedure for diverse participants in a multi-task federated setting, more efficient and robust methods need to be developed.
In this paper, we design an efficient iterative distributed method based on the alternating direction method of multipliers (ADMM) for support vector machines (SVMs).
The proposed method utilizes efficient computations and model exchange in a network of heterogeneous nodes and allows personalization of the learning model in the presence of non-i.i.d. data.
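For intuition, here is a hedged sketch of what one consensus-ADMM round for a federated linear SVM could look like; the subgradient local solve, the synchronous round structure, and all variable names are my assumptions, not the paper's algorithm.

```python
# Hedged sketch: one synchronous consensus-ADMM round for a federated linear SVM.
# Each node i holds private data (X_i, y_i in {-1, +1}) and a local model w_i;
# z is the shared consensus model and u_i the scaled dual variable.
import numpy as np

def local_w_update(w, X, y, z, u, rho, C=1.0, steps=50, lr=0.01):
    # Approximately minimize  C * hinge(w; X, y) + (rho / 2) * ||w - z + u||^2
    # with plain subgradient steps (a stand-in for the paper's local solver).
    for _ in range(steps):
        margins = y * (X @ w)
        active = margins < 1                               # margin-violating points
        subgrad = -C * (X[active] * y[active, None]).sum(axis=0) + rho * (w - z + u)
        w = w - lr * subgrad
    return w

def admm_round(ws, us, z, datasets, rho=1.0):
    for i, (X, y) in enumerate(datasets):                  # local (private) solves
        ws[i] = local_w_update(ws[i], X, y, z, us[i], rho)
    z = np.mean([w + u for w, u in zip(ws, us)], axis=0)   # consensus update
    for i in range(len(ws)):
        us[i] = us[i] + ws[i] - z                          # dual updates
    return ws, us, z
```

Under non-i.i.d. data, a personalized variant would deploy each node's local w_i rather than the consensus z; the paper's actual formulation and update rules may differ from this simplification.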
arXiv Detail & Related papers (2023-03-17T21:36:01Z)
- Transfer Learning on Heterogeneous Feature Spaces for Treatment Effects Estimation [103.55894890759376]
This paper introduces several building blocks that use representation learning to handle the heterogeneous feature spaces.
We show how these building blocks can be used to recover transfer learning equivalents of the standard CATE learners.
arXiv Detail & Related papers (2022-10-08T16:41:02Z)
- Decentralized Distributed Learning with Privacy-Preserving Data Synthesis [9.276097219140073]
In the medical field, multi-center collaborations are often sought to yield more generalizable findings by leveraging the heterogeneity of patient and clinical data.
Recent privacy regulations hinder the sharing of data and, consequently, the development of machine learning-based solutions that support diagnosis and prognosis.
We present a decentralized distributed method that integrates features from local nodes, providing models able to generalize across multiple datasets while maintaining privacy.
arXiv Detail & Related papers (2022-06-20T23:49:38Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two key hurdles in this setting: straggling devices and statistical heterogeneity across clients.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
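The described split into a global representation and user-specific parameters can be written as a bilevel-style objective; the notation below is mine and only approximates the paper's formulation.

```latex
% Hedged rendering (my notation): a common representation \phi shared by all
% clients, with a personal head h_i fitted locally by each client i.
\min_{\phi}\; \frac{1}{n}\sum_{i=1}^{n}\,\min_{h_i}\;
  \mathbb{E}_{(x,y)\sim\mathcal{D}_i}\big[\ell\big(h_i(\phi(x)),\,y\big)\big]
```

Each client alternates between fitting its small head h_i locally and contributing to the shared representation \phi, which is the mechanism that yields a personalized solution per client.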
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Federated Cycling (FedCy): Semi-supervised Federated Learning of Surgical Phases [57.90226879210227]
FedCy is a federated semi-supervised learning (FSSL) method that combines FL and self-supervised learning to exploit a decentralized dataset of both labeled and unlabeled videos.
We demonstrate significant performance gains over state-of-the-art FSSL methods on the task of automatic recognition of surgical phases.
arXiv Detail & Related papers (2022-03-14T17:44:53Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions so that they learn in their task-specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on each task.
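One natural reading of coupling the parameters so that they stay close to each other is a proximity-regularized multi-task objective; the specific form below is an assumption on my part, not a quote from the paper.

```latex
% Hedged rendering (my notation): each task t trains on its own domain while a
% proximity penalty keeps the task parameters \theta_t close to a shared anchor.
\min_{\theta_1,\dots,\theta_T,\;\bar{\theta}}\;
  \sum_{t=1}^{T} L_t(\theta_t)
  \;+\; \frac{\lambda}{2}\sum_{t=1}^{T}\lVert \theta_t - \bar{\theta}\rVert^{2}
```

Here L_t is task t's empirical risk on its own data and \lambda controls how strongly the tasks cross-fertilize; the paper's exact coupling may take a different form.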
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
As a practical extension, we further extend the base model to allow overlapping features and to differentiate the hard tasks.
arXiv Detail & Related papers (2020-04-29T02:32:04Z)
- Generalization Bounds and Representation Learning for Estimation of Potential Outcomes and Causal Effects [61.03579766573421]
We study estimation of individual-level causal effects, such as a single patient's response to alternative medication.
We devise representation learning algorithms that minimize our bound, by regularizing the representation's induced treatment group distance.
We extend these algorithms to simultaneously learn a weighted representation to further reduce treatment group distances.
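Regularizing the representation's induced treatment group distance fits an objective of the following shape; this rendering and its symbols are mine and only approximate the paper's bound-derived algorithm.

```latex
% Hedged rendering (my notation): factual prediction loss plus an IPM penalty on
% the distance between treated and control distributions in representation space.
\min_{\Phi,\,h_0,\,h_1}\;
  \frac{1}{n}\sum_{i=1}^{n} \ell\big(h_{t_i}(\Phi(x_i)),\,y_i\big)
  \;+\; \alpha\,\mathrm{IPM}\Big(\{\Phi(x_i)\}_{t_i=0},\;\{\Phi(x_i)\}_{t_i=1}\Big)
```

In the weighted extension, per-sample weights are learned jointly with \Phi to further shrink the treatment group distance, and \alpha trades off predictive fit against balance.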
arXiv Detail & Related papers (2020-01-21T10:16:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.