Fine-Tuning Foundation Models with Federated Learning for Privacy Preserving Medical Time Series Forecasting
- URL: http://arxiv.org/abs/2502.09744v1
- Date: Thu, 13 Feb 2025 20:01:15 GMT
- Title: Fine-Tuning Foundation Models with Federated Learning for Privacy Preserving Medical Time Series Forecasting
- Authors: Mahad Ali, Curtis Lisle, Patrick W. Moore, Tammer Barkouki, Brian J. Kirkwood, Laura J. Brattain
- Abstract summary: Federated Learning (FL) provides a decentralized machine learning approach, where multiple devices or servers collaboratively train a model without sharing their raw data.
In this paper, we fine-tuned time series FMs with Electrocardiogram (ECG) and Impedance Cardiography (ICG) data using different FL techniques.
Our empirical results demonstrated that while FL can be effective for fine-tuning FMs on time series forecasting tasks, its benefits depend on the data distribution across clients.
- Abstract: Federated Learning (FL) provides a decentralized machine learning approach, where multiple devices or servers collaboratively train a model without sharing their raw data, thus enabling data privacy. This approach has gained significant interest in academia and industry due to its privacy-preserving properties, which are particularly valuable in the medical domain where data availability is often protected under strict regulations. A relatively unexplored area is the use of FL to fine-tune Foundation Models (FMs) for time series forecasting, potentially enhancing model efficacy by overcoming data limitations while maintaining privacy. In this paper, we fine-tuned time series FMs with Electrocardiogram (ECG) and Impedance Cardiography (ICG) data using different FL techniques. We then examined various scenarios and discussed the challenges FL faces under different data heterogeneity configurations. Our empirical results demonstrated that while FL can be effective for fine-tuning FMs on time series forecasting tasks, its benefits depend on the data distribution across clients. We highlighted the trade-offs in applying FL to FM fine-tuning.
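The server-side aggregation underlying most FL fine-tuning can be sketched as weighted averaging of client weights (FedAvg-style). This is a minimal illustration with toy scalar "weights", not the paper's actual model or aggregation configuration:

```python
# FedAvg-style aggregation sketch (hypothetical setup): each client
# fine-tunes a copy of the model locally, then the server averages the
# resulting weight dicts, weighted by each client's local dataset size.

def fed_avg(client_weights, client_sizes):
    """Average per-client weight dicts, weighted by local dataset size."""
    total = sum(client_sizes)
    averaged = {}
    for name in client_weights[0]:
        averaged[name] = sum(
            w[name] * (n / total)
            for w, n in zip(client_weights, client_sizes)
        )
    return averaged

# Toy example: two clients with one scalar parameter each.
clients = [{"w": 1.0}, {"w": 3.0}]
sizes = [10, 30]
print(fed_avg(clients, sizes))  # {'w': 2.5}
```

The client with more data (30 vs. 10 samples) pulls the average toward its local weights, which is exactly why benefits depend on the data distribution across clients.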
Related papers
- Synergizing Foundation Models and Federated Learning: A Survey [23.416321895575507]
This paper discusses the potentials and challenges of synergizing Federated Learning (FL) and Foundation Models (FM)
FL is a collaborative learning paradigm that breaks the barrier of data availability from different participants.
It provides a promising solution to customize and adapt FMs to a wide range of domain-specific tasks using distributed datasets whilst preserving privacy.
arXiv Detail & Related papers (2024-06-18T17:58:09Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
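The aggregate-then-adapt loop described above can be sketched with scalar parameters and a quadratic local loss (illustrative only; FedAF itself replaces this aggregation step entirely):

```python
# One round of aggregate-then-adapt: the server broadcasts a global weight,
# each client adapts it toward its own local optimum, and the server
# averages the adapted weights for the next round.

def local_update(global_w, target, lr=0.1, steps=5):
    """Client adapts the broadcast weight via gradient steps on (w - target)^2."""
    w = global_w
    for _ in range(steps):
        grad = 2 * (w - target)   # d/dw of the local quadratic loss
        w -= lr * grad
    return w

def round_of_fl(global_w, client_targets):
    locals_ = [local_update(global_w, t) for t in client_targets]
    return sum(locals_) / len(locals_)   # plain averaging by the server

w = 0.0
for _ in range(20):
    w = round_of_fl(w, client_targets=[1.0, 3.0])
print(round(w, 3))  # converges toward 2.0, the mean of the client optima
```

With heterogeneous clients (local optima 1.0 and 3.0), the averaged model settles at a compromise that fits neither client exactly, which is the drift problem aggregation-free methods aim to avoid.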
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
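The parameter-efficient idea behind LoRA-style PEFT mentioned above can be sketched as follows (names and shapes are illustrative, not from the survey): the frozen pretrained weight W is augmented with a low-rank update B @ A, and only A and B are trained and exchanged between clients, shrinking communication cost.

```python
import numpy as np

d, r = 8, 2                       # full dimension vs. low rank
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))       # frozen pretrained weight
A = rng.normal(size=(r, d))       # trainable low-rank factors
B = np.zeros((d, r))              # zero init, so the delta starts at 0

def forward(x):
    return x @ (W + B @ A).T      # effective weight = W + low-rank delta

full_params = W.size
peft_params = A.size + B.size
print(peft_params / full_params)  # 0.5 at this toy size; tiny when d >> r
```

Because B starts at zero, the model initially matches the pretrained one exactly; in an FL round, clients would send only A and B rather than all of W.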
arXiv Detail & Related papers (2024-01-09T10:22:23Z) - Federated Learning with Diffusion Models for Privacy-Sensitive Vision Tasks [27.780010580879377]
Diffusion models have great potential for vision-related tasks, particularly for image generation.
However, their training is typically conducted in a centralized manner, relying on data collected from publicly available sources.
This approach may not be feasible or practical in many domains, such as the medical field, which involves privacy concerns over data collection.
arXiv Detail & Related papers (2023-11-28T06:08:16Z) - Federated Learning with Privacy-Preserving Ensemble Attention Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
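A minimal sketch of the one-way distillation mechanism described above (function names are illustrative): local models produce temperature-softened predictions on unlabeled public inputs, and only those ensembled soft labels, never the raw data or weights, reach the server.

```python
import numpy as np

def softmax(z, t=1.0):
    """Numerically stable softmax with temperature t."""
    z = np.asarray(z, dtype=float) / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_soft_labels(client_logits, temperature=2.0):
    """Average the clients' softened predictions on shared public data."""
    probs = [softmax(logits, temperature) for logits in client_logits]
    return np.mean(probs, axis=0)

# Two clients' logits on one public sample:
logits = [np.array([[2.0, 0.0]]), np.array([[0.0, 2.0]])]
print(ensemble_soft_labels(logits))  # symmetric clients -> [[0.5, 0.5]]
```

A central student model would then be trained against these ensembled soft labels, so gradients never flow back to any client's private data.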
arXiv Detail & Related papers (2022-10-16T06:44:46Z) - FedDAR: Federated Domain-Aware Representation Learning [14.174833360938806]
Cross-silo Federated learning (FL) has become a promising tool in machine learning applications for healthcare.
We propose a novel method, FedDAR, which learns a domain shared representation and domain-wise personalized prediction heads.
arXiv Detail & Related papers (2022-09-08T19:18:59Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Differentially private federated deep learning for multi-site medical image segmentation [56.30543374146002]
Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer.
Recent initiatives have demonstrated that segmentation models trained with FL can achieve performance similar to locally trained models.
However, FL is not a fully privacy-preserving technique and privacy-centred attacks can disclose confidential patient data.
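The standard defense against such attacks is the Gaussian mechanism used in differentially private FL, sketched here with illustrative parameters: each client clips its update's L2 norm and adds calibrated Gaussian noise before sending it to the server.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip an update to clip_norm in L2, then add Gaussian noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

u = np.array([3.0, 4.0])          # L2 norm 5, so it gets scaled down to norm 1
private_u = privatize_update(u)
print(private_u.shape)
```

Clipping bounds any single client's influence on the aggregate, and the noise scale (noise_multiplier here) trades privacy guarantees against the model accuracy the segmentation comparison above measures.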
arXiv Detail & Related papers (2021-07-06T12:57:32Z) - Learning summary features of time series for likelihood free inference [93.08098361687722]
We present a data-driven strategy for automatically learning summary features from time series data.
Our results indicate that learning summary features from data can compete and even outperform LFI methods based on hand-crafted values.
arXiv Detail & Related papers (2020-12-04T19:21:37Z) - Reliability and Performance Assessment of Federated Learning on Clinical Benchmark Data [0.0]
Federated learning (FL) has been suggested to protect personal privacy because it does not centralize data during the training phase.
In this study, we assessed the reliability and performance of FL on benchmark datasets including MNIST and MIMIC-III.
arXiv Detail & Related papers (2020-05-24T14:36:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.