Fairness and Privacy in Federated Learning and Their Implications in
Healthcare
- URL: http://arxiv.org/abs/2308.07805v1
- Date: Tue, 15 Aug 2023 14:32:16 GMT
- Title: Fairness and Privacy in Federated Learning and Their Implications in
Healthcare
- Authors: Navya Annapareddy, Jade Preston, Judy Fox
- Abstract summary: This paper endeavors to outline the typical lifecycle of fair federated learning in research as well as provide an updated taxonomy to account for the current state of fairness in implementations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Currently, many contexts exist where distributed learning is difficult or
otherwise constrained by security and communication limitations. One common
domain where this is a consideration is healthcare, where data use is often
governed by regulations such as HIPAA. On the other hand, larger sample sizes
and shared data models are necessary for models to generalize better, since
pooled data captures more variability and helps balance underrepresented
classes. Federated learning is a type of distributed learning model that allows
models to be trained on data in a decentralized manner. This, in turn,
addresses data security, privacy, and vulnerability considerations, as the data
itself is never shared across the nodes of a given learning network. Three main
challenges to federated learning are that node data are not independent and
identically distributed (IID), that clients require high levels of
communication overhead between peers, and that clients within a network are
heterogeneous with respect to dataset bias and size. As the field has grown, the
notion of fairness in federated learning has also been introduced through novel
implementations. Fairness approaches differ from the standard form of federated
learning and also have distinct challenges and considerations for the
healthcare domain. This paper endeavors to outline the typical lifecycle of
fair federated learning in research as well as provide an updated taxonomy to
account for the current state of fairness in implementations. Lastly, this
paper provides added insight into the implications and challenges of
implementing and supporting fairness in federated learning in the healthcare
domain.
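As a rough illustration of the training pattern the abstract describes (local training on private data, with only model parameters shared for aggregation), below is a minimal FedAvg-style sketch in Python. The toy linear model, client shards, and function names are hypothetical and not taken from the paper.

```python
# Minimal sketch of one federated averaging (FedAvg) round: each node trains
# locally on its own data and only the resulting model parameters -- never the
# raw records -- are sent back for aggregation. Illustrative only; the toy
# least-squares model and `local_update` helper are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few epochs of least-squares gradient descent on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three hypothetical hospitals, each holding a private (X, y) shard of different size.
clients = [(rng.normal(size=(n, 3)), rng.normal(size=n)) for n in (40, 25, 10)]

global_w = np.zeros(3)
for round_idx in range(20):
    local_ws, sizes = [], []
    for X, y in clients:                      # training stays on the client
        local_ws.append(local_update(global_w, X, y))
        sizes.append(len(y))
    # Server aggregates parameters weighted by client sample counts.
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("aggregated weights:", global_w)
```

Weighting the aggregate by sample counts is the standard FedAvg choice; the fairness-oriented methods surveyed in the paper typically replace or adjust exactly this weighting step.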
Related papers
- Generalizable Heterogeneous Federated Cross-Correlation and Instance
Similarity Learning [60.058083574671834]
This paper presents FCCL+, a novel federated correlation and similarity learning method with non-target distillation.
To address client heterogeneity, it leverages irrelevant unlabeled public data for communication.
To counter catastrophic forgetting in the local updating stage, FCCL+ introduces Federated Non-Target Distillation.
arXiv Detail & Related papers (2023-09-28T09:32:27Z)
- Benchmarking FedAvg and FedCurv for Image Classification Tasks [1.376408511310322]
This paper focuses on the problem of statistical heterogeneity of the data in the same federated network.
Several Federated Learning algorithms, such as FedAvg, FedProx and Federated Curvature (FedCurv) have already been proposed.
As a side product of this work, we release non-IID versions of the datasets we used, to facilitate further comparisons within the FL community.
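For context on the non-IID splits this entry mentions, the following is a hypothetical sketch of one common way such splits are constructed, using a Dirichlet prior over label proportions; this is an illustrative convention, not necessarily the scheme used for the released datasets.

```python
# Hypothetical sketch of building a label-skewed (non-IID) federated split with
# a Dirichlet prior -- a common convention in FL benchmarks, not necessarily the
# exact scheme of the paper above. Smaller alpha => more skew per client.
import numpy as np

def dirichlet_partition(labels, n_clients=5, alpha=0.3, seed=0):
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Split this class across clients according to Dirichlet proportions.
        proportions = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, chunk in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(chunk.tolist())
    return client_indices

labels = np.random.default_rng(1).integers(0, 10, size=1000)  # toy label vector
parts = dirichlet_partition(labels)
print([len(p) for p in parts])  # uneven, label-skewed client shards
```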
arXiv Detail & Related papers (2023-03-31T10:13:01Z)
- A Survey on Class Imbalance in Federated Learning [6.632451878730774]
Federated learning allows multiple client devices in a network to jointly train a machine learning model without direct exposure of clients' data.
It has been found that models trained with federated learning usually have worse performance than their counterparts trained in the standard centralized learning mode.
arXiv Detail & Related papers (2023-03-21T08:34:23Z)
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach which enables a shared server model to learn by aggregating the locally-computed parameter updates with the training data from spatially-distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as healthcare, computer vision, and the Internet of Things (IoT).
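A rough sketch of the weighted-geometric-mean aggregation idea named in this entry follows. The sign-agreement mask and the per-client weights are simplifying assumptions, and the gradient-covariance and Hessian terms from the abstract are omitted, so this is not FedILC's actual algorithm.

```python
# Rough sketch of an element-wise weighted geometric mean of client gradients,
# loosely inspired by the FedILC summary above. Assumption: a sign-agreement
# mask stands in for the paper's consistency machinery; covariance/Hessian
# terms are omitted entirely.
import numpy as np

def weighted_geometric_mean(grads, weights, eps=1e-12):
    """grads: list of same-shape gradient arrays; weights: per-client weights summing to 1."""
    grads = np.stack(grads)                          # (n_clients, ...)
    weights = np.asarray(weights).reshape(-1, *([1] * (grads.ndim - 1)))
    # Geometric mean acts on magnitudes: exp(sum_i w_i * log|g_i|).
    log_mag = np.sum(weights * np.log(np.abs(grads) + eps), axis=0)
    magnitude = np.exp(log_mag)
    # Keep only coordinates where every client agrees on the gradient's sign.
    signs = np.sign(grads)
    agree = np.all(signs == signs[0], axis=0)
    return np.where(agree, signs[0] * magnitude, 0.0)

client_grads = [np.array([0.2, -0.5, 0.1]), np.array([0.4, -0.3, -0.2])]
print(weighted_geometric_mean(client_grads, weights=[0.6, 0.4]))
```

Compared with an arithmetic mean, the geometric mean is dominated by the smallest per-client magnitude, which is why it is often used to favor directions on which all silos agree.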
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
- When Accuracy Meets Privacy: Two-Stage Federated Transfer Learning Framework in Classification of Medical Images on Limited Data: A COVID-19 Case Study [77.34726150561087]
The COVID-19 pandemic spread rapidly and caused a shortage of global medical resources.
CNNs have been widely used and validated for analyzing medical images.
arXiv Detail & Related papers (2022-03-24T02:09:41Z)
- Practical Challenges in Differentially-Private Federated Survival Analysis of Medical Data [57.19441629270029]
In this paper, we take advantage of the inherent properties of neural networks to federate the process of training of survival analysis models.
In the realistic setting of small medical datasets and only a few data centers, the noise added for differential privacy makes it harder for the models to converge.
We propose DPFed-post which adds a post-processing stage to the private federated learning scheme.
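Below is a minimal sketch of the general pattern this entry describes: clipping and noising aggregated client updates, then applying a post-processing step. The specific post-processing shown (averaging the global model over recent rounds) is an assumption for illustration and is not claimed to match DPFed-post.

```python
# Minimal sketch of differentially-private federated aggregation followed by a
# post-processing stage. The smoothing step is an illustrative assumption, not
# necessarily what DPFed-post actually does; post-processing of released models
# cannot weaken the DP guarantee.
import numpy as np

rng = np.random.default_rng(0)

def dp_aggregate(updates, clip_norm=1.0, noise_mult=0.8):
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))   # clip each client update
    mean_update = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(updates), size=mean_update.shape)
    return mean_update + noise                                     # noisy aggregate released to clients

global_w = np.zeros(4)
history = []
for rnd in range(30):
    # Hypothetical client updates (in practice: local training deltas).
    updates = [rng.normal(scale=0.1, size=4) + 0.05 for _ in range(3)]
    global_w = global_w + dp_aggregate(updates)
    history.append(global_w.copy())

# Post-processing stage: smooth the noisy trajectory by averaging recent rounds.
final_w = np.mean(history[-5:], axis=0)
print(final_w)
```

With only a few data centers, the per-round noise term shrinks slowly, which is exactly the convergence difficulty the entry points out.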
arXiv Detail & Related papers (2022-02-08T10:03:24Z)
- FairFed: Enabling Group Fairness in Federated Learning [22.913999279079878]
Federated learning has been viewed as a promising solution for learning machine learning models among multiple parties.
We propose FairFed, a novel algorithm to enhance group fairness via a fairness-aware aggregation method.
Our proposed method outperforms state-of-the-art fair federated learning frameworks under highly heterogeneous sensitive attribute distributions.
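As a toy illustration of fairness-aware aggregation in the spirit of this entry, the sketch below re-weights clients by how far their local demographic-parity gap sits from a global estimate. The metric, weighting rule, and all data are assumptions for illustration, not FairFed's published update.

```python
# Toy sketch of fairness-aware aggregation: clients whose local group-fairness
# gap is close to the global gap receive larger aggregation weights. The
# exponential weighting rule is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

def demographic_parity_gap(y_pred, sensitive):
    """|P(y_hat=1 | s=0) - P(y_hat=1 | s=1)| measured on one client's predictions."""
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

# Hypothetical per-client binary predictions and sensitive attributes.
clients = [(rng.integers(0, 2, n), rng.integers(0, 2, n)) for n in (100, 80, 40)]

local_gaps = np.array([demographic_parity_gap(p, s) for p, s in clients])
global_gap = local_gaps.mean()           # stand-in for the gap of the global model

# Down-weight clients whose local fairness deviates most from the global metric.
sizes = np.array([100.0, 80.0, 40.0])
beta = 5.0
w = sizes * np.exp(-beta * np.abs(local_gaps - global_gap))
weights = w / w.sum()
print(weights)                           # used in place of plain sample-count weights
```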
arXiv Detail & Related papers (2021-10-02T17:55:20Z)
- FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space [63.43592895652803]
Federated learning allows distributed medical institutions to collaboratively learn a shared prediction model with privacy protection.
At clinical deployment, however, models trained with federated learning can still suffer a performance drop when applied to completely unseen hospitals outside the federation.
We present a novel approach, named as Episodic Learning in Continuous Frequency Space (ELCFS), for this problem.
The effectiveness of our method is demonstrated by superior performance over state-of-the-art methods and in-depth ablation experiments on two medical image segmentation tasks.
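To give a high-level feel for the frequency-space idea behind this entry, the sketch below mixes the low-frequency amplitude spectrum of one client's image into another's while keeping the phase, shifting appearance without changing structure. The continuous interpolation and episodic meta-learning components of ELCFS are omitted; this is not the paper's implementation.

```python
# Rough sketch of amplitude-spectrum mixing between two clients' images: blend
# the centered low-frequency amplitudes while preserving phase. Window size,
# blending weight, and random test images are illustrative assumptions.
import numpy as np

def amplitude_mix(img_src, img_ref, lam=0.5, band=0.1):
    """Blend the low-frequency amplitude of img_ref into img_src (2D grayscale arrays)."""
    f_src = np.fft.fftshift(np.fft.fft2(img_src))
    f_ref = np.fft.fftshift(np.fft.fft2(img_ref))
    amp_src, phase_src = np.abs(f_src), np.angle(f_src)
    amp_ref = np.abs(f_ref)

    h, w = img_src.shape
    bh, bw = int(h * band), int(w * band)
    ch, cw = h // 2, w // 2
    # Interpolate amplitudes only inside a small low-frequency window.
    amp_mixed = amp_src.copy()
    amp_mixed[ch - bh:ch + bh, cw - bw:cw + bw] = (
        (1 - lam) * amp_src[ch - bh:ch + bh, cw - bw:cw + bw]
        + lam * amp_ref[ch - bh:ch + bh, cw - bw:cw + bw]
    )
    mixed = amp_mixed * np.exp(1j * phase_src)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

rng = np.random.default_rng(0)
a, b = rng.random((64, 64)), rng.random((64, 64))   # stand-ins for two clients' scans
print(amplitude_mix(a, b).shape)
```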
arXiv Detail & Related papers (2021-03-10T13:05:23Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)