UniFed: A Universal Federation of a Mixture of Highly Heterogeneous Medical Image Classification Tasks
- URL: http://arxiv.org/abs/2408.07075v2
- Date: Thu, 15 Aug 2024 18:27:45 GMT
- Title: UniFed: A Universal Federation of a Mixture of Highly Heterogeneous Medical Image Classification Tasks
- Authors: Atefe Hassani, Islem Rekik
- Abstract summary: We introduce UniFed, a universal federated learning paradigm that aims to classify any disease from any imaging modality.
Specifically, by dynamically adjusting both local and global models, UniFed considers the varying task complexities of clients and the server.
We demonstrate the superiority of our framework in terms of accuracy, communication cost, and convergence time over relevant benchmarks in diagnosing retina, histopathology, and liver tumour diseases.
- Score: 5.563171090433323
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A fundamental challenge in federated learning lies in mixing heterogeneous datasets and classification tasks while minimizing the high communication cost caused by clients as well as the exchange of weight updates with the server over a fixed number of rounds. This results in divergent model convergence rates and performance, which may hinder their deployment in precision medicine. In real-world scenarios, client data is collected from different hospitals with extremely varying components (e.g., imaging modality, organ type, etc). Previous studies often overlooked the convoluted heterogeneity during the training stage where the target learning tasks vary across clients as well as the dataset type and their distributions. To address such limitations, we unprecedentedly introduce UniFed, a universal federated learning paradigm that aims to classify any disease from any imaging modality. UniFed also handles the issue of varying convergence times in the client-specific optimization based on the complexity of their learning tasks. Specifically, by dynamically adjusting both local and global models, UniFed considers the varying task complexities of clients and the server, enhancing its adaptability to real-world scenarios, thereby mitigating issues related to overtraining and excessive communication. Furthermore, our framework incorporates a sequential model transfer mechanism that takes into account the diverse tasks among hospitals and a dynamic task-complexity based ordering. We demonstrate the superiority of our framework in terms of accuracy, communication cost, and convergence time over relevant benchmarks in diagnosing retina, histopathology, and liver tumour diseases under federated learning. Our UniFed code is available at https://github.com/basiralab/UniFed.
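The sequential, complexity-ordered transfer described in the abstract is concrete enough to sketch. Below is a minimal Python sketch of one plausible reading: clients are re-ranked each round by a task-complexity proxy (here, validation loss), the model is handed from one client to the next, and local training stops early once improvement stalls. Every name in it is hypothetical; the authoritative implementation is the linked repository.

```python
# Hypothetical sketch of complexity-ordered sequential transfer; not the
# authors' code. See https://github.com/basiralab/UniFed for the real method.
import numpy as np

def estimate_complexity(client):
    # Hypothetical proxy for task complexity: current validation loss.
    return client["val_loss"]

def local_update(weights, client, max_epochs=5, tol=1e-3):
    # Train locally, stopping early once improvement stalls, which is one way
    # to mitigate the overtraining the abstract mentions.
    prev = np.inf
    for _ in range(max_epochs):
        weights = weights - 0.1 * client["grad_fn"](weights)
        loss = client["loss_fn"](weights)
        if prev - loss < tol:
            break
        prev = loss
    client["val_loss"] = loss
    return weights

def unifed_round(weights, clients):
    # Re-order clients every round by estimated task complexity, then pass the
    # model sequentially from one client to the next (easiest-first is an
    # assumption; the paper's dynamic ordering may differ).
    for client in sorted(clients, key=estimate_complexity):
        weights = local_update(weights, client)
    return weights

# Toy usage with quadratic losses standing in for client tasks.
rng = np.random.default_rng(0)
def make_client(target):
    return {
        "val_loss": np.inf,
        "loss_fn": lambda w, t=target: float(np.sum((w - t) ** 2)),
        "grad_fn": lambda w, t=target: 2 * (w - t),
    }
clients = [make_client(rng.normal(size=3)) for _ in range(3)]
w = np.zeros(3)
for _ in range(5):
    w = unifed_round(w, clients)
```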
Related papers
- Task-Agnostic Federated Learning [4.041327615026293]
This study addresses the task-agnostic and generalization problem on unseen tasks by adapting a self-supervised FL framework.
Utilizing a Vision Transformer (ViT) as a consensus feature encoder for self-supervised pre-training, with no initial labels required, the framework enables effective representation learning across diverse datasets and tasks.
arXiv Detail & Related papers (2024-06-25T02:53:37Z)
- FedHCA$^2$: Towards Hetero-Client Federated Multi-Task Learning [18.601886059536326]
Federated Learning (FL) enables joint training across distributed clients while keeping their local data private.
We introduce a novel problem setting, Hetero-Client Federated Multi-Task Learning (HC-FMTL), to accommodate diverse task setups.
We propose the FedHCA$^2$ framework, which allows for federated training of personalized models by modeling relationships among heterogeneous clients.
arXiv Detail & Related papers (2023-11-22T09:12:50Z)
- Federated Meta-Learning for Few-Shot Fault Diagnosis with Representation Encoding [21.76802204235636]
We propose representation encoding-based federated meta-learning (REFML) for few-shot fault diagnosis.
REFML harnesses the inherent heterogeneity among training clients, effectively transforming it into an advantage for out-of-distribution generalization.
It achieves an increase in accuracy by 2.17%-6.50% when tested on unseen working conditions of the same equipment type and 13.44%-18.33% when tested on totally unseen equipment types.
arXiv Detail & Related papers (2023-10-13T10:48:28Z)
- Neural Collapse Inspired Federated Learning with Non-iid Data [31.576588815816095]
Non-independent and identically distributed (non-iid) characteristics cause significant differences in local updates and affect the performance of the central server.
Inspired by the phenomenon of neural collapse, we force each client to be optimized toward an optimal global structure for classification.
Our method improves performance with faster convergence on datasets of different sizes.
arXiv Detail & Related papers (2023-03-27T05:29:53Z)
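The "optimal global structure" invoked by neural collapse is the simplex equiangular tight frame (ETF): class prototypes of equal norm whose pairwise cosine is the most negative possible, -1/(C-1). One common way to exploit it, which is an assumption about this paper's exact recipe, is to fix every client's classifier to the same ETF so local features are pulled toward one shared geometry:

```python
# Standard simplex-ETF construction; whether the paper uses exactly this
# recipe is an assumption.
import numpy as np

def simplex_etf(num_classes: int, feat_dim: int) -> np.ndarray:
    # Partial orthogonal matrix U (feat_dim x num_classes), feat_dim >= num_classes.
    rng = np.random.default_rng(0)
    u, _ = np.linalg.qr(rng.normal(size=(feat_dim, num_classes)))
    center = np.eye(num_classes) - np.ones((num_classes, num_classes)) / num_classes
    return np.sqrt(num_classes / (num_classes - 1)) * u @ center  # columns = prototypes

W = simplex_etf(num_classes=10, feat_dim=64)
# Every pair of prototypes has the same cosine similarity -1/(C-1):
norms = np.linalg.norm(W, axis=0)
cos = (W.T @ W) / (norms[:, None] * norms[None, :])
print(np.round(cos[0, 1], 3))  # ~ -0.111 for C = 10
```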
- Achieving Fairness in Dermatological Disease Diagnosis through Automatic Weight Adjusting Federated Learning and Personalization [15.276768990910337]
Dermatological diseases pose a major threat to global health, affecting almost one-third of the world's population.
This paper proposes a fairness-aware federated learning framework for dermatological disease diagnosis.
Experiments indicate that our proposed framework effectively improves both fairness and accuracy compared with the state-of-the-art.
arXiv Detail & Related papers (2022-08-23T20:44:09Z)
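The summary names its mechanism only at a high level, so the following is a generic, hypothetical illustration of automatic weight adjusting rather than the paper's actual rule: clients whose validation accuracy lags receive a larger aggregation weight in the next round.

```python
# Generic fairness-aware reweighting sketch; not the paper's rule.
import numpy as np

def fairness_weights(accuracies, temperature=0.1):
    acc = np.asarray(accuracies, dtype=float)
    logits = (acc.max() - acc) / temperature  # larger accuracy gap -> larger weight
    w = np.exp(logits - logits.max())
    return w / w.sum()

print(fairness_weights([0.92, 0.80, 0.88]))  # the lagging client gets the most weight
```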
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two key hurdles: straggling clients and data heterogeneity.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach which enables a shared server model to learn by aggregating the parameter updates computed locally on the training data of spatially distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as healthcare, computer vision, and the Internet of Things (IoT).
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
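The geometric-mean idea from the invariant-learning literature can be sketched concretely: aggregate per-silo gradients with an elementwise geometric mean of magnitudes, keeping only components whose signs agree across silos. The sign-agreement mask, the unweighted mean, and the omission of FedILC's Hessian term are all simplifications here.

```python
# Simplified geometric-mean gradient aggregation; FedILC's exact weighting
# and Hessian term are omitted.
import numpy as np

def geometric_mean_gradient(grads, eps=1e-12):
    g = np.stack(grads)                              # (num_silos, num_params)
    signs = np.sign(g)
    agree = np.abs(signs.sum(axis=0)) == len(grads)  # unanimous sign only
    geo = np.exp(np.mean(np.log(np.abs(g) + eps), axis=0))
    return np.where(agree, signs[0] * geo, 0.0)

g1 = np.array([0.5, -0.2, 0.1])
g2 = np.array([2.0, 0.3, 0.4])
print(geometric_mean_gradient([g1, g2]))  # [1.0, 0.0, 0.2]
```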
- FedGradNorm: Personalized Federated Gradient-Normalized Multi-Task Learning [50.756991828015316]
Multi-task learning (MTL) is a novel framework to learn several tasks simultaneously with a single shared network.
We propose FedGradNorm, which uses a dynamic-weighting method to normalize gradient norms in order to balance learning speeds among different tasks.
arXiv Detail & Related papers (2022-03-24T17:43:12Z)
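The gradient-norm balancing step follows the GradNorm recipe: tasks whose loss has decayed least get a larger target gradient norm, and task weights move toward those targets. The federated and personalization machinery of FedGradNorm is omitted, and the update below is a crude sign-based step rather than the exact gradient.

```python
# GradNorm-style weight balancing; a simplified stand-in, not FedGradNorm itself.
import numpy as np

def gradnorm_step(task_weights, grad_norms, losses, init_losses, alpha=1.5, lr=0.025):
    w = np.asarray(task_weights, dtype=float)
    gn = w * np.asarray(grad_norms, dtype=float)    # weighted per-task gradient norms
    inv_rate = np.asarray(losses, dtype=float) / np.asarray(init_losses, dtype=float)
    r = inv_rate / inv_rate.mean()                  # relative inverse training rate
    target = gn.mean() * (r ** alpha)               # slower tasks get larger targets
    w = w - lr * np.sign(gn - target)               # crude step toward the targets
    w = np.clip(w, 1e-3, None)
    return w * len(w) / w.sum()                     # keep weights summing to num_tasks

w = gradnorm_step([1.0, 1.0], grad_norms=[5.0, 1.0], losses=[0.5, 0.9], init_losses=[1.0, 1.0])
print(w)  # weight shifts toward the slower-learning second task
```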
- FedDG: Federated Domain Generalization on Medical Image Segmentation via Episodic Learning in Continuous Frequency Space [63.43592895652803]
Federated learning allows distributed medical institutions to collaboratively learn a shared prediction model with privacy protection.
At clinical deployment, however, models trained in federated learning can still suffer a performance drop when applied to completely unseen hospitals outside the federation.
We present a novel approach, named Episodic Learning in Continuous Frequency Space (ELCFS), for this problem.
The effectiveness of our method is demonstrated with superior performance over state-of-the-art methods and in-depth ablation experiments on two medical image segmentation tasks.
arXiv Detail & Related papers (2021-03-10T13:05:23Z)
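The continuous frequency-space exchange behind this line of work is well documented: each client interpolates the low-frequency amplitude spectrum of a local image toward that of an image from another client while keeping the local phase. A numpy sketch, with band size and mixing ratio as illustrative choices:

```python
# Amplitude-spectrum interpolation sketch; band and lam are illustrative.
import numpy as np

def freq_interpolate(local_img, foreign_img, lam=0.5, band=0.1):
    f_local = np.fft.fft2(local_img)
    f_foreign = np.fft.fft2(foreign_img)
    amp_l, phase = np.abs(f_local), np.angle(f_local)
    amp_f = np.abs(f_foreign)
    # Mix only a centered low-frequency band of the (shifted) amplitude spectrum.
    amp_l = np.fft.fftshift(amp_l)
    amp_f = np.fft.fftshift(amp_f)
    h, w = local_img.shape
    bh, bw = int(h * band), int(w * band)
    ch, cw = h // 2, w // 2
    amp_l[ch - bh:ch + bh, cw - bw:cw + bw] = (
        (1 - lam) * amp_l[ch - bh:ch + bh, cw - bw:cw + bw]
        + lam * amp_f[ch - bh:ch + bh, cw - bw:cw + bw]
    )
    amp_l = np.fft.ifftshift(amp_l)
    return np.real(np.fft.ifft2(amp_l * np.exp(1j * phase)))

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
print(freq_interpolate(a, b).shape)  # (64, 64)
```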
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
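The alternating scheme in this summary translates directly into code: many cheap head updates per round, one update of the shared representation, and server averaging of representations only. The linear model and step sizes below are illustrative, not the paper's setup.

```python
# Shared-representation / local-head sketch with a toy linear model.
import numpy as np

def client_round(rep, head, X, y, head_steps=10, lr=0.05):
    # Linear model y_hat = (X @ rep) @ head with a squared loss, for brevity.
    for _ in range(head_steps):                    # many cheap head updates
        err = X @ rep @ head - y
        head = head - lr * rep.T @ X.T @ err / len(y)
    err = X @ rep @ head - y                       # one representation update
    rep = rep - lr * X.T @ np.outer(err, head) / len(y)
    return rep, head

def server_round(rep, clients):
    reps, heads = zip(*[client_round(rep, h, X, y) for (h, X, y) in clients])
    return np.mean(reps, axis=0), list(heads)      # average representations only

rng = np.random.default_rng(1)
rep = rng.normal(size=(5, 2))                      # shared d x k representation
clients = [(np.zeros(2), rng.normal(size=(20, 5)), rng.normal(size=20))
           for _ in range(3)]
rep, heads = server_round(rep, clients)            # heads stay local per client
```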
- Federated Continual Learning with Weighted Inter-client Transfer [79.93004004545736]
We propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT).
FedWeIT decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients.
We validate our FedWeIT against existing federated learning and continual learning methods, and our model significantly outperforms them with a large reduction in the communication cost.
arXiv Detail & Related papers (2020-03-06T13:33:48Z)
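The decomposition described above can be written in one line: a client's task weights are a masked global base plus its own sparse task-adaptive term plus an attention-weighted sum of other clients' task-adaptive terms. The attention values and sparsity pattern below are placeholders.

```python
# FedWeIT-style weight composition; attention values and mask are placeholders.
import numpy as np

def compose_weights(base, mask, task_adaptive, foreign_adaptives, attention):
    # theta = B * m + A + sum_j alpha_j * A_j, with an elementwise mask m.
    theta = base * mask + task_adaptive
    for alpha, a_j in zip(attention, foreign_adaptives):
        theta = theta + alpha * a_j
    return theta

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))                   # global federated parameters B
mask = (rng.random((4, 4)) > 0.5).astype(float)  # sparse(ish) mask m
local_a = 0.01 * rng.normal(size=(4, 4))         # this client's task-adaptive A
foreign = [0.01 * rng.normal(size=(4, 4))]       # another client's A_j
theta = compose_weights(base, mask, local_a, foreign, attention=[0.3])
```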