Client-wise Modality Selection for Balanced Multi-modal Federated
Learning
- URL: http://arxiv.org/abs/2401.00403v1
- Date: Sun, 31 Dec 2023 05:37:27 GMT
- Title: Client-wise Modality Selection for Balanced Multi-modal Federated
Learning
- Authors: Yunfeng Fan, Wenchao Xu, Haozhao Wang, Penghui Ruan and Song Guo
- Abstract summary: Existing client selection methods consider only the variability among FL clients with uni-modal data.
The traditional client selection scheme in MFL may suffer from a severe modality-level bias that impedes the collaborative exploitation of multi-modal data.
We propose a Client-wise Modality Selection scheme for MFL (CMSFed) that can comprehensively utilize information from each modality.
- Score: 18.390448116936753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Selecting proper clients to participate in the iterative federated learning
(FL) rounds is critical to effectively harness a broad range of distributed
datasets. Existing client selection methods simply consider the variability
among FL clients with uni-modal data but have yet to consider clients with
multiple modalities. We reveal that the traditional client selection scheme in
MFL may suffer from a severe modality-level bias, which impedes the
collaborative exploitation of multi-modal data, leading to insufficient local
data exploration and global aggregation. To tackle this challenge, we propose a
Client-wise Modality Selection scheme for MFL (CMSFed) that can comprehensively
utilize information from each modality by avoiding the client selection bias
caused by modality imbalance. Specifically, in each MFL round, the local data
from different modalities are selectively employed to participate in local
training and aggregation to mitigate potential modality imbalance of the global
model. To approximate the fully aggregated model update in a balanced way, we
introduce a novel local training loss function to enhance the weak modality and
align the divergent feature spaces caused by inconsistent modality adoption
strategies for different clients simultaneously. Then, a modality-level
gradient decoupling method is designed to derive respective submodular
functions to maintain gradient diversity during the selection process and
balance MFL according to local modality imbalance in each iteration. Our
extensive experiments showcase the superiority of CMSFed over baselines and its
effectiveness in multi-modal data exploitation.
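The abstract mentions deriving submodular functions to preserve gradient diversity during client selection, but does not specify them. As a minimal sketch only, here is greedy selection under a standard facility-location submodular objective over client gradient sketches; all names and the toy data are hypothetical, not CMSFed's actual formulation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two gradient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def facility_location(selected, grads):
    """Submodular coverage score: how well `selected` represents all clients."""
    return sum(max(cosine(g, grads[j]) for j in selected) for g in grads)

def greedy_select(grads, k):
    """Greedily pick k clients maximizing the submodular coverage function.

    The (1 - 1/e) approximation guarantee of greedy maximization is what
    makes submodular objectives attractive for client selection.
    """
    selected = []
    while len(selected) < k:
        best = max((c for c in range(len(grads)) if c not in selected),
                   key=lambda c: facility_location(selected + [c], grads))
        selected.append(best)
    return selected

# Toy example: 4 clients whose gradients fall into two directions
# (e.g., dominated by two different modalities).
grads = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
selection = greedy_select(grads, 2)
```

Because the objective rewards coverage, the greedy pick spans both gradient directions rather than selecting two near-duplicate clients, which mirrors the diversity-preserving intent described above.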
Related papers
- SFedCA: Credit Assignment-Based Active Client Selection Strategy for Spiking Federated Learning [15.256986486372407]
Spiking federated learning allows resource-constrained devices to train collaboratively at low power consumption without exchanging local data.
Existing spiking federated learning methods employ a random selection approach for client aggregation, assuming unbiased client participation.
We propose a credit assignment-based active client selection strategy, SFedCA, to judiciously aggregate clients that contribute to balancing the global sample distribution.
arXiv Detail & Related papers (2024-06-18T01:56:22Z) - FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH(Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks under extensive sources of heterogeneity, achieving substantial and consistent improvements over the baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z) - Communication-Efficient Multimodal Federated Learning: Joint Modality
and Client Selection [14.261582708240407]
Multimodal federated learning (FL) aims to enrich model training in FL settings where clients collect measurements across multiple modalities.
Key challenges to multimodal FL remain unaddressed, particularly in heterogeneous network settings.
We propose mmFedMC, a new FL methodology that can tackle the above-mentioned challenges in multimodal settings.
arXiv Detail & Related papers (2024-01-30T02:16:19Z) - Cross-Modal Prototype based Multimodal Federated Learning under Severely
Missing Modality [31.727012729846333]
Multimodal Federated Cross Prototype Learning (MFCPL) is a novel approach for MFL under severely missing modalities.
MFCPL provides diverse modality knowledge at the modality-shared level via cross-modal regularization and at the modality-specific level via a cross-modal contrastive mechanism.
Our approach introduces cross-modal alignment to regularize modality-specific features, thereby enhancing overall performance.
arXiv Detail & Related papers (2024-01-25T02:25:23Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
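FedGMM's full algorithm is not reproduced in this summary. As a hedged sketch of the core idea only, assume a globally shared one-dimensional GMM: a client can compute its posterior responsibilities over the mixture components for a data point and mix component-specific models accordingly (all function names here are illustrative, not the paper's API):

```python
import math

def gaussian_pdf(x, mean, var):
    """Density of a univariate Gaussian at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def responsibilities(x, weights, means, variances):
    """Posterior probability of each mixture component given data point x."""
    joint = [w * gaussian_pdf(x, m, v)
             for w, m, v in zip(weights, means, variances)]
    total = sum(joint)
    return [j / total for j in joint]

def personalized_prediction(x, weights, means, variances, component_models):
    """Mix component-specific model outputs by the posterior over components.

    `component_models` is a list of callables, one per mixture component;
    the soft assignment is what lets a new client adapt cheaply: it only
    needs its responsibilities, not a freshly trained model.
    """
    r = responsibilities(x, weights, means, variances)
    return sum(ri * f(x) for ri, f in zip(r, component_models))
```

A point drawn near one component's mean receives nearly all of that component's responsibility, so the personalized prediction is dominated by the matching component model.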
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - Multimodal Federated Learning via Contrastive Representation Ensemble [17.08211358391482]
Federated learning (FL) serves as a privacy-conscious alternative to centralized machine learning.
Existing FL methods all rely on model aggregation at the single-modality level.
We propose Contrastive Representation Ensemble and Aggregation for Multimodal FL (CreamFL).
arXiv Detail & Related papers (2023-02-17T14:17:44Z) - Integrating Local Real Data with Global Gradient Prototypes for
Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
The data samples usually follow a long-tailed distribution in the real world, and FL on the decentralized and long-tailed data yields a poorly-behaved global model.
In this work, we integrate the local real data with the global gradient prototypes to form the local balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z) - Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated
Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate superiority of FedVRA over the existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z) - FedDC: Federated Learning with Non-IID Data via Local Drift Decoupling
and Correction [48.85303253333453]
Federated learning (FL) allows multiple clients to collectively train a high-performance global model without sharing their private data.
We propose a novel federated learning algorithm with local drift decoupling and correction (FedDC).
Our FedDC only introduces lightweight modifications in the local training phase, in which each client utilizes an auxiliary local drift variable to track the gap between the local model parameter and the global model parameters.
Experiment results and analysis demonstrate that FedDC yields faster convergence and better performance on various image classification tasks.
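The summary above describes an auxiliary drift variable tracking the gap between local and global parameters. A minimal sketch of that mechanism, assuming plain SGD and a quadratic drift-correction penalty (the exact FedDC objective and hyperparameters are not given here, so `alpha` and the update shapes are illustrative):

```python
def local_update(local_params, global_params, drift, grads, lr=0.1, alpha=0.01):
    """One local step: task gradient plus a penalty pulling the drifted
    local parameters (local + accumulated drift) back toward the global ones."""
    new_params = []
    for w, wg, h, g in zip(local_params, global_params, drift, grads):
        correction = alpha * (w + h - wg)  # gradient of 0.5*alpha*(w + h - wg)^2
        new_params.append(w - lr * (g + correction))
    return new_params

def update_drift(drift, params_before, params_after):
    """Accumulate the drift variable with this round's local parameter change,
    so it keeps tracking the local-vs-global gap across rounds."""
    return [h + (wa - wb)
            for h, wb, wa in zip(drift, params_before, params_after)]
```

With a zero task gradient, the correction term alone pulls the local model toward the global parameters, which is the drift-correcting behavior the entry describes.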
arXiv Detail & Related papers (2022-03-22T14:06:26Z) - Federated Multi-Task Learning under a Mixture of Distributions [10.00087964926414]
Federated Learning (FL) is a framework for on-device collaborative training of machine learning models.
First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client.
We study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions.
arXiv Detail & Related papers (2021-08-23T15:47:53Z) - A Bayesian Federated Learning Framework with Online Laplace
Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
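A Laplace approximation gives each client a Gaussian posterior over its parameters. One natural server-side fusion, shown here only as a sketch of the standard product-of-Gaussians identity rather than the paper's exact procedure, combines them by precision weighting:

```python
def aggregate_gaussians(means, variances):
    """Fuse per-client univariate Gaussian posteriors by multiplication:
    the product's precision is the sum of client precisions, and its mean
    is the precision-weighted average of client means."""
    precisions = [1.0 / v for v in variances]
    total_precision = sum(precisions)
    mean = sum(p * m for p, m in zip(precisions, means)) / total_precision
    return mean, 1.0 / total_precision
```

Note how clients with sharper (higher-precision) posteriors dominate the fused mean, and the fused variance is always smaller than any single client's, reflecting the pooled evidence.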
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.