Mitigating Group-Level Fairness Disparities in Federated Visual Language Models
- URL: http://arxiv.org/abs/2505.01851v1
- Date: Sat, 03 May 2025 16:09:52 GMT
- Title: Mitigating Group-Level Fairness Disparities in Federated Visual Language Models
- Authors: Chaomeng Chen, Zitong Yu, Junhao Dong, Sen Su, Linlin Shen, Shutao Xia, Xiaochun Cao
- Abstract summary: This paper introduces FVL-FP, a novel framework that combines FL with fair prompt tuning techniques. We focus on mitigating demographic biases while preserving model performance. Our approach reduces demographic disparity by an average of 45% compared to standard FL approaches.
- Score: 115.16940773660104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual language models (VLMs) have shown remarkable capabilities in multimodal tasks but face challenges in maintaining fairness across demographic groups, particularly when deployed in federated learning (FL) environments. This paper addresses the critical issue of group fairness in federated VLMs by introducing FVL-FP, a novel framework that combines FL with fair prompt tuning techniques. We focus on mitigating demographic biases while preserving model performance through three innovative components: (1) Cross-Layer Demographic Fair Prompting (CDFP), which adjusts potentially biased embeddings through counterfactual regularization; (2) Demographic Subspace Orthogonal Projection (DSOP), which removes demographic bias in image representations by mapping fair prompt text to group subspaces; and (3) Fair-aware Prompt Fusion (FPF), which dynamically balances client contributions based on both performance and fairness metrics. Extensive evaluations across four benchmark datasets demonstrate that our approach reduces demographic disparity by an average of 45% compared to standard FL approaches, while maintaining task performance within 6% of state-of-the-art results. FVL-FP effectively addresses the challenges of non-IID data distributions in federated settings and introduces minimal computational overhead while providing significant fairness benefits. Our work presents a parameter-efficient solution to the critical challenge of ensuring equitable performance across demographic groups in privacy-preserving multimodal systems.
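The abstract names the three components but not their exact formulations; the PyTorch sketch below illustrates how two of them could plausibly look — an orthogonal projection that strips a demographic subspace out of image embeddings (DSOP-like) and a fairness-weighted fusion of client prompt updates (FPF-like). All function names, tensor shapes, and the accuracy/disparity mixing weight `alpha` are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def demographic_subspace_projection(image_emb, group_text_embs):
    """DSOP-style step (sketch): remove the component of an image embedding that
    lies in the subspace spanned by demographic-group text embeddings."""
    Q, _ = torch.linalg.qr(group_text_embs.T)              # orthonormal basis, shape (d, k)
    inside = Q @ (Q.T @ image_emb.unsqueeze(-1))           # component inside the group subspace
    return image_emb - inside.squeeze(-1)                  # keep only the orthogonal remainder

def fairness_aware_fusion(client_prompts, accuracies, disparities, alpha=0.5):
    """FPF-style aggregation (sketch): weight each client's prompt update by a mix
    of task performance and (inverse) demographic disparity."""
    scores = alpha * torch.tensor(accuracies) + (1 - alpha) * (1 - torch.tensor(disparities))
    weights = F.softmax(scores, dim=0)                     # accurate AND fair clients count more
    stacked = torch.stack(client_prompts)                  # (num_clients, prompt_len, dim)
    return (weights.view(-1, 1, 1) * stacked).sum(dim=0)   # fused global prompt
```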
Related papers
- pFedFair: Towards Optimal Group Fairness-Accuracy Trade-off in Heterogeneous Federated Learning [17.879602968559198]
Federated learning algorithms aim to maximize clients' accuracy by training a model on their collective data. Group fairness constraints can be incorporated into the objective function of the FL optimization problem. We show that such an approach would lead to suboptimal classification accuracy in an FL setting with heterogeneous client distributions.
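The summary does not state which fairness constraint is folded into the objective; a minimal sketch of one common choice, a demographic-parity penalty added to each client's local loss, is shown below (the penalty form and weight `lam` are assumptions).

```python
import torch
import torch.nn.functional as F

def fair_local_loss(logits, labels, groups, lam=1.0):
    """Client objective (sketch): task loss plus a demographic-parity-style penalty.
    `groups` is a 0/1 sensitive attribute per example (both groups assumed present);
    `lam` trades accuracy against fairness."""
    task_loss = F.cross_entropy(logits, labels)
    pos_prob = F.softmax(logits, dim=1)[:, 1]              # assumes binary classification
    gap = pos_prob[groups == 1].mean() - pos_prob[groups == 0].mean()
    return task_loss + lam * gap.abs()                     # penalize the group-level disparity
```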
arXiv Detail & Related papers (2025-03-19T06:15:31Z) - A Post-Processing-Based Fair Federated Learning Framework [8.978878439498365]
Federated Learning (FL) allows collaborative model training among distributed parties without pooling local datasets at a central server. We define and empirically analyze a simple and intuitive post-processing-based framework to improve group fairness in FL systems. Our work shows that this framework not only simplifies fairness implementation in FL but also provides significant fairness improvements with minimal accuracy loss or even accuracy gain.
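The post-processing rule itself is not spelled out in this summary; one hedged possibility is per-group decision thresholds tuned after federated training so that prediction rates match across groups, as in the sketch below (the equal-rate criterion and function names are assumptions).

```python
import numpy as np

def fit_group_thresholds(scores, groups, target_rate=0.5):
    """Post-processing sketch: pick a per-group threshold so every demographic group
    ends up with (roughly) the same positive prediction rate."""
    return {g: np.quantile(scores[groups == g], 1.0 - target_rate) for g in np.unique(groups)}

def post_processed_predict(scores, groups, thresholds):
    """Apply the group-specific thresholds to raw scores."""
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, groups)])
```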
arXiv Detail & Related papers (2025-01-25T20:05:27Z) - Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models. Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z) - Client Contribution Normalization for Enhanced Federated Learning [4.726250115737579]
Mobile devices, including smartphones and laptops, generate decentralized and heterogeneous data.
Federated Learning (FL) offers a promising alternative by enabling collaborative training of a global model across decentralized devices without data sharing.
This paper focuses on data-dependent heterogeneity in FL and proposes a novel approach leveraging mean latent representations extracted from locally trained models.
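The summary says only that mean latent representations from locally trained models are used to handle data-dependent heterogeneity; one speculative way to use them is to normalize each client's aggregation weight by how closely its mean representation agrees with the population mean, sketched below (this specific weighting rule is an assumption, not the paper's method).

```python
import torch
import torch.nn.functional as F

def normalized_client_weights(mean_latents, sample_counts):
    """Sketch: rescale sample-size aggregation weights by each client's agreement
    with the population-level mean latent representation."""
    latents = torch.stack(mean_latents)                                  # (num_clients, dim)
    counts = torch.tensor(sample_counts, dtype=torch.float)
    global_mean = (counts.view(-1, 1) * latents).sum(0) / counts.sum()   # weighted population mean
    agreement = F.cosine_similarity(latents, global_mean.unsqueeze(0), dim=1).clamp(min=0.0)
    weights = counts * agreement
    return weights / weights.sum()
```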
arXiv Detail & Related papers (2024-11-10T04:03:09Z) - Multi-dimensional Fair Federated Learning [25.07463977553212]
Federated learning (FL) has emerged as a promising collaborative and secure paradigm for training a model from decentralized data.
Group fairness and client fairness are two dimensions of fairness that are important for FL.
We propose a method, called mFairFL, to achieve group fairness and client fairness simultaneously.
arXiv Detail & Related papers (2023-12-09T11:37:30Z) - EASTER: Embedding Aggregation-based Heterogeneous Models Training in Vertical Federated Learning [50.30581967024277]
This paper proposes a novel approach called Vertical federated learning for training multiple Heterogeneous models (VFedMH). To protect the participants' local embedding values, we propose an embedding protection method based on lightweight blinding factors. Experiments are conducted to demonstrate that VFedMH can simultaneously train multiple heterogeneous models with heterogeneous optimization and outperform some recent methods in model performance.
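The blinding scheme is only named here; a minimal sketch of one standard lightweight option — pairwise additive masks derived from shared seeds that cancel when the aggregator sums all blinded embeddings — follows (the seed-pair protocol and sign convention are assumptions).

```python
import torch

def blind_embedding(embedding, pairwise_seeds, client_id):
    """Blinding sketch: add one pseudorandom mask per peer; masks cancel in the sum
    because the lower-id client adds what the higher-id client subtracts."""
    blinded = embedding.clone()
    for peer_id, seed in pairwise_seeds.items():
        gen = torch.Generator().manual_seed(seed)          # both peers derive the same mask
        mask = torch.randn(embedding.shape, generator=gen)
        blinded += mask if client_id < peer_id else -mask
    return blinded
```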
arXiv Detail & Related papers (2023-10-20T09:22:51Z) - Fairness-aware Federated Minimax Optimization with Convergence Guarantee [10.727328530242461]
Federated learning (FL) has garnered considerable attention due to its privacy-preserving feature.
The lack of freedom in managing user data can lead to group fairness issues, where models are biased towards sensitive factors such as race or gender.
This paper proposes a novel algorithm, fair federated averaging with augmented Lagrangian method (FFALM), designed explicitly to address group fairness issues in FL.
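The summary does not give FFALM's exact formulation; the sketch below shows the generic augmented-Lagrangian handling of a fairness constraint g(θ) ≤ 0 that the name suggests, with the penalty coefficient `rho` and dual step size `eta` as illustrative assumptions.

```python
import torch

def augmented_lagrangian_step(task_loss, fairness_gap, lam, rho=1.0, eta=0.01):
    """Sketch: L = task_loss + lam * g + (rho / 2) * g^2 for constraint g <= 0,
    followed by a projected dual-ascent update of the multiplier lam."""
    g = fairness_gap                                        # e.g. a demographic parity gap
    loss = task_loss + lam * g + 0.5 * rho * g ** 2
    new_lam = max(0.0, lam + eta * float(g.detach()))       # keep the multiplier non-negative
    return loss, new_lam
```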
arXiv Detail & Related papers (2023-07-10T08:45:58Z) - Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables is widespread in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z) - Heterogeneous Federated Learning via Grouped Sequential-to-Parallel Training [60.892342868936865]
Federated learning (FL) is a rapidly growing privacy-preserving collaborative machine learning paradigm.
We propose a data heterogeneous-robust FL approach, FedGSP, to address this challenge.
We show that FedGSP improves the accuracy by 3.7% on average compared with seven state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-31T03:15:28Z) - Feature Quantization Improves GAN Training [126.02828112121874]
Feature Quantization (FQ) for the discriminator embeds both true and fake data samples into a shared discrete space.
Our method can be easily plugged into existing GAN models, with little computational overhead in training.
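The summary describes embedding real and fake samples into a shared discrete space; a minimal sketch of such a quantization layer — a nearest-neighbour codebook with a straight-through gradient, with code count and feature size assumed — could look like this:

```python
import torch
import torch.nn as nn

class FeatureQuantizer(nn.Module):
    """Sketch: snap discriminator features (from real and fake samples alike) to the
    nearest entry of a shared learnable codebook."""
    def __init__(self, num_codes=256, dim=512):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim) * 0.01)

    def forward(self, features):                            # features: (batch, dim)
        dists = torch.cdist(features, self.codebook)        # distance to every code, (batch, num_codes)
        codes = self.codebook[dists.argmin(dim=1)]          # nearest codebook entries
        # straight-through estimator: quantized values forward, identity gradient backward
        return features + (codes - features).detach()
```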
arXiv Detail & Related papers (2020-04-05T04:06:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.