CoRe-Fed: Bridging Collaborative and Representation Fairness via Federated Embedding Distillation
- URL: http://arxiv.org/abs/2602.00647v1
- Date: Sat, 31 Jan 2026 10:41:00 GMT
- Title: CoRe-Fed: Bridging Collaborative and Representation Fairness via Federated Embedding Distillation
- Authors: Noorain Mukhtiar, Adnan Mahmood, Quan Z. Sheng
- Abstract summary: Federated Learning (FL) has emerged as a key approach to enable collaborative intelligence through decentralized model training. We propose CoRe-Fed, a unified optimization framework that bridges collaborative and representation fairness. We show that CoRe-Fed improves both fairness and model performance over the state-of-the-art baseline algorithms.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the proliferation of distributed data sources, Federated Learning (FL) has emerged as a key approach to enable collaborative intelligence through decentralized model training while preserving data privacy. However, conventional FL algorithms often suffer from performance disparities across clients caused by heterogeneous data distributions and unequal participation, which leads to unfair outcomes. Specifically, we focus on two core fairness challenges, i.e., representation bias, arising from misaligned client representations, and collaborative bias, stemming from inequitable contribution during aggregation, both of which degrade model performance and generalizability. To mitigate these disparities, we propose CoRe-Fed, a unified optimization framework that bridges collaborative and representation fairness via embedding-level regularization and fairness-aware aggregation. Initially, an alignment-driven mechanism promotes semantic consistency between local and global embeddings to reduce representational divergence. Subsequently, a dynamic reward-penalty-based aggregation strategy adjusts each client's weight based on participation history and embedding alignment to ensure contribution-aware aggregation. Extensive experiments across diverse models and datasets demonstrate that CoRe-Fed improves both fairness and model performance over the state-of-the-art baseline algorithms.
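The abstract describes two mechanisms: an embedding-alignment regularizer that pulls local representations toward the global embedding, and a reward-penalty aggregation rule that reweights clients by participation history and alignment. A minimal sketch of both ideas follows; the function names, the cosine-distance form of the regularizer, and the exponential reward-penalty scoring are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def alignment_loss(local_emb, global_emb, eps=1e-12):
    """Cosine-distance penalty between a client's local embedding and the
    global embedding. Hypothetical form; CoRe-Fed's regularizer may differ."""
    cos = np.dot(local_emb, global_emb) / (
        np.linalg.norm(local_emb) * np.linalg.norm(global_emb) + eps)
    return 1.0 - cos

def aggregation_weights(alignments, participation, reward=1.0, penalty=1.0):
    """Reward-penalty weighting: boost clients whose embeddings align with
    the global model and who participate often; down-weight the rest.
    The exponential form and the coefficients are illustrative assumptions."""
    a = np.asarray(alignments, dtype=float)     # cosine similarity in [-1, 1]
    p = np.asarray(participation, dtype=float)  # participation frequency
    scores = p * np.exp(reward * a - penalty * (1.0 - a))
    return scores / scores.sum()                # normalized aggregation weights
```

Under this sketch, a perfectly aligned embedding incurs zero alignment loss, and a well-aligned, frequently participating client receives a larger share of the aggregation weight than a poorly aligned one.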
Related papers
- Local Performance vs. Out-of-Distribution Generalization: An Empirical Analysis of Personalized Federated Learning in Heterogeneous Data Environments
This study involves a thorough evaluation of Federated Learning approaches, encompassing both their local performance and their generalization capabilities. We propose and incorporate a modified approach of FedAvg, designated as Federated Learning with Individualized Updates (FLIU), extending the algorithm by a straightforward individualization step with an adaptive personalization factor.
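An individualization step with a personalization factor typically interpolates between the client's local model and the aggregated global model. The convex-interpolation form and the name `alpha` below are assumptions for illustration; FLIU's exact adaptive rule is not specified in the summary above.

```python
import numpy as np

def individualized_update(local_params, global_params, alpha):
    """Blend the freshly aggregated global model back into a client's local
    model. alpha=1 keeps the local model, alpha=0 adopts the global one.
    Hypothetical interpolation; FLIU's adaptive rule may differ."""
    local = np.asarray(local_params, dtype=float)
    glob = np.asarray(global_params, dtype=float)
    return alpha * local + (1.0 - alpha) * glob
```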
arXiv Detail & Related papers (2025-10-28T15:15:14Z)
- Resource-Aware Aggregation and Sparsification in Heterogeneous Ensemble Federated Learning
Federated learning (FL) enables distributed training with private client data. Current ensemble-based FL methods fall short in capturing the diversity of model predictions. We propose SHEFL, a global ensemble-based FL framework suited for clients with diverse computational capacities.
arXiv Detail & Related papers (2025-08-12T01:40:46Z)
- Mitigating Group-Level Fairness Disparities in Federated Visual Language Models
This paper introduces FVL-FP, a novel framework that combines FL with fair prompt tuning techniques. We focus on mitigating demographic biases while preserving model performance. Our approach reduces demographic disparity by an average of 45% compared to standard FL approaches.
arXiv Detail & Related papers (2025-05-03T16:09:52Z) - Interaction-Aware Gaussian Weighting for Clustered Federated Learning [58.92159838586751]
Federated Learning (FL) emerged as a decentralized paradigm to train models while preserving privacy. We propose a novel clustered FL method, FedGWC (Federated Gaussian Weighting Clustering), which groups clients based on their data distribution. Our experiments on benchmark datasets show that FedGWC outperforms existing FL algorithms in cluster quality and classification accuracy.
arXiv Detail & Related papers (2025-02-05T16:33:36Z) - FedDUAL: A Dual-Strategy with Adaptive Loss and Dynamic Aggregation for Mitigating Data Heterogeneity in Federated Learning [12.307490659840845]
Federated Learning (FL) combines locally optimized models from various clients into a unified global model. FL encounters significant challenges such as performance degradation, slower convergence, and reduced robustness of the global model. We introduce an innovative dual-strategy approach designed to effectively resolve these issues.
arXiv Detail & Related papers (2024-12-05T18:42:29Z) - Client Contribution Normalization for Enhanced Federated Learning [4.726250115737579]
Mobile devices, including smartphones and laptops, generate decentralized and heterogeneous data.
Federated Learning (FL) offers a promising alternative by enabling collaborative training of a global model across decentralized devices without data sharing.
This paper focuses on data-dependent heterogeneity in FL and proposes a novel approach leveraging mean latent representations extracted from locally trained models.
arXiv Detail & Related papers (2024-11-10T04:03:09Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - Aggregation Weighting of Federated Learning via Generalization Bound
Estimation [65.8630966842025]
Federated Learning (FL) typically aggregates client model parameters using a weighting approach determined by sample proportions.
We replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model.
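The baseline being replaced, sample-proportion weighting, is a plain weighted average of client parameters. A minimal sketch follows; with sample counts as weights this is standard FedAvg aggregation, while the paper above would instead supply weights derived from per-client generalization-bound estimates (a computation not shown here).

```python
import numpy as np

def fedavg_aggregate(client_params, client_weights):
    """Weighted average of client parameter vectors. Passing sample counts
    as weights yields classic sample-proportion (FedAvg) aggregation."""
    stacked = np.stack([np.asarray(p, dtype=float) for p in client_params])
    w = np.asarray(client_weights, dtype=float)
    return np.average(stacked, axis=0, weights=w)  # normalizes w internally
```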
arXiv Detail & Related papers (2023-11-10T08:50:28Z) - Generalizable Heterogeneous Federated Cross-Correlation and Instance
Similarity Learning [60.058083574671834]
This paper presents FCCL+, a novel federated correlation and similarity learning method with non-target distillation.
For the heterogeneity issue, we leverage irrelevant unlabeled public data for communication.
For catastrophic forgetting in the local updating stage, FCCL+ introduces Federated Non-Target Distillation.
arXiv Detail & Related papers (2023-09-28T09:32:27Z) - Entropy-driven Fair and Effective Federated Learning [26.22014904183881]
Federated Learning (FL) enables collaborative model training across distributed devices while preserving data privacy. We propose a novel approach that leverages entropy-based aggregation combined with model and gradient alignments to simultaneously optimize fairness and global model performance.
arXiv Detail & Related papers (2023-01-29T10:02:42Z) - DRFLM: Distributionally Robust Federated Learning with Inter-client
Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to address data heterogeneity and inter-client noise simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
- Collaborative Fairness in Federated Learning
We propose a novel Collaborative Fair Federated Learning (CFFL) framework for deep learning.
CFFL enforces participants to converge to different models, thus achieving fairness without compromising predictive performance.
Experiments on benchmark datasets demonstrate that CFFL achieves high fairness and delivers accuracy comparable to the distributed baseline.
arXiv Detail & Related papers (2020-08-27T14:39:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.