Cali3F: Calibrated Fast Fair Federated Recommendation System
- URL: http://arxiv.org/abs/2205.13121v1
- Date: Thu, 26 May 2022 03:05:26 GMT
- Title: Cali3F: Calibrated Fast Fair Federated Recommendation System
- Authors: Zhitao Zhu, Shijing Si, Jianzong Wang, Jing Xiao
- Abstract summary: We propose a personalized federated recommendation training algorithm to improve the fairness of recommendation performance across devices.
We then adopt a clustering-based aggregation method to accelerate the training process.
Combining the two, Cali3F is a calibrated, fast, and fair federated recommendation framework.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasingly stringent regulations on privacy protection have sparked
interest in federated learning. As a distributed machine learning framework, it
bridges isolated data islands by training a global model over devices while
keeping data localized. Specific to recommendation systems, many federated
recommendation algorithms have been proposed to realize the privacy-preserving
collaborative recommendation. However, several constraints remain largely
unexplored. One big concern is how to ensure fairness between participants of
federated learning, that is, to maintain the uniformity of recommendation
performance across devices. On the other hand, due to data heterogeneity and
limited networks, additional challenges occur in the convergence speed. To
address these problems, in this paper, we first propose a personalized
federated recommendation system training algorithm to improve the
recommendation performance fairness. Then we adopt a clustering-based
aggregation method to accelerate the training process. Combining the two
components, we propose Cali3F, a calibrated, fast, and fair federated
recommendation framework. Cali3F not only addresses the convergence problem by
a within-cluster parameter sharing approach but also significantly boosts
fairness by calibrating local models with the global model. We demonstrate the
performance of Cali3F across standard benchmark datasets and explore its
efficacy in comparison to traditional aggregation approaches.
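The abstract names two mechanisms: within-cluster parameter sharing for faster convergence, and calibrating local models with the global model for fairness. A minimal sketch of both ideas follows; the function names (`cluster_average`, `calibrate`) and the interpolation weight `lam` are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def cluster_average(local_params, cluster_ids):
    """Within-cluster parameter sharing: each client's parameters are
    averaged only with clients assigned to the same cluster."""
    clusters = {}
    for params, cid in zip(local_params, cluster_ids):
        clusters.setdefault(cid, []).append(params)
    cluster_means = {cid: np.mean(ps, axis=0) for cid, ps in clusters.items()}
    return [cluster_means[cid] for cid in cluster_ids]

def calibrate(local, global_, lam=0.5):
    """Calibrate a local model toward the global model by interpolation,
    evening out per-client performance (lam=0 keeps the local model)."""
    return (1.0 - lam) * local + lam * global_
```

In this sketch, clients in the same cluster converge together while `lam` trades personalization against uniformity of performance across devices.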
Related papers
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
The robust regularized federated recommender system (RFRecF) addresses both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z) - A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs [57.35402286842029]
We propose a novel Aligned Federated Primal-Dual (A-FedPD) method, which constructs virtual dual updates to align global and local clients.
We provide a comprehensive analysis of the A-FedPD method's efficiency.
arXiv Detail & Related papers (2024-09-27T17:00:32Z) - Beyond Similarity: Personalized Federated Recommendation with Composite Aggregation [22.359428566363945]
Federated recommendation aims to collect global knowledge by aggregating local models from massive devices.
Current methods mainly leverage aggregation functions invented by the federated vision community to aggregate parameters from similar clients.
We propose a personalized Federated recommendation model with Composite Aggregation (FedCA).
arXiv Detail & Related papers (2024-06-06T10:17:52Z) - Combating Exacerbated Heterogeneity for Robust Models in Federated
Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
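The over-the-air idea above exploits the fact that simultaneously transmitted analog signals superpose on the channel, so the server receives the sum of client updates "for free" plus noise. A loose sketch under assumed names (`over_the_air_aggregate`, `noise_std`), not the paper's actual scheme:

```python
import numpy as np

def over_the_air_aggregate(local_params, noise_std=0.01, rng=None):
    """Analog over-the-air aggregation: clients transmit at once, the
    server receives their superposition (sum) plus channel noise, then
    normalizes by the number of clients to recover an approximate mean."""
    rng = rng or np.random.default_rng(0)
    stacked = np.stack(local_params)
    superposed = stacked.sum(axis=0) + rng.normal(0.0, noise_std, stacked.shape[1:])
    return superposed / len(local_params)
```

The communication cost is one transmission slot regardless of how many clients participate, which is the bottleneck relief the blurb refers to.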
arXiv Detail & Related papers (2023-02-24T08:41:19Z) - FedSkip: Combatting Statistical Heterogeneity with Federated Skip
Aggregation [95.85026305874824]
We introduce a data-driven approach called FedSkip to improve the client optima by periodically skipping federated averaging and scattering local models across devices.
We conduct extensive experiments on a range of datasets to demonstrate that FedSkip achieves much higher accuracy, better aggregation efficiency, and competitive communication efficiency.
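The skip-and-scatter schedule described above can be sketched as follows; the function name `fedskip_round` and the `skip_period` parameter are hypothetical, meant only to illustrate the alternation between averaging and scattering:

```python
import numpy as np

def fedskip_round(local_models, round_idx, skip_period=3, rng=None):
    """Most rounds perform standard federated averaging; every
    `skip_period`-th round skips averaging and instead scatters
    (permutes) the local models across clients."""
    rng = rng or np.random.default_rng(0)
    if round_idx % skip_period == 0:
        perm = rng.permutation(len(local_models))
        return [local_models[i] for i in perm]  # scatter step
    avg = np.mean(local_models, axis=0)
    return [avg for _ in local_models]  # FedAvg step
```

Scattering exposes each local model to other clients' data distributions, which is how the approach counters statistical heterogeneity.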
arXiv Detail & Related papers (2022-12-14T13:57:01Z) - Machine Unlearning of Federated Clusters [36.663892269484506]
Federated clustering (FC) is an unsupervised learning problem that arises in a number of practical applications, including personalized recommender and healthcare systems.
We introduce, for the first time, the problem of machine unlearning for FC.
We propose an efficient unlearning mechanism for a customized secure FC framework.
arXiv Detail & Related papers (2022-10-28T22:21:29Z) - FedSPLIT: One-Shot Federated Recommendation System Based on Non-negative
Joint Matrix Factorization and Knowledge Distillation [7.621960305708476]
We present the first unsupervised one-shot federated CF implementation, named FedSPLIT, based on NMF joint factorization.
FedSPLIT can obtain results similar to the state of the art (and even outperform it in certain situations) with a substantial decrease in the number of communications.
arXiv Detail & Related papers (2022-05-04T23:42:14Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Federated Self-Supervised Contrastive Learning via Ensemble Similarity
Distillation [42.05438626702343]
This paper investigates the feasibility of learning good representation space with unlabeled client data in a federated scenario.
We propose a novel self-supervised contrastive learning framework that supports architecture-agnostic local training and communication-efficient global aggregation.
arXiv Detail & Related papers (2021-09-29T02:13:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.