Beyond Similarity: Personalized Federated Recommendation with Composite Aggregation
- URL: http://arxiv.org/abs/2406.03933v1
- Date: Thu, 6 Jun 2024 10:17:52 GMT
- Title: Beyond Similarity: Personalized Federated Recommendation with Composite Aggregation
- Authors: Honglei Zhang, Haoxuan Li, Jundong Chen, Sen Cui, Kunda Yan, Abudukelimu Wuerkaixi, Xin Zhou, Zhiqi Shen, Yidong Li
- Abstract summary: Federated recommendation aims to collect global knowledge by aggregating local models from massive devices.
Current methods mainly leverage aggregation functions invented by the federated vision community to aggregate parameters from similar clients.
We propose a personalized Federated recommendation model with Composite Aggregation (FedCA).
- Score: 22.359428566363945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated recommendation aims to collect global knowledge by aggregating local models from massive devices, to provide recommendations while ensuring privacy. Current methods mainly leverage aggregation functions invented by the federated vision community to aggregate parameters from similar clients, e.g., clustering aggregation. Despite considerable performance, we argue that it is suboptimal to apply them directly to federated recommendation. This is mainly reflected in the disparate model architectures. Different from structured parameters such as convolutional neural networks in federated vision, federated recommender models usually distinguish themselves by employing a one-to-one item embedding table. Such a discrepancy induces the challenging embedding skew issue: aggregation continually updates the trained embeddings but ignores the non-trained ones, thus failing to predict future items accurately. To this end, we propose a personalized Federated recommendation model with Composite Aggregation (FedCA), which not only aggregates similar clients to enhance trained embeddings, but also aggregates complementary clients to update non-trained embeddings. Besides, we formulate the overall learning process into a unified optimization algorithm to jointly learn the similarity and complementarity. Extensive experiments on several real-world datasets substantiate the effectiveness of our proposed model. The source codes are available at https://github.com/hongleizhang/FedCA.
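To make the composite aggregation idea concrete, below is a minimal numpy sketch: for each client, trained embedding rows are averaged over similar clients, while non-trained rows are borrowed from complementary clients that did train them. The weight matrices `sim` and `comp` and the masked-averaging rule are illustrative assumptions, not FedCA's exact jointly-learned formulation.

```python
import numpy as np

def composite_aggregate(embeddings, trained_masks, sim, comp):
    """Composite-aggregation sketch (illustrative, not FedCA's exact rule).

    embeddings:    (n_clients, n_items, dim) local item-embedding tables
    trained_masks: (n_clients, n_items) True where a client updated an item
    sim, comp:     (n_clients, n_clients) similarity / complementarity weights
    Returns one personalized embedding table per client.
    """
    n_clients, n_items, _ = embeddings.shape
    personalized = np.zeros_like(embeddings)
    for i in range(n_clients):
        for j in range(n_items):
            if trained_masks[i, j]:
                # Trained row: average over similar clients that also trained it.
                w = sim[i] * trained_masks[:, j]
            else:
                # Non-trained row: borrow from complementary clients that did
                # train it, mitigating the embedding-skew issue.
                w = comp[i] * trained_masks[:, j]
            if w.sum() > 0:
                personalized[i, j] = (w[:, None] * embeddings[:, j, :]).sum(0) / w.sum()
            else:
                personalized[i, j] = embeddings[i, j]
    return personalized

# Toy usage: 3 clients, 5 items, 4-dimensional embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 5, 4))
mask = rng.random((3, 5)) > 0.5
sim = np.eye(3) + 0.1           # assumed, hand-fixed similarity weights
comp = 1.0 - np.eye(3)          # assumed, hand-fixed complementarity weights
print(composite_aggregate(emb, mask, sim, comp).shape)  # (3, 5, 4)
```

In FedCA itself the similarity and complementarity are learned jointly within the unified optimization described in the abstract, rather than fixed by hand as above.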
Related papers
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
Federated recommender systems must address both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z)
- FedSDD: Scalable and Diversity-enhanced Distillation for Model Aggregation in Federated Learning [15.39242780506777]
We propose a scalable and diversity-enhanced federated distillation scheme, FedSDD, for federated learning.
FedSDD decouples the training complexity from the number of clients to enhance scalability, and builds its ensemble from a set of aggregated models.
Experiment results show that FedSDD outperforms other FL methods, including FedAvg and FedDF, on the benchmark datasets.
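As a rough illustration of the group-then-distill structure, the sketch below FedAvg-aggregates client groups into a fixed-size teacher ensemble and averages the teachers' predictions on unlabeled transfer data as a distillation target; the grouping and target construction are assumptions, not FedSDD's exact procedure.

```python
import numpy as np

def fedavg(params):
    """Plain parameter averaging over a list of flat weight vectors."""
    return np.mean(params, axis=0)

def group_and_distill_round(client_params, n_groups, logits_fn, transfer_x):
    """Sketch: split clients into a fixed number of groups, aggregate each
    group into one teacher, and use the teacher ensemble's mean prediction
    on unlabeled transfer data as a distillation target. The ensemble size
    stays `n_groups` no matter how many clients participate."""
    groups = np.array_split(np.arange(len(client_params)), n_groups)
    teachers = [fedavg([client_params[i] for i in g]) for g in groups]
    distill_target = np.mean([logits_fn(w, transfer_x) for w in teachers], axis=0)
    return teachers, distill_target  # a global student would be fit to the target

# Toy usage with linear "models": 10 clients, 2 teacher groups.
rng = np.random.default_rng(0)
clients = [rng.normal(size=4) for _ in range(10)]
x = rng.normal(size=(5, 4))
teachers, target = group_and_distill_round(clients, 2, lambda w, x: x @ w, x)
print(len(teachers), target.shape)  # 2 (5,)
```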
arXiv Detail & Related papers (2023-12-28T14:10:00Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) to tackle this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Personalized Federated Learning with Feature Alignment and Classifier Collaboration [13.320381377599245]
Data heterogeneity is one of the most challenging issues in federated learning.
One common approach in deep-neural-network-based tasks is to employ a shared feature representation and learn a customized classifier head for each client.
In this work, we conduct explicit local-global feature alignment by leveraging global semantic knowledge for learning a better representation.
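A minimal sketch of this split follows, with the server aggregating only the shared feature extractor while each client keeps a personal head; the prototype-based alignment penalty is an illustrative stand-in for the paper's alignment mechanism.

```python
import numpy as np

class PersonalizedClient:
    """Shared body (aggregated by the server) plus a personal classifier
    head that never leaves the client; shapes are illustrative."""
    def __init__(self, in_dim, feat_dim, n_classes, rng):
        self.body = rng.normal(size=(in_dim, feat_dim)) * 0.1     # shared
        self.head = rng.normal(size=(feat_dim, n_classes)) * 0.1  # personal

    def features(self, x):
        return np.tanh(x @ self.body)

    def alignment_loss(self, x, y, global_protos):
        # Explicit local-global alignment: pull each sample's local feature
        # toward the global prototype of its class.
        f = self.features(x)
        return float(np.mean(np.sum((f - global_protos[y]) ** 2, axis=1)))

def aggregate_bodies(clients):
    """Server step: average only the shared bodies; heads stay local."""
    return np.mean([c.body for c in clients], axis=0)

# Toy usage: 3 clients, one body-aggregation step, one alignment evaluation.
rng = np.random.default_rng(0)
clients = [PersonalizedClient(8, 4, 3, rng) for _ in range(3)]
shared = aggregate_bodies(clients)
for c in clients:
    c.body = shared.copy()
protos = rng.normal(size=(3, 4))
x, y = rng.normal(size=(5, 8)), rng.integers(0, 3, 5)
print(clients[0].alignment_loss(x, y, protos))
```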
arXiv Detail & Related papers (2023-06-20T19:58:58Z)
- FedDisco: Federated Learning with Discrepancy-Aware Collaboration [41.828780724903744]
We propose a novel aggregation method, Federated Learning with Discrepancy-aware Collaboration (FedDisco).
Our FedDisco outperforms several state-of-the-art methods and can be easily incorporated with many existing methods to further enhance the performance.
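The sketch below shows one plausible form of discrepancy-aware weighting, down-weighting clients whose label distribution drifts from the global one; the exact formula and the constants `a` and `b` are assumptions, not necessarily FedDisco's published rule.

```python
import numpy as np

def discrepancy_aware_weights(sizes, label_dists, global_dist, a=0.5, b=0.1):
    """Hedged sketch: combine dataset size with a penalty on the distance
    between each client's label distribution and the global one."""
    sizes = np.asarray(sizes, dtype=float)
    d = np.linalg.norm(np.asarray(label_dists) - global_dist, axis=1)
    w = np.maximum(sizes / sizes.sum() - a * d + b, 0.0)  # clip negatives
    return w / w.sum()

# Toy usage: client 2 is highly label-skewed and receives a smaller weight.
dists = np.array([[0.5, 0.5], [0.6, 0.4], [0.95, 0.05]])
print(discrepancy_aware_weights([100, 100, 100], dists, np.array([0.5, 0.5])))
```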
arXiv Detail & Related papers (2023-05-30T17:20:51Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or properties of participant data, or even to reconstruct that data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
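The core observation can be reproduced on synthetic data: a plain logistic regression trained on aggregated update vectors recovers a client-specific property. The data generation below is entirely synthetic and only illustrates the linear-attack idea, not the paper's threat model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "aggregated updates": a hidden direction is weakly correlated
# with a binary client property the attacker wants to infer.
rng = np.random.default_rng(0)
n_rounds, dim = 200, 64
property_bit = rng.integers(0, 2, n_rounds)   # e.g., "target client has property X"
signal = rng.normal(size=dim)                 # direction correlated with the property
updates = rng.normal(size=(n_rounds, dim)) + property_bit[:, None] * 0.3 * signal

# A simple linear model suffices to read the property off the aggregates.
attacker = LogisticRegression(max_iter=1000).fit(updates[:150], property_bit[:150])
print("inference accuracy:", attacker.score(updates[150:], property_bit[150:]))
```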
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- Federated Learning Aggregation: New Robust Algorithms with Guarantees [63.96013144017572]
Federated learning has recently been proposed for distributed model training at the edge.
This paper presents a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms which are able to modify their model architecture by differentiating client contributions according to the value of their losses.
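One simple instantiation of loss-differentiated aggregation is sketched below, re-weighting client parameters by a softmax over their reported losses; the softmax form and temperature are assumptions, and the paper derives its own schemes with convergence guarantees.

```python
import numpy as np

def loss_weighted_aggregate(client_params, client_losses, temp=1.0):
    """Sketch: client contributions are re-weighted according to the value
    of their losses (here, a softmax that emphasizes high-loss clients)."""
    losses = np.asarray(client_losses, dtype=float)
    w = np.exp(losses / temp)
    w /= w.sum()
    stacked = np.stack(client_params)   # (n_clients, n_params)
    return w @ stacked                  # weighted average of parameters

# Toy usage: the client with the highest loss dominates the average.
params = [np.zeros(3), np.ones(3), 2 * np.ones(3)]
print(loss_weighted_aggregate(params, [0.1, 0.5, 2.0]))
```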
arXiv Detail & Related papers (2022-05-22T16:37:53Z)
- FedSPLIT: One-Shot Federated Recommendation System Based on Non-negative Joint Matrix Factorization and Knowledge Distillation [7.621960305708476]
We present the first unsupervised one-shot federated CF implementation, named FedSPLIT, based on NMF joint factorization.
FedSPLIT obtains results similar to the state of the art (and even outperforms it in certain situations) with a substantial decrease in the number of communications.
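A very rough sketch of the one-shot idea, assuming each client runs a local NMF and shares only its item factors once; FedSPLIT's joint factorization and knowledge-distillation steps are more elaborate than this naive averaging (which, among other things, ignores factor permutation ambiguity).

```python
import numpy as np
from sklearn.decomposition import NMF

def one_shot_nmf_round(local_ratings, n_factors=8):
    """Each client factorizes its non-negative ratings locally; user factors
    stay private and only item factors are communicated, a single time."""
    item_factors = []
    for R in local_ratings:  # one ratings matrix per client
        model = NMF(n_components=n_factors, init="random",
                    random_state=0, max_iter=500)
        model.fit_transform(R)                  # user factors remain local
        item_factors.append(model.components_)  # (n_factors, n_items) shared
    return np.mean(item_factors, axis=0)        # one-shot server combination

# Toy usage: 3 clients with non-negative 20x15 rating matrices.
rng = np.random.default_rng(0)
local_ratings = [rng.random((20, 15)) for _ in range(3)]
print(one_shot_nmf_round(local_ratings).shape)  # (8, 15)
```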
arXiv Detail & Related papers (2022-05-04T23:42:14Z)
- Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning [22.310090483499035]
Federated learning (FL) enables edge-devices to collaboratively learn a model without disclosing their private data to a central aggregating server.
Most existing FL algorithms require models of identical architecture to be deployed across the clients and server.
We propose a novel ensemble knowledge transfer method named Fed-ET, in which small models are trained on clients and then used to train a larger model at the server.
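The transfer step can be sketched as follows: small client models label unlabeled server-side data, and their weighted consensus becomes the training target for the larger server model. The uniform weighting here is an assumption; Fed-ET's consensus scheme is more refined.

```python
import numpy as np

def ensemble_transfer_targets(client_predict_fns, unlabeled_x, weights=None):
    """Sketch: heterogeneous small client models predict on unlabeled data,
    and their (optionally weighted) average becomes soft labels with which
    a larger server model is trained."""
    preds = np.stack([f(unlabeled_x) for f in client_predict_fns])  # (c, n, k)
    if weights is None:
        weights = np.full(len(client_predict_fns), 1.0 / len(client_predict_fns))
    return np.tensordot(weights, preds, axes=1)  # (n, k) soft labels

# Toy usage: three small linear "models" label 10 unlabeled samples.
rng = np.random.default_rng(0)
x = rng.normal(size=(10, 4))
fns = [lambda x, W=rng.normal(size=(4, 3)): x @ W for _ in range(3)]
print(ensemble_transfer_targets(fns, x).shape)  # (10, 3)
```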
arXiv Detail & Related papers (2022-04-27T05:18:32Z)
- On the Convergence of Clustered Federated Learning [57.934295064030636]
In a federated learning system, the clients, e.g., mobile devices and organization participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that jointly leverages client groups and individual clients within a unified optimization framework.
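A bare-bones sketch of clustered aggregation follows, grouping clients by parameter similarity with k-means and averaging within each group so every client receives its cluster's model; the paper's unified per-client weighting goes beyond this.

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_aggregate(client_params, n_clusters=2):
    """Sketch: cluster clients by their parameter vectors and return, for
    each client, the average model of its cluster."""
    X = np.stack(client_params)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(X)
    cluster_models = {c: X[labels == c].mean(axis=0) for c in range(n_clusters)}
    return [cluster_models[c] for c in labels]  # one model per client

# Toy usage: two latent groups of clients recover two cluster models.
rng = np.random.default_rng(0)
group_a = [rng.normal(0.0, 0.1, size=8) for _ in range(5)]
group_b = [rng.normal(3.0, 0.1, size=8) for _ in range(5)]
print(len(clustered_aggregate(group_a + group_b)))  # 10 personalized models
```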
arXiv Detail & Related papers (2022-02-13T02:39:19Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
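A minimal sketch of this joint prediction, assuming linear models: the shared server-side model provides a base prediction, and a small personalized local model is fit only to the residual the shared model leaves on the client's own data.

```python
import numpy as np

class ResidualFederatedPredictor:
    """Prediction = shared model output + local residual correction.
    Linear model forms are illustrative assumptions."""
    def __init__(self, shared_w, local_w):
        self.shared_w = shared_w   # trained jointly across clients
        self.local_w = local_w     # trained only on this client's data

    def predict(self, x):
        return x @ self.shared_w + x @ self.local_w

    def fit_local_residual(self, x, y):
        # Fit the local model to whatever the shared model fails to explain.
        residual = y - x @ self.shared_w
        self.local_w, *_ = np.linalg.lstsq(x, residual, rcond=None)

# Toy usage: the local residual model closes the gap left by a slightly
# miscalibrated shared model.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 4))
y = x @ np.array([1.0, -1.0, 0.5, 0.0]) + 0.1 * rng.normal(size=50)
model = ResidualFederatedPredictor(shared_w=np.array([0.8, -0.8, 0.4, 0.0]),
                                   local_w=np.zeros(4))
model.fit_local_residual(x, y)
print(np.mean((model.predict(x) - y) ** 2))  # small residual error
```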
arXiv Detail & Related papers (2020-03-28T19:55:24Z)