Federated Recommendation with Additive Personalization
- URL: http://arxiv.org/abs/2301.09109v4
- Date: Thu, 8 Feb 2024 04:41:49 GMT
- Title: Federated Recommendation with Additive Personalization
- Authors: Zhiwei Li, Guodong Long, Tianyi Zhou
- Abstract summary: We propose Federated Recommendation with Additive Personalization (FedRAP)
FedRAP learns a global view of items via FL and a personalized view locally for each user.
It achieves the best performance in the FL setting on multiple benchmarks.
- Score: 46.68537442234882
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building recommendation systems via federated learning (FL) is an
emerging challenge for advancing next-generation Internet services and privacy
protection. Existing approaches train a shared item embedding by FL while keeping
the user embedding private on the client side. However, an item embedding
identical for all clients cannot capture users' individual differences in
perceiving the same item and thus leads to poor personalization. Moreover, a
dense item embedding in FL results in high communication cost and latency. To
address these challenges, we propose Federated Recommendation with Additive
Personalization (FedRAP), which learns a global view of items via FL and a
personalized view locally for each user. FedRAP enforces sparsity of the global
view to save FL's communication cost and encourages difference between the two
views through regularization. We propose an effective curriculum that learns the
local and global views progressively with increasing regularization weights. To
produce recommendations for a user, FedRAP adds the two views together to
obtain a personalized item embedding. FedRAP achieves the best performance in
the FL setting on multiple benchmarks, outperforming recent federated
recommendation methods and several ablation-study baselines.
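The additive-personalization mechanism the abstract describes can be sketched in a few lines. In the sketch below, `C` (sparse global item view), `D` (local item view), the soft-thresholding operator, and the linear ramp schedule are all illustrative names and choices standing in for the paper's actual losses and curriculum; only the additive structure (served embedding = global view + local view) is taken from the abstract.

```python
import numpy as np

def proximal_l1(matrix, threshold):
    # Soft-thresholding: zeros out small entries, sparsifying the global
    # view so that less data needs to be communicated in FL.
    return np.sign(matrix) * np.maximum(np.abs(matrix) - threshold, 0.0)

def curriculum_weight(base_weight, round_idx, total_rounds):
    # Regularization weight that ramps up over training rounds
    # (a simple linear schedule standing in for the paper's curriculum).
    return base_weight * min(1.0, round_idx / total_rounds)

rng = np.random.default_rng(0)
n_items, dim = 5, 4
C = rng.normal(size=(n_items, dim))        # global item view, shared via FL
D = 0.1 * rng.normal(size=(n_items, dim))  # local item view, kept on the client

# Sparsify the global view with a weight that grows as training proceeds,
# then serve the sum of the two views as the personalized item embedding.
C_sparse = proximal_l1(C, threshold=curriculum_weight(0.8, round_idx=50, total_rounds=100))
embedding = C_sparse + D
```

Only `C_sparse` would travel between server and clients; `D` never leaves the device, which is what makes the personalization private and the communication cheap.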
Related papers
- RecGPT Technical Report [57.84251629878726]
We propose RecGPT, a next-generation framework that places user intent at the center of the recommendation pipeline.
RecGPT integrates large language models into key stages of user interest mining, item retrieval, and explanation generation.
Online experiments demonstrate that RecGPT achieves consistent performance gains across stakeholders.
arXiv Detail & Related papers (2025-07-30T17:55:06Z) - FedRec+: Enhancing Privacy and Addressing Heterogeneity in Federated
Recommendation Systems [15.463595798992621]
FedRec+ is an ensemble framework for federated recommendation systems.
It enhances privacy and reduces communication costs for edge users.
Experimental results demonstrate the state-of-the-art performance of FedRec+.
arXiv Detail & Related papers (2023-10-31T05:36:53Z) - Unlocking the Potential of Prompt-Tuning in Bridging Generalized and
Personalized Federated Learning [49.72857433721424]
Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency in various computer vision tasks.
We present a novel algorithm, SGPT, that integrates Generalized FL (GFL) and Personalized FL (PFL) approaches by employing a unique combination of both shared and group-specific prompts.
arXiv Detail & Related papers (2023-10-27T17:22:09Z) - FedJETs: Efficient Just-In-Time Personalization with Federated Mixture
of Experts [48.78037006856208]
FedJETs is a novel solution that uses a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route each input to the most relevant expert(s).
Our approach can improve accuracy by up to 18% in state-of-the-art FL settings, while maintaining competitive zero-shot performance.
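The gating step in an MoE setup like the one this blurb describes amounts to selecting the top-k experts per input and renormalizing their weights. The sketch below is a generic top-k router, not FedJETs' actual gating function; all names are illustrative.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def route_top_k(gate_logits, k=2):
    # Keep only the k most relevant experts per input and renormalize
    # their gate weights so they sum to one.
    top = np.argsort(gate_logits, axis=-1)[:, -k:]  # indices of top-k experts
    weights = np.take_along_axis(softmax(gate_logits), top, axis=-1)
    return top, weights / weights.sum(axis=-1, keepdims=True)

gate_logits = np.array([[0.1, 2.0, -1.0, 0.5]])  # one input, four experts
experts, weights = route_top_k(gate_logits, k=2)
```

Routing to a sparse subset of experts is what keeps per-input compute low while still letting clients benefit from specialists trained on other classes.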
arXiv Detail & Related papers (2023-06-14T15:47:52Z) - User-Centric Federated Learning: Trading off Wireless Resources for
Personalization [18.38078866145659]
In Federated Learning (FL) systems, statistical heterogeneity increases the algorithm's convergence time and reduces generalization performance.
To tackle the above problems without violating the privacy constraints that FL imposes, personalized FL methods have to couple statistically similar clients without directly accessing their data.
In this work, we design user-centric aggregation rules that are based on readily available gradient information and are capable of producing personalized models for each FL client.
Our algorithm outperforms popular personalized FL baselines in terms of average accuracy, worst node performance, and training communication overhead.
arXiv Detail & Related papers (2023-04-25T15:45:37Z) - Federated Learning of Shareable Bases for Personalization-Friendly Image
Classification [54.72892987840267]
FedBasis learns a small set of shareable "basis" models, which can be linearly combined to form personalized models for clients.
Specifically for a new client, only a small set of combination coefficients, not the model weights, needs to be learned.
To demonstrate the effectiveness and applicability of FedBasis, we also present a more practical PFL testbed for image classification.
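The FedBasis idea of personalizing via combination coefficients rather than full model weights reduces to a weighted sum over shared bases. The sketch below treats each basis model as a flat parameter vector; the shapes and names are illustrative, not from the paper.

```python
import numpy as np

def combine_bases(bases, coeffs):
    # Personalized parameters as a linear combination of shared basis models.
    # bases: (num_bases, num_params); coeffs: (num_bases,)
    return coeffs @ bases

rng = np.random.default_rng(1)
bases = rng.normal(size=(3, 10))    # a few shared basis models from the server
coeffs = np.array([0.5, 0.3, 0.2])  # only these coefficients are learned per client
personal_params = combine_bases(bases, coeffs)
```

A new client only fits the three coefficients instead of ten (or, in practice, millions of) weights, which is why onboarding is cheap in this scheme.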
arXiv Detail & Related papers (2023-04-16T20:19:18Z) - FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z) - PGFed: Personalize Each Client's Global Objective for Federated Learning [7.810284483002312]
We propose a novel personalized FL framework that enables each client to personalize its own global objective.
To avoid massive (O(N^2)) communication overhead and potential privacy leakage, each client's risk is estimated through a first-order approximation for other clients' adaptive risk aggregation.
Our experiments on four datasets under different federated settings show consistent improvements of PGFed over previous state-of-the-art methods.
arXiv Detail & Related papers (2022-12-02T21:16:39Z) - Group Personalized Federated Learning [15.09115201646396]
Federated learning (FL) can help promote data privacy by training a shared model in a decentralized manner on the physical devices of clients.
In this paper, we present the group personalization approach for applications of FL.
arXiv Detail & Related papers (2022-10-04T19:20:19Z) - Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM [62.62684911017472]
Federated learning (FL) enables devices to jointly train shared models while keeping the training data local for privacy purposes.
We introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account.
VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-20T23:14:33Z) - Privacy Assessment of Federated Learning using Private Personalized
Layers [0.9023847175654603]
Federated Learning (FL) is a collaborative scheme to train a learning model across multiple participants without sharing data.
We quantify the utility and privacy trade-off of a FL scheme using private personalized layers.
arXiv Detail & Related papers (2021-06-15T11:40:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.