Personalized Federated Collaborative Filtering: A Variational AutoEncoder Approach
- URL: http://arxiv.org/abs/2408.08931v1
- Date: Fri, 16 Aug 2024 05:49:14 GMT
- Title: Personalized Federated Collaborative Filtering: A Variational AutoEncoder Approach
- Authors: Zhiwei Li, Guodong Long, Tianyi Zhou, Jing Jiang, Chengqi Zhang
- Abstract summary: Federated Collaborative Filtering (FedCF) is an emerging field focused on developing recommendation frameworks that preserve privacy.
This paper proposes a novel personalized FedCF method that preserves users' personalized information in both a latent variable and a neural model.
To effectively train the proposed framework, we model the problem as a specialized Variational AutoEncoder (VAE) task by integrating user interaction vector reconstruction with missing value prediction.
- Score: 49.63614966954833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Collaborative Filtering (FedCF) is an emerging field focused on developing recommendation frameworks that preserve privacy in a federated setting. Existing FedCF methods typically combine distributed Collaborative Filtering (CF) algorithms with privacy-preserving mechanisms and preserve personalized information in a user embedding vector. However, the user embedding is usually insufficient to preserve the rich, fine-grained personalization across heterogeneous clients. This paper proposes a novel personalized FedCF method that preserves users' personalized information in both a latent variable and a neural model. Specifically, we decompose the modeling of user knowledge into two encoders, designed to capture shared knowledge and personalized knowledge separately. A personalized gating network is then applied to balance personalization and generalization between the global and local encoders. Moreover, to effectively train the proposed framework, we model the CF problem as a specialized Variational AutoEncoder (VAE) task that integrates user interaction vector reconstruction with missing value prediction. The decoder is trained to reconstruct the implicit feedback from items the user has interacted with, while also predicting items the user might be interested in but has not yet interacted with. Experimental results on benchmark datasets demonstrate that the proposed method outperforms baseline methods.
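Read literally, the abstract describes two encoders whose outputs are mixed by a personalized gate before a shared decoder reconstructs the interaction vector. The PyTorch sketch below is only one way to instantiate that description; the module names, layer sizes, gating form, and Mult-VAE-style loss are assumptions, not the authors' implementation.

```python
# Hedged sketch of a dual-encoder VAE with a personalized gate for collaborative
# filtering. Names (DualEncoderVAE, gate, latent_dim) are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoderVAE(nn.Module):
    def __init__(self, num_items: int, latent_dim: int = 64):
        super().__init__()
        # Shared encoder: parameters would be aggregated by the server.
        self.global_encoder = nn.Linear(num_items, 2 * latent_dim)
        # Personalized encoder: parameters kept on the client.
        self.local_encoder = nn.Linear(num_items, 2 * latent_dim)
        # Personalized gating network balancing the two encoders.
        self.gate = nn.Sequential(nn.Linear(num_items, latent_dim), nn.Sigmoid())
        self.decoder = nn.Linear(latent_dim, num_items)

    def encode(self, x):
        g_mu, g_logvar = self.global_encoder(x).chunk(2, dim=-1)
        l_mu, l_logvar = self.local_encoder(x).chunk(2, dim=-1)
        w = self.gate(x)  # per-dimension mixing weight in (0, 1)
        mu = w * l_mu + (1 - w) * g_mu
        logvar = w * l_logvar + (1 - w) * g_logvar
        return mu, logvar

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar

def vae_loss(logits, x, mu, logvar, beta: float = 0.2):
    # Multinomial reconstruction of the implicit-feedback vector plus a KL term,
    # in the style of Mult-VAE collaborative filtering (an assumption here).
    recon = -(F.log_softmax(logits, dim=-1) * x).sum(dim=-1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    return recon + beta * kl

# Toy usage: a batch of binary interaction vectors.
model = DualEncoderVAE(num_items=1000)
x = torch.zeros(4, 1000); x[:, :10] = 1.0
logits, mu, logvar = model(x)
print(vae_loss(logits, x, mu, logvar).item())
```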
Related papers
- Co-clustering for Federated Recommender System [33.70723179405055]
Federated Recommender System (FRS) offers a solution that strikes a balance between providing high-quality recommendations and preserving user privacy.
The presence of statistical heterogeneity in FRS, commonly observed due to personalized decision-making patterns, can pose challenges.
We propose CoFedRec, a novel Co-clustering Federated Recommendation mechanism.
arXiv Detail & Related papers (2024-11-03T21:32:07Z)
- On-Device Collaborative Language Modeling via a Mixture of Generalists and Specialists [33.68104398807581]
We propose CoMiGS, a novel Collaborative learning approach with a Mixture of Generalists and Specialists.
Our approach distinguishes generalists and specialists by aggregating certain experts across end users while keeping others localized to specialize in user-specific datasets, as sketched below.
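A minimal sketch of that split, assuming FedAvg-style averaging and a naming convention in which generalist parameters share a "generalist" prefix (both assumptions, not details from the paper):

```python
# Hedged sketch: average only "generalist" expert parameters across clients,
# leave "specialist" experts untouched on each client.
import torch

def aggregate_generalists(client_states, generalist_prefix="generalist"):
    """FedAvg-style averaging restricted to generalist expert parameters."""
    names = [n for n in client_states[0] if n.startswith(generalist_prefix)]
    return {n: torch.stack([s[n] for s in client_states]).mean(dim=0) for n in names}

def apply_round(local_state, generalist_avg):
    """Clients overwrite generalist experts with the average; specialists stay local."""
    merged = dict(local_state)
    merged.update(generalist_avg)
    return merged

# Toy usage with three clients.
clients = [
    {"generalist.w": torch.ones(2, 2) * i, "specialist.w": torch.full((2, 2), 10.0 + i)}
    for i in range(3)
]
avg = aggregate_generalists(clients)
print(apply_round(clients[0], avg)["generalist.w"])   # averaged across clients
print(apply_round(clients[0], avg)["specialist.w"])   # untouched, client-specific
```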
arXiv Detail & Related papers (2024-09-20T22:34:37Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generative technology can produce high-quality fake videos that are nearly indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing training data.
The paper proposes a novel federated face forgery detection method with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Personalized Federated Learning via Sequential Layer Expansion in Representation Learning [0.0]
Federated learning ensures the privacy of clients by conducting distributed training on individual client devices and sharing only the model weights with a central server.
We propose a new representation learning-based approach that decouples the deep learning model into more finely divided parts and applies suitable scheduling methods to train them.
arXiv Detail & Related papers (2024-04-27T06:37:19Z)
- FedJETs: Efficient Just-In-Time Personalization with Federated Mixture of Experts [48.78037006856208]
FedJETs is a novel solution that uses a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route each input to the most relevant expert(s), as sketched below.
Our approach can improve accuracy by up to 18% in state-of-the-art FL settings, while maintaining competitive zero-shot performance.
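A compact top-k routing sketch conveying the gating idea; the linear experts, linear gate, and k=2 are illustrative assumptions, not the FedJETs design.

```python
# Hedged sketch of top-k mixture-of-experts routing: a gate scores experts and
# each input is dispatched to its k highest-scoring experts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4, k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)
        self.k = k

    def forward(self, x):
        scores = self.gate(x)                          # (batch, num_experts)
        topk_vals, topk_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)         # renormalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            idx = topk_idx[:, slot]
            w = weights[:, slot].unsqueeze(-1)
            # Route each sample to the expert it selected in this slot.
            for e, expert in enumerate(self.experts):
                mask = (idx == e).unsqueeze(-1)
                out = out + mask * w * expert(x)
        return out

# Toy usage.
moe = TopKMoE(dim=16)
print(moe(torch.randn(8, 16)).shape)   # torch.Size([8, 16])
```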
arXiv Detail & Related papers (2023-06-14T15:47:52Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck; a toy illustration follows below.
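A toy numpy illustration of the over-the-air idea: updates transmitted simultaneously superimpose into their sum at the server, so aggregation costs a single transmission. The channel model (unit gains, small additive Gaussian noise) is an assumption for illustration only.

```python
# Hedged sketch of analog over-the-air aggregation of client model updates.
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim, noise_std = 10, 5, 0.01

client_updates = [rng.normal(size=dim) for _ in range(num_clients)]

# Ideal digital aggregation: each update sent and summed separately.
digital_avg = np.mean(client_updates, axis=0)

# Over-the-air: signals add in the channel; the server sees one noisy superposition.
received = np.sum(client_updates, axis=0) + rng.normal(scale=noise_std, size=dim)
ota_avg = received / num_clients

print(np.allclose(digital_avg, ota_avg, atol=0.05))   # close up to channel noise
```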
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Dual Personalization on Federated Recommendation [50.4115315992418]
Federated recommendation is a new Internet service architecture that aims to provide privacy-preserving recommendation services in federated settings.
This paper proposes a novel Personalized Federated Recommendation (PFedRec) framework to learn many user-specific lightweight models.
We also propose a new dual personalization mechanism to effectively learn fine-grained personalization on both users and items; a rough sketch follows below.
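A minimal sketch of one way to read this dual personalization: item embeddings are the part shared (and then locally refined) across clients, while a per-client score network stays fully user-specific. Module names, sizes, and the plain averaging are assumptions, not the PFedRec design.

```python
# Hedged sketch: shared-then-personalized item embeddings plus a local score model.
import torch
import torch.nn as nn

class ClientRecModel(nn.Module):
    def __init__(self, num_items: int, dim: int = 32):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)   # aggregated, then personalized
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))  # user-specific

    def forward(self, item_ids):
        return self.score(self.item_emb(item_ids)).squeeze(-1)

def server_average_item_embeddings(clients):
    """Aggregate only item embeddings; score networks never leave the client."""
    with torch.no_grad():
        avg = torch.stack([c.item_emb.weight for c in clients]).mean(dim=0)
        for c in clients:
            c.item_emb.weight.copy_(avg)

# Toy usage.
clients = [ClientRecModel(num_items=100) for _ in range(3)]
server_average_item_embeddings(clients)          # share item knowledge
print(clients[0](torch.tensor([1, 5, 7])))       # local, user-specific scoring
```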
arXiv Detail & Related papers (2023-01-16T05:26:07Z)
- FedSPLIT: One-Shot Federated Recommendation System Based on Non-negative Joint Matrix Factorization and Knowledge Distillation [7.621960305708476]
We present the first unsupervised one-shot federated CF implementation, named FedSPLIT, based on NMF joint factorization.
FedSPLIT obtains results similar to the state of the art (and even outperforms it in certain situations) with a substantial decrease in the number of communications; see the illustrative sketch below.
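The one-shot flavor can be roughed out as follows: each client factorizes its local ratings with NMF once and ships only its item factors, which the server then combines. The naive averaging used here ignores factor alignment, which the actual method addresses via joint factorization and knowledge distillation, so treat this purely as an illustration.

```python
# Hedged sketch of one-shot federated collaborative filtering with local NMF.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
num_items, rank = 50, 8
clients = [np.abs(rng.normal(size=(20, num_items))) for _ in range(3)]  # local ratings

item_factors = []
for X in clients:
    model = NMF(n_components=rank, init="random", random_state=0, max_iter=500)
    model.fit(X)                             # one local factorization, no iterative rounds
    item_factors.append(model.components_)   # (rank, num_items) item factor matrix

# One-shot "aggregation" of item knowledge on the server (naive averaging here).
global_item_factors = np.mean(item_factors, axis=0)
print(global_item_factors.shape)             # (8, 50)
```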
arXiv Detail & Related papers (2022-05-04T23:42:14Z)
- PFA: Privacy-preserving Federated Adaptation for Effective Model Personalization [6.66389628571674]
Federated learning (FL) has become a prevalent distributed machine learning paradigm with improved privacy.
This paper introduces a new concept called federated adaptation, which aims to adapt the trained model in a federated manner to achieve better personalization results.
We propose PFA, a framework to accomplish Privacy-preserving Federated Adaptation.
arXiv Detail & Related papers (2021-03-02T08:07:34Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient method to estimate this metric, as illustrated below.
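As a rough illustration of the idea (not the paper's estimator), a leave-one-out comparison of FedAvg aggregates gives a parameter-space notion of how much one client moves the global model.

```python
# Hedged sketch: leave-one-out client influence on the FedAvg aggregate.
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim = 5, 10
updates = [rng.normal(size=dim) for _ in range(num_clients)]

full_avg = np.mean(updates, axis=0)

for k in range(num_clients):
    loo_avg = np.mean([u for i, u in enumerate(updates) if i != k], axis=0)
    influence_k = np.linalg.norm(full_avg - loo_avg)   # parameter-space influence
    print(f"client {k}: influence = {influence_k:.4f}")
```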
arXiv Detail & Related papers (2020-12-20T14:34:36Z)