FedPCL-CDR: A Federated Prototype-based Contrastive Learning Framework for Privacy-Preserving Cross-domain Recommendation
- URL: http://arxiv.org/abs/2409.03294v2
- Date: Thu, 15 May 2025 14:06:05 GMT
- Title: FedPCL-CDR: A Federated Prototype-based Contrastive Learning Framework for Privacy-Preserving Cross-domain Recommendation
- Authors: Li Wang, Qiang Wu, Min Xu
- Abstract summary: Cross-domain recommendation (CDR) aims to improve recommendation accuracy in sparse domains by transferring knowledge from data-rich domains. Existing CDR approaches often assume that user-item interaction data across domains is publicly available, neglecting user privacy concerns. We propose a Federated Prototype-based Contrastive Learning framework for Privacy Preserving CDR, called FedPCL-CDR.
- Score: 12.90905748020945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-domain recommendation (CDR) aims to improve recommendation accuracy in sparse domains by transferring knowledge from data-rich domains. However, existing CDR approaches often assume that user-item interaction data across domains is publicly available, neglecting user privacy concerns. Additionally, they experience performance degradation with sparse overlapping users due to their reliance on a large number of fully shared users for knowledge transfer. To address these challenges, we propose a Federated Prototype-based Contrastive Learning (CL) framework for Privacy Preserving CDR, called FedPCL-CDR. This approach utilizes non-overlapping user information and differential prototypes to improve model performance within a federated learning framework. FedPCL-CDR comprises two key modules: local domain (client) learning and global server aggregation. In the local domain, FedPCL-CDR first clusters all user data and applies local differential privacy (LDP) to learn differential prototypes, effectively exploiting non-overlapping user information while protecting user privacy. It then conducts knowledge transfer by employing both local and global prototypes returned from the server in a CL manner. Meanwhile, the global server aggregates differential prototypes sent from local domains to learn both local and global prototypes. Extensive experiments on four CDR tasks across Amazon and Douban datasets demonstrate that FedPCL-CDR surpasses SOTA baselines. We release our code at https://github.com/Lili1013/FedPCL_CDR.
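As a concrete illustration of the client-side module described in the abstract, the following is a minimal, hypothetical sketch (not the authors' released code): it clusters local user embeddings into prototypes, perturbs the centroids with Laplace noise as a simple LDP mechanism, and applies a prototype-based InfoNCE-style contrastive loss against local or server-returned global prototypes. The function names, the sensitivity and noise calibration, and all hyperparameters are assumptions made for this sketch.

```python
# Hypothetical sketch of the client-side steps described in the abstract:
# cluster local user embeddings into prototypes, add Laplace noise for local
# differential privacy before sharing, and train with a prototype-based
# contrastive loss. Not the authors' implementation.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def differential_prototypes(user_emb: np.ndarray, k: int = 10, eps: float = 1.0,
                            clip: float = 1.0) -> np.ndarray:
    """Cluster user embeddings and perturb the centroids with Laplace noise.

    Embeddings are clipped to an L_inf bound `clip`, so the per-coordinate
    sensitivity of a centroid is taken as roughly 2*clip / cluster_size
    (a simplifying assumption for this sketch, not the paper's calibration).
    """
    emb = np.clip(user_emb, -clip, clip)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(emb)
    centroids = km.cluster_centers_
    counts = np.maximum(np.bincount(km.labels_, minlength=k), 1)
    scale = (2.0 * clip / counts)[:, None] / eps      # Laplace scale per cluster
    return centroids + np.random.laplace(scale=scale, size=centroids.shape)

def prototype_contrastive_loss(z: torch.Tensor, prototypes: torch.Tensor,
                               assignments: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE-style loss: pull each user embedding towards its assigned
    (local or global) prototype and push it away from the other prototypes."""
    z = F.normalize(z, dim=1)              # (B, d) user embeddings
    p = F.normalize(prototypes, dim=1)     # (K, d) local or global prototypes
    logits = z @ p.t() / tau               # (B, K) similarity scores
    return F.cross_entropy(logits, assignments)
```

Per the abstract, each client would upload only the noisy prototypes (never raw interactions), and the contrastive loss would be applied with both the local prototypes and the global prototypes returned by the server.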
Related papers
- Leave No One Behind: Fairness-Aware Cross-Domain Recommender Systems for Non-Overlapping Users [13.420661387194148]
Cross-domain recommendation (CDR) methods leverage overlapping users to transfer knowledge from a source domain to a target domain. We propose a novel solution that generates virtual source-domain users for non-overlapping target-domain users. Our method effectively mitigates the CDR non-overlapping user bias, without loss of overall accuracy.
arXiv Detail & Related papers (2025-07-23T17:59:08Z) - Federated Cross-Domain Click-Through Rate Prediction With Large Language Model Augmentation [4.978132660177235]
We present Federated Cross-Domain CTR Prediction with Large Language Model Augmentation (FedCCTR-LM). Our approach integrates three core innovations. First, the Privacy-Preserving Augmentation Network (PrivNet) employs large language models to enrich user and item representations. Second, the Independent Domain-Specific Transformer with Contrastive Learning (IDST-CL) module disentangles domain-specific and shared user preferences. Third, the Adaptive Local Differential Privacy (AdaLDP) mechanism dynamically calibrates noise injection to achieve an optimal balance between rigorous privacy guarantees and predictive accuracy.
arXiv Detail & Related papers (2025-03-21T06:22:42Z) - Federated Domain Generalization via Prompt Learning and Aggregation [20.933631678895765]
Federated domain generalization (FedDG) aims to improve the global model generalization in unseen domains.
A common strategy in existing FedDG studies involves sharing domain-specific knowledge among clients.
We introduce prompt learning to adapt pre-trained vision-language models (VLMs) in the FedDG scenario.
arXiv Detail & Related papers (2024-11-15T09:26:00Z) - Federated User Preference Modeling for Privacy-Preserving Cross-Domain Recommendation [18.0700584280752]
Cross-domain recommendation (CDR) aims to address the data-sparsity problem by transferring knowledge across domains.
Recent privacy-preserving CDR models have been proposed to solve this problem.
We propose a novel Federated User Preference Modeling (FUPM) framework.
arXiv Detail & Related papers (2024-08-26T23:29:03Z) - Exploring User Retrieval Integration towards Large Language Models for Cross-Domain Sequential Recommendation [66.72195610471624]
Cross-Domain Sequential Recommendation aims to mine and transfer users' sequential preferences across different domains.
We propose a novel framework named URLLM, which aims to improve the CDSR performance by exploring the User Retrieval approach.
arXiv Detail & Related papers (2024-06-05T09:19:54Z) - A Privacy-Preserving Framework with Multi-Modal Data for Cross-Domain Recommendation [13.33679167416221]
Cross-domain recommendation (CDR) aims to enhance recommendation accuracy in a target domain with sparse data.
We propose a Privacy-Preserving Framework with Multi-Modal Data for Cross-Domain Recommendation, called P2M2-CDR.
arXiv Detail & Related papers (2024-03-06T10:40:08Z) - FedHCDR: Federated Cross-Domain Recommendation with Hypergraph Signal Decoupling [15.159012729198619]
We propose FedHCDR, a novel Cross-Domain Recommendation framework with Hypergraph signal decoupling.
In this study, we introduce an approach called hypergraph signal decoupling (HSD) to decouple the user features into domain-exclusive and domain-shared features.
Extensive experiments conducted on three real-world scenarios demonstrate that FedHCDR outperforms existing baselines significantly.
arXiv Detail & Related papers (2024-03-05T03:40:39Z) - Mixed Attention Network for Cross-domain Sequential Recommendation [63.983590953727386]
We propose a Mixed Attention Network (MAN) with local and global attention modules to extract the domain-specific and cross-domain information.
Experimental results on two real-world datasets demonstrate the superiority of our proposed model.
arXiv Detail & Related papers (2023-11-14T16:07:16Z) - Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, a Class prototype Similarity Distillation method in a federated framework, to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z) - Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z) - A cross-domain recommender system using deep coupled autoencoders [77.86290991564829]
Two novel coupled autoencoder-based deep learning methods are proposed for cross-domain recommendation.
The first method aims to simultaneously learn a pair of autoencoders in order to reveal the intrinsic representations of the items in the source and target domains.
The second method is derived based on a new joint regularized optimization problem, which employs two autoencoders to generate in a deep and non-linear manner the user and item-latent factors.
arXiv Detail & Related papers (2021-12-08T15:14:26Z) - RecGURU: Adversarial Learning of Generalized User Representations for Cross-Domain Recommendation [19.61356871656398]
Cross-domain recommendation can help alleviate the data sparsity issue in traditional sequential recommender systems.
We propose the RecGURU algorithm framework to generate a Generalized User Representation (GUR) incorporating user information across domains in sequential recommendation.
arXiv Detail & Related papers (2021-11-19T08:41:06Z) - MD-CSDNetwork: Multi-Domain Cross Stitched Network for Deepfake Detection [80.83725644958633]
Current deepfake generation methods leave discriminative artifacts in the frequency spectrum of fake images and videos.
We present a novel approach, termed as MD-CSDNetwork, for combining the features in the spatial and frequency domains to mine a shared discriminative representation.
arXiv Detail & Related papers (2021-09-15T14:11:53Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - Decentralised Learning from Independent Multi-Domain Labels for Person Re-Identification [69.29602103582782]
Deep learning has been successful for many computer vision tasks due to the availability of shared and centralised large-scale training data.
However, increasing awareness of privacy concerns poses new challenges to deep learning, especially for person re-identification (Re-ID).
We propose a novel paradigm called Federated Person Re-Identification (FedReID) to construct a generalisable global model (a central server) by simultaneously learning with multiple privacy-preserved local models (local clients).
This client-server collaborative learning process is iteratively performed under privacy control, enabling FedReID to realise decentralised learning without sharing distributed data nor collecting any centralised data.
arXiv Detail & Related papers (2020-06-07T13:32:33Z)
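Several of the federated entries above (FedCCTR-LM, FedHCDR, FedCSD, FedReID) and FedPCL-CDR itself share the same client-server round: clients train locally and upload only parameters or prototypes, and the server aggregates the uploads and broadcasts the result. The sketch below is a generic, hypothetical FedAvg-style aggregation step under those assumptions, not the protocol of any specific paper listed here; function and variable names are illustrative.

```python
# Hypothetical sketch of the generic federated aggregation round shared by the
# papers above: clients upload model states (or prototypes), never raw data;
# the server computes a weighted average and broadcasts it back.
from typing import Dict, List
import torch

@torch.no_grad()
def fedavg(client_states: List[Dict[str, torch.Tensor]],
           client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """Weighted average of client parameter dicts (assumes float tensors)."""
    total = float(sum(client_sizes))
    avg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in client_states[0].items()}
    for state, n in zip(client_states, client_sizes):
        for k, v in state.items():
            avg[k] += (n / total) * v.float()
    return avg

# One communication round (illustrative); the same pattern applies when the
# uploads are differential prototypes instead of model weights:
# global_state = fedavg([c.local_update(global_state) for c in clients],
#                       [c.num_samples for c in clients])
```

For FedPCL-CDR specifically, the abstract states that the server aggregates the clients' differential prototypes to learn local and global prototypes, rather than averaging full model weights.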
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.