TEE-based decentralized recommender systems: The raw data sharing
redemption
- URL: http://arxiv.org/abs/2202.11655v1
- Date: Wed, 23 Feb 2022 17:55:39 GMT
- Title: TEE-based decentralized recommender systems: The raw data sharing
redemption
- Authors: Akash Dhasade, Nevena Dresevic, Anne-Marie Kermarrec, Rafael Pires
- Abstract summary: We present REX, the first enclave-based decentralized recommender.
REX exploits trusted execution environments to improve convergence while preserving privacy.
We analyze the impact of raw data sharing in both deep neural network (DNN) and matrix factorization (MF) recommenders.
- Score: 3.0204520109309843
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommenders are central to many applications today. The most effective
recommendation schemes, such as those based on collaborative filtering (CF),
exploit similarities between user profiles to make recommendations, but
potentially expose private data. Federated learning and decentralized learning
systems address this by letting the data stay on users' machines to preserve
privacy: each user performs the training on local data and only the model
parameters are shared. However, sharing the model parameters across the network
may still yield privacy breaches. In this paper, we present REX, the first
enclave-based decentralized CF recommender. REX exploits trusted execution
environments (TEEs), such as Intel Software Guard Extensions (SGX), which provide
shielded environments within the processor to improve convergence while
preserving privacy. First, REX enables raw data sharing, which speeds up
convergence and reduces the network load. Second, REX fully
preserves privacy. We analyze the impact of raw data sharing in both deep
neural network (DNN) and matrix factorization (MF) recommenders and showcase
the benefits of trusted environments in a full-fledged implementation of REX.
Our experimental results demonstrate that through raw data sharing, REX
significantly decreases the training time by 18.3x and the network load by 2
orders of magnitude over standard decentralized approaches that share only
parameters, while fully protecting privacy by leveraging trustworthy hardware
enclaves with very little overhead.
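The convergence benefit claimed above can be illustrated with a toy sketch. This is not REX's implementation: all names, sizes, and learning rates below are hypothetical, and the "raw sharing" case simply trains one model on pooled ratings, standing in for what enclave-protected raw data exchange would permit. Two nodes factorize a rating matrix, once by averaging item factors after independent local training (parameter sharing) and once on the pooled raw data.

```python
import numpy as np

# Toy sketch only -- NOT the paper's implementation. It illustrates why
# training on pooled raw ratings (as TEE-protected sharing would allow)
# can beat one round of parameter averaging: independently trained
# factor matrices are misaligned, so averaging them degrades accuracy.

rng = np.random.default_rng(0)
n_users, n_items, k = 8, 12, 3

# Ground-truth low-rank rating matrix; users are split across two nodes.
U_true = rng.normal(size=(n_users, k))
V_true = rng.normal(size=(n_items, k))
R = U_true @ V_true.T
node_users = [range(0, 4), range(4, 8)]

def local_sgd(users, U, V, lr=0.02, steps=800):
    """Plain SGD on squared rating error over one set of users."""
    for _ in range(steps):
        for u in users:
            for i in range(n_items):
                err = R[u, i] - U[u] @ V[i]
                u_old = U[u].copy()          # update both factors from
                U[u] += lr * err * V[i]      # the pre-step values
                V[i] += lr * err * u_old
    return U, V

def mse(U, V):
    return float(np.mean((R - U @ V.T) ** 2))

# (a) Parameter sharing: each node trains its own user rows against a
# private copy of the item factors, then the copies are averaged.
U_param = rng.normal(scale=0.1, size=(n_users, k))
V_copies = []
for users in node_users:
    _, V_n = local_sgd(users, U_param,
                       rng.normal(scale=0.1, size=(n_items, k)))
    V_copies.append(V_n)
mse_param = mse(U_param, np.mean(V_copies, axis=0))

# (b) Raw data sharing: every node sees all ratings, so a single model
# is trained on the pooled data.
U_raw = rng.normal(scale=0.1, size=(n_users, k))
V_raw = rng.normal(scale=0.1, size=(n_items, k))
local_sgd(range(n_users), U_raw, V_raw)
mse_raw = mse(U_raw, V_raw)

print(f"parameter-sharing MSE: {mse_param:.3f}")
print(f"raw-sharing MSE:       {mse_raw:.3f}")
```

With a single averaging round, the pooled-data model fits the ratings far better, which is the intuition behind REX's reported speedups; the real system additionally has to keep the shared raw data inside attested enclaves.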
Related papers
- PWC-MoE: Privacy-Aware Wireless Collaborative Mixture of Experts [59.5243730853157]
Large language models (LLMs) hosted on cloud servers alleviate the computational and storage burdens on local devices but raise privacy concerns. Small language models (SLMs) running locally enhance privacy but suffer from limited performance on complex tasks. We propose a privacy-aware wireless collaborative mixture of experts (PWC-MoE) framework to balance computational cost, performance, and privacy protection under bandwidth constraints.
arXiv Detail & Related papers (2025-05-13T16:27:07Z)
- Privacy-Preserving Federated Embedding Learning for Localized Retrieval-Augmented Generation
We propose a novel framework called Federated Retrieval-Augmented Generation (FedE4RAG)
FedE4RAG facilitates collaborative training of client-side RAG retrieval models.
We apply homomorphic encryption within federated learning to safeguard model parameters.
arXiv Detail & Related papers (2025-04-27T04:26:02Z)
- FedRand: Enhancing Privacy in Federated Learning with Randomized LoRA Subparameter Updates [58.18162789618869]
Federated Learning (FL) is a widely used framework for training models in a decentralized manner.
We propose the FedRand framework, which avoids disclosing the full set of client parameters.
We empirically validate that FedRand improves robustness against MIAs compared to relevant baselines.
arXiv Detail & Related papers (2025-03-10T11:55:50Z)
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
Federated recommender systems must address both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z)
- PDC-FRS: Privacy-preserving Data Contribution for Federated Recommender System [15.589541738576528]
Federated recommender systems (FedRecs) have emerged as a popular research direction for protecting users' privacy in on-device recommendations.
In FedRecs, users keep their data locally and only contribute their local collaborative information by uploading model parameters to a central server.
We propose a novel federated recommendation framework, PDC-FRS. Specifically, we design a privacy-preserving data contribution mechanism that allows users to share their data with a differential privacy guarantee.
arXiv Detail & Related papers (2024-09-12T06:13:07Z)
- CURE: Privacy-Preserving Split Learning Done Right [1.388112207221632]
Homomorphic encryption (HE)-based solutions exist for this scenario but often impose prohibitive computational burdens.
CURE is a novel system that encrypts only the server side of the model and the data.
We demonstrate that CURE can achieve accuracy similar to plaintext SL while being 16x more efficient in terms of runtime.
arXiv Detail & Related papers (2024-07-12T04:10:19Z)
- Digital Twin-Assisted Data-Driven Optimization for Reliable Edge Caching in Wireless Networks [60.54852710216738]
We introduce a novel digital twin-assisted optimization framework, called D-REC, to ensure reliable caching in nextG wireless networks.
By incorporating reliability modules into a constrained decision process, D-REC can adaptively adjust actions, rewards, and states to comply with advantageous constraints.
arXiv Detail & Related papers (2024-06-29T02:40:28Z)
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution, by consolidating collaborative training across multiple data owners.
FedIT encounters limitations such as scarcity of instructional data and risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named as PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z)
- Benchmarking FedAvg and FedCurv for Image Classification Tasks [1.376408511310322]
This paper focuses on the problem of statistical heterogeneity of the data in the same federated network.
Several Federated Learning algorithms, such as FedAvg, FedProx and Federated Curvature (FedCurv) have already been proposed.
As a side product of this work, we release the non-IID versions of the datasets we used, so as to facilitate further comparisons from the FL community.
arXiv Detail & Related papers (2023-03-31T10:13:01Z)
- Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees [81.6471440778355]
We propose a Cross-network Social User Embedding framework, namely DP-CroSUE, to learn the comprehensive representations of users in a privacy-preserving way.
In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations for heterogeneous data types.
To further enhance user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through those aligned users.
arXiv Detail & Related papers (2022-09-04T06:22:37Z)
- Scotch: An Efficient Secure Computation Framework for Secure Aggregation [0.0]
Federated learning enables multiple data owners to jointly train a machine learning model without revealing their private datasets.
A malicious aggregation server might use the model parameters to derive sensitive information about the training dataset used.
We propose Scotch, a decentralized m-party secure-computation framework for federated aggregation.
arXiv Detail & Related papers (2022-01-19T17:16:35Z)
- FedOCR: Communication-Efficient Federated Learning for Scene Text Recognition [76.26472513160425]
We study how to make use of decentralized datasets for training a robust scene text recognizer.
To make FedOCR fairly suitable to be deployed on end devices, we make two improvements including using lightweight models and hashing techniques.
arXiv Detail & Related papers (2020-07-22T14:30:50Z)
- Concentrated Differentially Private and Utility Preserving Federated Learning [24.239992194656164]
Federated learning is a machine learning setting where a set of edge devices collaboratively train a model under the orchestration of a central server.
In this paper, we develop a federated learning approach that addresses the privacy challenge without much degradation on model utility.
We provide a tight end-to-end privacy guarantee of our approach and analyze its theoretical convergence rates.
arXiv Detail & Related papers (2020-03-30T19:20:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.