PADER: Paillier-based Secure Decentralized Social Recommendation
- URL: http://arxiv.org/abs/2601.10212v1
- Date: Thu, 15 Jan 2026 09:24:00 GMT
- Title: PADER: Paillier-based Secure Decentralized Social Recommendation
- Authors: Chaochao Chen, Jiaming Qian, Fei Zheng, Yachuan Liu, et al.
- Abstract summary: We propose PADER: a Paillier-based secure decentralized social recommendation system. The training and inference of the recommendation model are carried out securely in a decentralized manner. Experiment results show that our method only takes about one second to iterate through one user with hundreds of ratings.
- Score: 11.613857573146658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prevalence of recommendation systems also brings privacy concerns to both the users and the sellers, as centralized platforms collect as much data as possible from them. To keep the data private, we propose PADER: a Paillier-based secure decentralized social recommendation system. In this system, the users and the sellers are nodes in a decentralized network. The training and inference of the recommendation model are carried out securely in a decentralized manner, without the involvement of a centralized platform. To this end, we apply the Paillier cryptosystem to the SoReg (Social Regularization) model, which exploits both users' ratings and social relations. We view the SoReg model as a two-party secure polynomial evaluation problem and observe that the simple bipartite computation may result in poor efficiency. To improve efficiency, we design secure addition and multiplication protocols to support secure computation on any arithmetic circuit, along with an optimal data packing scheme that is suitable for the polynomial computations of real values. Experiment results show that our method only takes about one second to iterate through one user with hundreds of ratings, and training with ~500K ratings for one epoch only takes <3 hours, which shows that the method is practical in real applications. The code is available at https://github.com/GarminQ/PADER.
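The core primitive behind the abstract's secure computation is Paillier's additive homomorphism: multiplying two ciphertexts adds their plaintexts, and raising a ciphertext to a power scales its plaintext, which is what makes secure polynomial evaluation over encrypted ratings possible. A toy sketch of this property, with deliberately tiny (insecure) primes and illustrative names only, not the PADER code:

```python
import math
import random

def keygen(p=10007, q=10009):
    # Toy primes for illustration only; real Paillier needs ~2048-bit primes.
    n = p * q
    n2 = n * n
    lam = math.lcm(p - 1, q - 1)
    # With generator g = n + 1, L(g^lam mod n^2) = lam, so mu = lam^-1 mod n.
    mu = pow(lam, -1, n)
    return (n, n2), (lam, mu)

def encrypt(pk, m):
    n, n2 = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # Enc(m) = (1 + n)^m * r^n mod n^2, using g = n + 1
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, n2 = pk
    lam, mu = sk
    x = pow(c, lam, n2)          # = 1 + (m * lam mod n) * n  in Z_{n^2}
    return ((x - 1) // n * mu) % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 17), encrypt(pk, 25)
# Additive homomorphism: Dec(c1 * c2) = 17 + 25
assert decrypt(pk, sk, (c1 * c2) % pk[1]) == 42
# Scalar multiplication: Dec(c1^3) = 3 * 17
assert decrypt(pk, sk, pow(c1, 3, pk[1])) == 51
```

These two operations are exactly what a secure addition protocol gets for free; secure multiplication of two ciphertexts (and the real-valued data packing the abstract mentions) requires the additional two-party protocols the paper designs.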
Related papers
- D2M: A Decentralized, Privacy-Preserving, Incentive-Compatible Data Marketplace for Collaborative Learning [0.0]
We present D2M, a decentralized data marketplace that unifies federated learning, blockchain arbitration, and economic incentives into a single framework for privacy-preserving data sharing. D2M achieves up to 99% accuracy on MNIST and 90% on Fashion-MNIST, with less than 3% degradation with up to 30% Byzantine nodes, and 56% accuracy on CIFAR-10 despite its complexity.
arXiv Detail & Related papers (2025-12-11T07:38:05Z) - Information-Theoretic Decentralized Secure Aggregation with Collusion Resilience [95.33295072401832]
We study the problem of decentralized secure aggregation (DSA) from an information-theoretic perspective. We characterize the optimal rate region, which specifies the minimum achievable communication and secret key rates for DSA. Our results establish the fundamental performance limits of DSA, providing insights for the design of provably secure and communication-efficient protocols.
arXiv Detail & Related papers (2025-08-01T12:51:37Z) - DeSocial: Blockchain-based Decentralized Social Networks [32.576809043676775]
DeSocial is a decentralized social network learning framework deployed on a local development blockchain (Ganache). DeSocial coordinates the execution and returns model-wise prediction results, enabling each user to select the most suitable backbone for personalized social prediction. DeSocial uniformly selects several validation nodes that possess the algorithm specified by each user, and aggregates the prediction results by majority voting.
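The majority-voting aggregation step described in this summary is straightforward to sketch (a toy illustration; the function name and data shapes are assumptions, not DeSocial's code):

```python
from collections import Counter

def majority_vote(predictions):
    """Aggregate per-validator labels for one query by majority vote.

    predictions: list of labels, one per validation node.
    """
    counts = Counter(predictions)
    label, _ = counts.most_common(1)[0]
    return label

# Three of five validators predict a positive link for this user pair.
assert majority_vote([1, 0, 1, 1, 0]) == 1
```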
arXiv Detail & Related papers (2025-05-27T16:17:06Z) - Differentially private and decentralized randomized power method [15.955127242261808]
This paper proposes enhanced privacy-preserving variants of the randomized power method. First, we propose a variant that reduces the amount of noise required by current techniques to achieve Differential Privacy. Second, we adapt our method to a decentralized framework in which data is distributed among multiple users.
arXiv Detail & Related papers (2024-11-04T09:53:03Z) - Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
The regularized federated recommender system addresses both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z) - Incentives in Private Collaborative Machine Learning [56.84263918489519]
Collaborative machine learning involves training models on data from multiple parties.
We introduce differential privacy (DP) as an incentive.
We empirically demonstrate the effectiveness and practicality of our approach on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-04-02T06:28:22Z) - zkDFL: An efficient and privacy-preserving decentralized federated learning with zero-knowledge proof [3.517233208696287]
Federated learning (FL) has been widely adopted in various fields of study and business.
Traditional centralized FL systems suffer from serious issues.
We propose a zero-knowledge proof (ZKP)-based aggregator (zkDFL).
arXiv Detail & Related papers (2023-12-01T17:00:30Z) - Incentive-Aware Recommender Systems in Two-Sided Markets [49.692453629365204]
We propose a novel recommender system that aligns with agents' incentives while achieving myopically optimal performance.
Our framework models this incentive-aware system as a multi-agent bandit problem in two-sided markets.
Both algorithms satisfy an ex-post fairness criterion, which protects agents from over-exploitation.
arXiv Detail & Related papers (2022-11-23T22:20:12Z) - PEPPER: Empowering User-Centric Recommender Systems over Gossip Learning [0.0]
PEPPER is a decentralized recommender system based on gossip learning principles.
Our solution converges up to 42% faster than other decentralized solutions.
arXiv Detail & Related papers (2022-08-09T14:51:27Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework, BLADE-FL, that integrates blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z) - Privacy Preserving Point-of-interest Recommendation Using Decentralized Matrix Factorization [39.47675439197051]
We present a Decentralized MF (DMF) framework for POI recommendation.
Specifically, we propose a random-walk-based decentralized training technique to train MF models on each user's own device, e.g., a cell phone or tablet.
By doing so, each user's ratings stay in their own hands, and decentralized learning can be viewed as distributed learning with multiple learners.
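The idea of training MF locally on a user's own ratings can be sketched as a single on-device SGD pass (a minimal illustration; function and variable names, learning rate, and regularization are assumptions, not the paper's algorithm):

```python
import numpy as np

def local_mf_step(u, V, ratings, lr=0.05, reg=0.02):
    """One local SGD pass over this user's own ratings.

    u: this user's latent vector (stays on the device).
    V: item-factor matrix (in the paper, exchanged via random walks).
    ratings: {item_id: score} held only by this user.
    """
    for item, r in ratings.items():
        err = r - u @ V[item]                    # prediction error on one rating
        grad_u = err * V[item] - reg * u
        V[item] += lr * (err * u - reg * V[item])
        u = u + lr * grad_u
    return u, V

def sq_loss(u, V, ratings):
    return sum((r - u @ V[i]) ** 2 for i, r in ratings.items())

rng = np.random.default_rng(0)
u = rng.normal(scale=0.1, size=4)                # user factors
V = rng.normal(scale=0.1, size=(3, 4))           # item factors
ratings = {0: 4.0, 1: 2.0, 2: 5.0}               # this user's private ratings

before = sq_loss(u, V, ratings)
for _ in range(200):
    u, V = local_mf_step(u, V, ratings)
after = sq_loss(u, V, ratings)
assert after < before                            # local loss decreases
```

The key design point is that `ratings` and `u` never leave the device; only item factors circulate, which is what distinguishes this decentralized setting from a centrally trained MF model.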
arXiv Detail & Related papers (2020-03-12T04:08:05Z) - Byzantine-resilient Decentralized Stochastic Gradient Descent [85.15773446094576]
We present an in-depth study of the Byzantine resilience of decentralized learning systems.
We propose UBAR, a novel algorithm to enhance decentralized learning with Byzantine Fault Tolerance.
arXiv Detail & Related papers (2020-02-20T05:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.