Privacy Preserving Point-of-interest Recommendation Using Decentralized
Matrix Factorization
- URL: http://arxiv.org/abs/2003.05610v1
- Date: Thu, 12 Mar 2020 04:08:05 GMT
- Title: Privacy Preserving Point-of-interest Recommendation Using Decentralized
Matrix Factorization
- Authors: Chaochao Chen, Ziqi Liu, Peilin Zhao, Jun Zhou, Xiaolong Li
- Abstract summary: We present a Decentralized MF (DMF) framework for POI recommendation.
Specifically, we propose a random walk based decentralized training technique to train MF models on each user's end, e.g., cell phones and tablets.
By doing so, each user's ratings remain in their own hands; moreover, decentralized learning can be treated as distributed learning with multiple learners.
- Score: 39.47675439197051
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point-of-interest (POI) recommendation has drawn much attention
recently due to the increasing popularity of location-based networks, e.g.,
Foursquare and Yelp. Among the existing approaches to POI recommendation,
Matrix Factorization (MF) based techniques have proven to be effective.
However, existing MF approaches suffer from two major problems: (1) Expensive
computation and storage due to the centralized model training mechanism: the
centralized learners have to maintain the whole user-item rating matrix and
potentially huge low-rank matrices. (2) Privacy issues: the users' preferences
are at risk of leaking to malicious attackers via the centralized learner. To
solve these, we present a Decentralized MF (DMF) framework for POI
recommendation. Specifically, instead of maintaining all the low rank matrices
and sensitive rating data for training, we propose a random walk based
decentralized training technique to train MF models on each user's end, e.g.,
cell phones and tablets. By doing so, each user's ratings remain in their own
hands; moreover, decentralized learning can be treated as distributed learning
with multiple learners (users), which alleviates the
computation and storage issue. Experimental results on two real-world datasets
demonstrate that, compared with classic and state-of-the-art latent factor
models, DMF significantly improves the recommendation performance in terms
of precision and recall.
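The random-walk training loop described in the abstract can be sketched in a few lines. The sketch below is an illustration, not the authors' implementation: the synthetic ratings, ring-shaped user graph, and plain SGD updates are all assumptions; only the overall pattern (private user factors stay on-device while the item-factor matrix hops between neighboring users along a random walk) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, k = 8, 12, 4
lr, reg, steps = 0.05, 0.01, 2000

# Synthetic private ratings: each user holds only their own row (hypothetical data).
true_U = rng.normal(size=(n_users, k))
true_V = rng.normal(size=(n_items, k))
ratings = true_U @ true_V.T + 0.1 * rng.normal(size=(n_users, n_items))

# Private user factors stay on-device; the item-factor matrix V travels
# along a random walk over a user graph (here: a simple ring of neighbors).
U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))
neighbors = {u: [(u - 1) % n_users, (u + 1) % n_users] for u in range(n_users)}

walker = 0  # index of the user currently holding V
for _ in range(steps):
    u = walker
    # Local SGD on one of this user's own ratings -- raw ratings never leave the device.
    i = rng.integers(n_items)
    err = ratings[u, i] - U[u] @ V[i]
    U[u] += lr * (err * V[i] - reg * U[u])
    V[i] += lr * (err * U[u] - reg * V[i])
    # Pass V to a uniformly chosen neighbor (the random-walk step).
    walker = rng.choice(neighbors[u])

rmse = np.sqrt(np.mean((ratings - U @ V.T) ** 2))
print(f"train RMSE after random-walk training: {rmse:.3f}")
```

Only the item-factor matrix and the walker token are ever communicated; each user's rating row and latent vector never leave that user's device.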
Related papers
- Bridging VLMs and Embodied Intelligence with Deliberate Practice Policy Optimization [72.20212909644017]
Deliberate Practice Policy Optimization (DPPO) is a metacognitive "Metaloop" training framework.
DPPO alternates between supervised fine-tuning (competence expansion) and reinforcement learning (skill refinement).
Empirically, training a vision-language embodied model with DPPO, referred to as Pelican-VL 1.0, yields a 20.3% performance improvement over the base model.
We are open-sourcing both the models and code, providing the first systematic framework that alleviates the data and resource bottleneck.
arXiv Detail & Related papers (2025-11-20T17:58:04Z) - Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
Federated recommender systems address both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z) - DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning using Packed Secret Sharing [51.336015600778396]
Federated Learning (FL) has gained lots of traction recently, both in industry and academia.
In FL, a machine learning model is trained using data from various end-users arranged in committees across several rounds.
Since such data can often be sensitive, a primary challenge in FL is providing privacy while still retaining utility of the model.
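A common building block for this privacy goal is secret sharing: each client splits its update into random shares so that aggregators only ever learn the sum. The fragment below sketches plain additive sharing over a prime field as a simpler stand-in for the packed (Shamir-style) sharing the paper uses; the modulus, the integer-encoded gradients, and the three-party setup are illustrative assumptions.

```python
import random

PRIME = 2**61 - 1  # field modulus (illustrative choice)

def share(secret, n):
    """Split an integer into n additive shares mod PRIME."""
    parts = [random.randrange(PRIME) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % PRIME)
    return parts

# Three clients secret-share their (integer-encoded) gradients; each of
# three aggregators sums one share slot, so no single party sees a raw
# gradient, yet the slot sums combine to the true aggregate.
grads = [7, 11, 4]
shares = [share(g, 3) for g in grads]
slot_sums = [sum(col) % PRIME for col in zip(*shares)]
aggregate = sum(slot_sums) % PRIME
print(aggregate)  # equals 7 + 11 + 4 = 22
```

Because the shares of each gradient sum to the gradient modulo the prime, summing everything recovers exactly the aggregate while each individual share is uniformly random.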
arXiv Detail & Related papers (2024-10-21T16:25:14Z) - Fed-Credit: Robust Federated Learning with Credibility Management [18.349127735378048]
Federated Learning (FL) is an emerging machine learning approach enabling model training on decentralized devices or data sources.
We propose a robust FL approach based on the credibility management scheme, called Fed-Credit.
The results exhibit superior accuracy and resilience against adversarial attacks, all while maintaining comparatively low computational complexity.
arXiv Detail & Related papers (2024-05-20T03:35:13Z) - Enabling Quartile-based Estimated-Mean Gradient Aggregation As Baseline
for Federated Image Classifications [5.5099914877576985]
Federated Learning (FL) has revolutionized how we train deep neural networks by enabling decentralized collaboration while safeguarding sensitive data and improving model performance.
This paper introduces an innovative solution named Estimated Mean Aggregation (EMA) that not only addresses these challenges but also provides a fundamental reference point as a baseline for advanced aggregation techniques in FL systems.
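One plausible reading of quartile-based estimated-mean aggregation is a coordinate-wise trimmed average: per-coordinate values outside the interquartile range are discarded before averaging, which blunts Byzantine outliers. The function name, the toy client updates, and the exact trimming rule below are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def quartile_estimated_mean(updates):
    """Aggregate client updates coordinate-wise, keeping only values inside
    the interquartile range before averaging (a hypothetical reading of
    quartile-based estimated-mean aggregation)."""
    updates = np.asarray(updates)          # shape: (n_clients, n_params)
    q1 = np.percentile(updates, 25, axis=0)
    q3 = np.percentile(updates, 75, axis=0)
    inside = (updates >= q1) & (updates <= q3)
    # Average only the in-range values per coordinate; the middle order
    # statistics always fall in the IQR, so each column keeps >= 1 value.
    return np.where(inside, updates, 0.0).sum(axis=0) / inside.sum(axis=0)

# Honest clients cluster near 1.0; one Byzantine client sends a huge update.
clients = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [100.0, -50.0]]
print(quartile_estimated_mean(clients))  # stays close to the honest mean
```

A plain mean of these updates would be dragged far from the honest cluster by the single outlier, while the IQR-trimmed estimate stays near it.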
arXiv Detail & Related papers (2023-09-21T17:17:28Z) - Data augmentation and refinement for recommender system: A
semi-supervised approach using maximum margin matrix factorization [3.3525248693617207]
We explore the data augmentation and refinement aspects of Maximum Margin Matrix Factorization (MMMF) for rating predictions.
We exploit the inherent characteristics of CF algorithms to assess the confidence level of individual ratings.
We propose a semi-supervised approach for rating augmentation based on self-training.
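The self-training loop described here — fit a factorization, promote confident predictions to pseudo-labels, refit on the augmented matrix — can be sketched with plain SGD matrix factorization standing in for MMMF. The toy rating matrix, the in-range confidence proxy, and the single augmentation round are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def factorize(R, mask, k=2, lr=0.05, reg=0.02, epochs=200):
    """Plain SGD matrix factorization over observed entries (a simple
    stand-in for MMMF)."""
    n, m = R.shape
    U = rng.normal(scale=0.1, size=(n, k))
    V = rng.normal(scale=0.1, size=(m, k))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for u, i in zip(rows, cols):
            err = R[u, i] - U[u] @ V[i]
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

# Toy ratings with missing entries (0 = unobserved; hypothetical data).
R = np.array([[5., 4., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [0., 1., 5., 4.]])
mask = R > 0

# One self-training round: fit, then promote in-range predictions on
# unobserved cells to pseudo-labels (a crude confidence proxy).
U, V = factorize(R, mask)
pred = U @ V.T
confident = (~mask) & (pred >= 1) & (pred <= 5)
R_aug = np.where(confident, pred, R)
mask_aug = mask | confident
U2, V2 = factorize(R_aug, mask_aug)  # refit on the augmented matrix
```

In practice the confidence test would use the model's own margin or uncertainty rather than a fixed rating range, and the augment-and-refit cycle would repeat until no new pseudo-labels are added.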
arXiv Detail & Related papers (2023-06-22T17:17:45Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices)
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Distributed Reinforcement Learning for Privacy-Preserving Dynamic Edge
Caching [91.50631418179331]
A privacy-preserving distributed deep policy gradient (P2D3PG) method is proposed to maximize the cache hit rates of devices in MEC networks.
We convert the distributed optimizations into model-free Markov decision process problems and then introduce a privacy-preserving federated learning method for popularity prediction.
arXiv Detail & Related papers (2021-10-20T02:48:27Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z) - Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge
Computing [113.52575069030192]
Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
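Global-variable consensus ADMM for a least-squares objective split across edge nodes can be sketched as follows. The code uses full local batches rather than the paper's mini-batch and coded variants, and the synthetic shards, penalty parameter rho, and iteration count are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each edge node holds a private shard (A_i, b_i) of a least-squares problem.
n_nodes, d, rho, iters = 4, 3, 1.0, 50
w_true = np.array([1.0, -2.0, 0.5])
shards = []
for _ in range(n_nodes):
    A = rng.normal(size=(20, d))
    b = A @ w_true + 0.01 * rng.normal(size=20)
    shards.append((A, b))

x = np.zeros((n_nodes, d))   # local estimates
u = np.zeros((n_nodes, d))   # scaled dual variables
z = np.zeros(d)              # shared consensus variable

for _ in range(iters):
    # x-update: each node solves its own regularized least-squares subproblem.
    for i, (A, b) in enumerate(shards):
        x[i] = np.linalg.solve(A.T @ A + rho * np.eye(d),
                               A.T @ b + rho * (z - u[i]))
    z = (x + u).mean(axis=0)  # z-update: consensus by averaging
    u += x - z                # dual update

print("consensus estimate:", np.round(z, 2))
```

Only the local estimates and duals are exchanged with the averaging step; the raw data shards never leave their nodes, which matches the decentralized-extraction motivation above.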
arXiv Detail & Related papers (2020-10-02T10:41:59Z) - Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z) - A High-Performance Implementation of Bayesian Matrix Factorization with
Limited Communication [10.639704288188767]
Matrix factorization algorithms can quantify uncertainty in their predictions and avoid over-fitting.
They have not been widely used on large-scale data because of their prohibitive computational cost.
We show that the state-of-the-art of both approaches to scalability can be combined.
arXiv Detail & Related papers (2020-04-06T11:16:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.