Privacy-preserving Non-negative Matrix Factorization with Outliers
- URL: http://arxiv.org/abs/2211.01451v1
- Date: Wed, 2 Nov 2022 19:42:18 GMT
- Title: Privacy-preserving Non-negative Matrix Factorization with Outliers
- Authors: Swapnil Saha and Hafiz Imtiaz
- Abstract summary: We focus on developing a non-negative matrix factorization algorithm in the privacy-preserving framework.
We propose a novel privacy-preserving algorithm for non-negative matrix factorization capable of operating on private data.
We show our proposed framework's performance on six real data sets.
- Score: 2.84279467589473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-negative matrix factorization is a popular unsupervised machine learning
algorithm for extracting meaningful features from data which are inherently
non-negative. However, such data sets may often contain privacy-sensitive user
data, and therefore, we may need to take necessary steps to ensure the privacy
of the users while analyzing the data. In this work, we focus on developing a
non-negative matrix factorization algorithm in the privacy-preserving
framework. More specifically, we propose a novel privacy-preserving algorithm
for non-negative matrix factorization capable of operating on private data,
while achieving results comparable to those of the non-private algorithm. We
design the framework such that one can control the degree of the privacy
guarantee based on the utility gap. We show our proposed framework's
performance on six real data sets. The experimental results show that our
proposed method can achieve performance very close to that of the non-private
algorithm under certain parameter regimes, while ensuring strict privacy.
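The abstract does not spell out the mechanism, but differentially private NMF is commonly built by perturbing the data-dependent statistics of the multiplicative updates with the Gaussian mechanism. Below is a minimal sketch under that assumption; the sensitivity handling and the naive per-iteration accounting are illustrative only, not the authors' algorithm.

```python
import numpy as np

def dp_nmf(V, k, epsilon=1.0, delta=1e-5, n_iter=50, seed=0):
    """Multiplicative-update NMF with Gaussian noise on the data-dependent
    statistics (W^T V and V H^T).  The sensitivity bound and the
    per-iteration privacy accounting below are deliberately naive; a real
    implementation needs a careful sensitivity analysis and a tighter
    composition bound (e.g., Renyi DP accounting)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    # Crudely bound each column's (each user's) contribution.
    V = V / np.maximum(np.linalg.norm(V, axis=0), 1.0)
    W, H = rng.random((m, k)), rng.random((k, n))
    eps_t = epsilon / (2 * n_iter)                     # two noisy queries per iteration
    sigma = np.sqrt(2 * np.log(1.25 / delta)) / eps_t  # Gaussian-mechanism scale
    tiny = 1e-9
    for _ in range(n_iter):
        # Noise only the terms that touch the private data V; clamp to keep
        # the multiplicative updates non-negative.
        WtV = np.maximum(W.T @ V + rng.normal(0.0, sigma, (k, n)), tiny)
        H *= WtV / (W.T @ W @ H + tiny)
        VHt = np.maximum(V @ H.T + rng.normal(0.0, sigma, (m, k)), tiny)
        W *= VHt / (W @ H @ H.T + tiny)
    return W, H
```

Trading `epsilon` off against `n_iter` in such a sketch is exactly the kind of privacy-utility knob the abstract describes.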
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- CURATE: Scaling-up Differentially Private Causal Graph Discovery [8.471466670802817]
Differential Privacy (DP) has been adopted to ensure user privacy in Causal Graph Discovery (CGD).
We present CURATE, a DP-CGD framework with adaptive privacy budgeting.
We show that CURATE achieves higher utility than existing DP-CGD algorithms with less privacy leakage.
arXiv Detail & Related papers (2024-09-27T18:00:38Z)
- Private Optimal Inventory Policy Learning for Feature-based Newsvendor with Unknown Demand [13.594765018457904]
This paper introduces a novel approach to estimate a privacy-preserving optimal inventory policy within the f-differential privacy framework.
We develop a clipped noisy gradient descent algorithm based on convolution smoothing for optimal inventory estimation.
Our numerical experiments demonstrate that the proposed method achieves the desired privacy protection with only a marginal increase in cost.
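The entry names the key ingredients (clipping, noise, gradient descent); here is a generic sketch of clipped noisy gradient descent, with `grad_fn`, `clip`, and `sigma` as illustrative parameters. The convolution smoothing of the newsvendor loss and the f-DP calibration of `sigma` are omitted.

```python
import numpy as np

def clipped_noisy_gd(grad_fn, theta0, data, clip=1.0, sigma=2.0,
                     lr=0.1, n_iter=200, seed=0):
    """Generic clipped noisy gradient descent (in the spirit of DP-SGD).

    grad_fn(theta, x) returns the per-example gradient.  Each gradient is
    clipped to L2 norm `clip`, averaged, and perturbed with Gaussian noise;
    `sigma` must be calibrated separately to the target privacy guarantee."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    n = len(data)
    for _ in range(n_iter):
        grads = np.stack([grad_fn(theta, x) for x in data])
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip)   # per-example clipping
        noisy = grads.mean(axis=0) + rng.normal(0.0, sigma * clip / n, theta.shape)
        theta = theta - lr * noisy
    return theta
```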
arXiv Detail & Related papers (2024-04-23T19:15:43Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters.
It can achieve a personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
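One plausible reading of that per-parameter trade-off is a noise scale chosen individually for every coordinate of a client's update; a hypothetical sketch, not the authors' exact mechanism:

```python
import numpy as np

def distort_update(update, noise_scale, seed=None):
    """Perturb a client's parameter update before communication.  Because
    noise_scale has the same shape as update, each parameter gets its own
    utility-privacy trade-off (hypothetical illustration)."""
    rng = np.random.default_rng(seed)
    return update + rng.normal(size=update.shape) * noise_scale
```

The server would then aggregate `distort_update(u_i, scale_i)` across clients each round, with `scale_i` re-chosen per client and per round.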
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Privacy-Preserving Matrix Factorization for Recommendation Systems using Gaussian Mechanism [2.84279467589473]
We propose a privacy-preserving recommendation system based on the differential privacy framework and matrix factorization.
As differential privacy is a powerful and robust mathematical framework for designing privacy-preserving machine learning algorithms, it is possible to prevent adversaries from extracting sensitive user information.
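For reference, the Gaussian mechanism named in the title adds noise calibrated to an L2 sensitivity; applying it to, say, the gradients or sufficient statistics of a matrix-factorization recommender is this sketch's assumption, not a detail given in the entry.

```python
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta, seed=None):
    """Classic (epsilon, delta)-DP Gaussian mechanism (analysis valid for
    epsilon < 1): sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * l2_sensitivity / epsilon
    return value + rng.normal(0.0, sigma, np.shape(value))
```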
arXiv Detail & Related papers (2023-04-11T13:50:39Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it faces challenges when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
- Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
The local exchange of estimates makes it possible for agents to infer one another's private data.
Existing schemes add perturbations chosen independently at every agent, resulting in a significant performance loss.
We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to remain invisible in the network average.
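A minimal sketch of one reading of that nullspace condition: draw the agents' perturbations jointly so they cancel in the network average (the actual graph-homomorphic construction is more refined than this).

```python
import numpy as np

def zero_sum_perturbations(shape, n_agents, sigma=1.0, seed=0):
    """Draw one perturbation per agent such that the perturbations sum to
    zero across the network, i.e., they lie in the nullspace of the
    averaging operation and leave the network mean untouched.

    `shape` is the (tuple) shape of each agent's local estimate."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, (n_agents,) + shape)
    return noise - noise.mean(axis=0)   # project onto the zero-sum subspace
```

Agent `i` adds `noise[i]` to the estimate it shares; individually each message is masked, but the perturbations vanish in the aggregate.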
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics [2.614355818010333]
We provide strong privacy loss bounds when an analyst may select pure DP, bounded range (e.g. exponential mechanisms) or concentrated DP mechanisms in any order.
We also provide optimal privacy loss bounds that apply when an analyst can select pure DP and bounded range mechanisms in a batch.
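For context, the two classical baselines that such bounds improve on are easy to compute; these are the standard pure-DP basic composition and the advanced composition theorem, not the paper's tighter bounds.

```python
import numpy as np

def basic_composition(epsilons):
    """Pure-DP basic composition: privacy losses simply add up."""
    return sum(epsilons)

def advanced_composition(epsilon, k, delta_prime):
    """Advanced composition for k adaptive (epsilon, 0)-DP mechanisms:
    the total is (eps', delta')-DP with
    eps' = eps*sqrt(2k ln(1/delta')) + k*eps*(e^eps - 1)."""
    return (epsilon * np.sqrt(2 * k * np.log(1 / delta_prime))
            + k * epsilon * (np.exp(epsilon) - 1))
```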
arXiv Detail & Related papers (2020-04-15T17:33:10Z)
- Privacy-Preserving Public Release of Datasets for Support Vector Machine Classification [14.095523601311374]
We consider the problem of publicly releasing a dataset for support vector machine classification while not infringing on the privacy of data subjects.
The dataset is systematically obfuscated using additive noise for privacy protection.
Conditions are established for ensuring that the classifier extracted from the original dataset and the obfuscated one are close to each other.
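A minimal sketch of that release-then-train pattern, assuming Gaussian obfuscation and using scikit-learn's LinearSVC as the classifier; `noise_std` is illustrative, whereas the paper derives conditions on the noise rather than fixing a value.

```python
import numpy as np
from sklearn.svm import LinearSVC

def release_and_train(X, y, noise_std=0.1, seed=0):
    """Obfuscate the feature matrix with additive noise before publication,
    then train an SVM on the released data.  The quality of the resulting
    classifier degrades gracefully as noise_std grows."""
    rng = np.random.default_rng(seed)
    X_released = X + rng.normal(0.0, noise_std, X.shape)
    clf = LinearSVC().fit(X_released, y)
    return X_released, clf
```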
arXiv Detail & Related papers (2019-12-29T03:32:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.