Fairness-aware Differentially Private Collaborative Filtering
- URL: http://arxiv.org/abs/2303.09527v1
- Date: Thu, 16 Mar 2023 17:44:39 GMT
- Title: Fairness-aware Differentially Private Collaborative Filtering
- Authors: Zhenhuan Yang, Yingqiang Ge, Congzhe Su, Dingxian Wang, Xiaoting Zhao,
Yiming Ying
- Abstract summary: We propose DP-Fair, a two-stage framework for collaborative filtering based algorithms.
Specifically, it combines differential privacy mechanisms with fairness constraints to protect user privacy while ensuring fair recommendations.
- Score: 22.815168994407358
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, there has been an increasing adoption of differential privacy
guided algorithms for privacy-preserving machine learning tasks. However, the
use of such algorithms comes with trade-offs in terms of algorithmic fairness,
which has been widely acknowledged. Specifically, we have empirically observed
that the classical collaborative filtering method, trained by differentially
private stochastic gradient descent (DP-SGD), results in a disparate impact on
user groups with respect to different user engagement levels. This, in turn,
causes the original unfair model to become even more biased against inactive
users. To address the above issues, we propose \textbf{DP-Fair}, a two-stage
framework for collaborative filtering based algorithms. Specifically, it
combines differential privacy mechanisms with fairness constraints to protect
user privacy while ensuring fair recommendations. The experimental results,
based on Amazon datasets and user history logs collected from Etsy, one of the
largest e-commerce platforms, demonstrate that our proposed method exhibits
superior performance in terms of both overall accuracy and user group fairness
on both shallow and deep recommendation models compared to vanilla DP-SGD.
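For concreteness, the sketch below illustrates the vanilla baseline discussed in the abstract: DP-SGD (per-example gradient clipping plus calibrated Gaussian noise) applied to a toy matrix-factorization recommender, followed by a crude measurement of the active/inactive user gap in reconstruction error. It is not the authors' DP-Fair framework, and the data, function names, and hyperparameters are illustrative assumptions only.

```python
# Illustrative sketch only: DP-SGD on a toy matrix-factorization recommender.
# Not the paper's DP-Fair method; data and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, dim = 50, 40, 8
# Toy implicit-feedback matrix: 1 where a (user, item) interaction exists.
R = (rng.random((n_users, n_items)) < 0.1).astype(float)

U = 0.1 * rng.standard_normal((n_users, dim))   # user embeddings
V = 0.1 * rng.standard_normal((n_items, dim))   # item embeddings

clip_norm = 1.0    # per-example L2 clipping bound C
noise_mult = 1.0   # noise multiplier sigma (governs the privacy budget)
lr = 0.05
batch_size = 64

def dp_sgd_step(U, V, R):
    """One DP-SGD step over a random batch of observed (user, item) pairs."""
    users, items = np.nonzero(R)
    idx = rng.choice(len(users), size=min(batch_size, len(users)), replace=False)
    grad_U, grad_V = np.zeros_like(U), np.zeros_like(V)
    for u, i in zip(users[idx], items[idx]):
        err = U[u] @ V[i] - R[u, i]          # gradient of squared error
        g_u, g_v = err * V[i], err * U[u]
        # Clip each example's joint gradient to L2 norm <= clip_norm.
        norm = np.sqrt(g_u @ g_u + g_v @ g_v)
        scale = min(1.0, clip_norm / (norm + 1e-12))
        grad_U[u] += scale * g_u
        grad_V[i] += scale * g_v
    # Add Gaussian noise scaled to the clipping bound, then average.
    sigma = noise_mult * clip_norm
    grad_U = (grad_U + sigma * rng.standard_normal(grad_U.shape)) / batch_size
    grad_V = (grad_V + sigma * rng.standard_normal(grad_V.shape)) / batch_size
    U -= lr * grad_U
    V -= lr * grad_V

for _ in range(200):
    dp_sgd_step(U, V, R)

# Crude disparate-impact probe: compare reconstruction error for "active"
# vs. "inactive" users, split here by interaction count.
activity = R.sum(axis=1)
active = activity >= np.median(activity)
err = ((U @ V.T - R) ** 2).mean(axis=1)
print("active-user MSE:  ", err[active].mean())
print("inactive-user MSE:", err[~active].mean())
```

A two-stage pipeline in the spirit of the abstract would then adjust the privately trained model so that such per-group error gaps shrink; that stage is deliberately omitted here because the paper's exact fairness constraints are not reproduced in this summary.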
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on the data and allows non-sensitive temporal regions to be defined without DP application, or differential privacy to be combined with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - Personalized Federated Collaborative Filtering: A Variational AutoEncoder Approach [49.63614966954833]
Federated Collaborative Filtering (FedCF) is an emerging field focused on developing a new recommendation framework while preserving privacy.
This paper proposes a novel personalized FedCF method that simultaneously preserves users' personalized information in a latent variable and a neural model.
To effectively train the proposed framework, we model the problem as a specialized Variational AutoEncoder (VAE) task by integrating user interaction vector reconstruction with missing value prediction.
arXiv Detail & Related papers (2024-08-16T05:49:14Z) - An Empirical Analysis of Fairness Notions under Differential Privacy [3.3748750222488657]
We show how different fairness notions, belonging to distinct classes of statistical fairness criteria, are impacted when one selects a model architecture suitable for DP-SGD.
These findings challenge the understanding that differential privacy will necessarily exacerbate unfairness in deep learning models trained on biased datasets.
arXiv Detail & Related papers (2023-02-06T16:29:50Z) - Differentially Private Federated Clustering over Non-IID Data [59.611244450530315]
The federated clustering (FedC) problem aims to accurately partition unlabeled data samples distributed over massive clients into a finite number of clusters under the orchestration of a server.
We propose a novel FedC algorithm incorporating differential privacy, referred to as DP-Fed, in which partial participation of multiple clients is also considered.
Various properties of the proposed DP-Fed are established through theoretical analyses of its privacy protection, especially for the case of non-independent and identically distributed (non-i.i.d.) data.
arXiv Detail & Related papers (2023-01-03T05:38:43Z) - Subject Granular Differential Privacy in Federated Learning [2.9439848714137447]
We propose two new algorithms that enforce subject-level DP locally at each federation user.
Our first algorithm, called LocalGroupDP, is a straightforward application of group differential privacy in the popular DP-SGD algorithm.
Our second algorithm is based on a novel idea of hierarchical gradient averaging (HiGradAvgDP) for subjects participating in a training mini-batch.
arXiv Detail & Related papers (2022-06-07T23:54:36Z) - Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z) - Federated Learning for Face Recognition with Gradient Correction [52.896286647898386]
In this work, we introduce a framework, FedGC, to tackle federated learning for face recognition.
We show that FedGC constitutes a valid loss function similar to standard softmax.
arXiv Detail & Related papers (2021-12-14T09:19:29Z) - Differentially Private Federated Learning on Heterogeneous Data [10.431137628048356]
Federated Learning (FL) is a paradigm for large-scale distributed learning.
It faces two key challenges: (i) efficient training from highly heterogeneous user data, and (ii) protecting the privacy of participating users.
We propose a novel FL approach to tackle these two challenges together by incorporating Differential Privacy (DP) constraints.
arXiv Detail & Related papers (2021-11-17T18:23:49Z) - Private Alternating Least Squares: Practical Private Matrix Completion with Tighter Rates [34.023599653814415]
We study the problem of differentially private (DP) matrix completion under user-level privacy.
We design a joint differentially private variant of the popular Alternating-Least-Squares (ALS) method; an illustrative sketch of where such noise can enter an ALS update appears after this list.
arXiv Detail & Related papers (2021-07-20T23:19:11Z) - Antipodes of Label Differential Privacy: PATE and ALIBI [2.2761657094500682]
We consider the privacy-preserving machine learning (ML) setting where the trained model must satisfy differential privacy (DP).
We propose two novel approaches based on, respectively, the Laplace mechanism and the PATE framework.
We show how to achieve very strong privacy levels in some regimes with our adaptation of the PATE framework.
arXiv Detail & Related papers (2021-06-07T08:14:32Z) - Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
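As a companion to the Private Alternating Least Squares entry above, the small sketch below shows one plausible place where noise can enter an ALS-style matrix-completion solver: the sufficient statistics of a single ridge-regression item update are perturbed with Gaussian noise before solving. The noise scales, names, and toy data are assumptions for illustration; this is not the cited paper's joint-DP ALS algorithm or its privacy calibration.

```python
# Hedged sketch: a noisy alternating-least-squares item update.
# Not the cited paper's algorithm; noise scales and data are assumptions.
import numpy as np

rng = np.random.default_rng(1)
dim, lam, sigma = 8, 0.1, 0.5   # embedding size, ridge term, noise scale (assumed)

def noisy_item_update(item_ratings, user_vecs):
    """Ridge-regression ALS update for one item, computed from the users who
    rated it, with Gaussian noise added to the sufficient statistics."""
    A = user_vecs.T @ user_vecs + lam * np.eye(dim)   # Gram matrix
    b = user_vecs.T @ item_ratings                    # right-hand side
    N = rng.standard_normal((dim, dim)) * sigma
    A_noisy = A + (N + N.T) / 2                       # symmetric noise on A
    b_noisy = b + rng.standard_normal(dim) * sigma    # noise on b
    return np.linalg.solve(A_noisy, b_noisy)

# Toy usage: 20 users with 8-dimensional embeddings rated one item.
user_vecs = rng.standard_normal((20, dim))
item_ratings = rng.standard_normal(20)
print(noisy_item_update(item_ratings, user_vecs))
```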
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.