Fairness-aware Differentially Private Collaborative Filtering
- URL: http://arxiv.org/abs/2303.09527v1
- Date: Thu, 16 Mar 2023 17:44:39 GMT
- Title: Fairness-aware Differentially Private Collaborative Filtering
- Authors: Zhenhuan Yang, Yingqiang Ge, Congzhe Su, Dingxian Wang, Xiaoting Zhao,
Yiming Ying
- Abstract summary: We propose DP-Fair, a two-stage framework for collaborative filtering based algorithms.
Specifically, it combines differential privacy mechanisms with fairness constraints to protect user privacy while ensuring fair recommendations.
- Score: 22.815168994407358
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, there has been an increasing adoption of differential privacy
guided algorithms for privacy-preserving machine learning tasks. However, the
use of such algorithms comes with trade-offs in terms of algorithmic fairness,
which has been widely acknowledged. Specifically, we have empirically observed
that the classical collaborative filtering method, trained by differentially
private stochastic gradient descent (DP-SGD), results in a disparate impact on
user groups with respect to different user engagement levels. This, in turn,
causes the original unfair model to become even more biased against inactive
users. To address the above issues, we propose DP-Fair, a two-stage
framework for collaborative filtering based algorithms. Specifically, it
combines differential privacy mechanisms with fairness constraints to protect
user privacy while ensuring fair recommendations. The experimental results,
based on Amazon datasets, and user history logs collected from Etsy, one of the
largest e-commerce platforms, demonstrate that our proposed method exhibits
superior performance in terms of both overall accuracy and user group fairness
on both shallow and deep recommendation models compared to vanilla DP-SGD.
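The abstract attributes the disparate impact to training with DP-SGD, whose per-example clipping and noise injection are standard. The paper's own DP-Fair procedure is not detailed in this summary, so the sketch below shows only the generic DP-SGD step that serves as the baseline; the squared-error loss and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step on a squared-error loss (illustrative baseline).

    Each per-example gradient is clipped to L2 norm <= `clip`, the clipped
    gradients are summed, and Gaussian noise with standard deviation
    `noise_mult * clip` is added before averaging -- the mechanism whose
    uneven effect across user groups the abstract describes.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(y)
    grads = []
    for xi, yi in zip(X, y):
        g = 2.0 * (xi @ w - yi) * xi              # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / clip)  # clip to norm <= clip
        grads.append(g)
    noisy_sum = np.sum(grads, axis=0) + rng.normal(0.0, noise_mult * clip, size=w.shape)
    return w - lr * noisy_sum / n
```

With `noise_mult=0.0` this reduces to ordinary clipped SGD, which makes the clipping bias (the component the paper links to unfairness toward inactive users) easy to isolate experimentally.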
Related papers
- Online Clustering of Dueling Bandits [59.09590979404303]
We introduce the first "clustering of dueling bandit algorithms" to enable collaborative decision-making based on preference feedback.
We propose two novel algorithms: (1) Clustering of Linear Dueling Bandits (COLDB) which models the user reward functions as linear functions of the context vectors, and (2) Clustering of Neural Dueling Bandits (CONDB) which uses a neural network to model complex, non-linear user reward functions.
arXiv Detail & Related papers (2025-02-04T07:55:41Z)
- Differentially Private Random Feature Model [52.468511541184895]
We produce a differentially private random feature model for privacy-preserving kernel machines.
We show that our method preserves privacy and derive a generalization error bound for the method.
arXiv Detail & Related papers (2024-12-06T05:31:08Z)
- Personalized Federated Collaborative Filtering: A Variational AutoEncoder Approach [49.63614966954833]
Federated Collaborative Filtering (FedCF) is an emerging field focused on developing a new recommendation framework while preserving privacy.
Existing FedCF methods typically combine distributed Collaborative Filtering (CF) algorithms with privacy-preserving mechanisms, and then preserve personalized information into a user embedding vector.
This paper proposes a novel personalized FedCF method by preserving users' personalized information into a latent variable and a neural model simultaneously.
arXiv Detail & Related papers (2024-08-16T05:49:14Z)
- An Empirical Analysis of Fairness Notions under Differential Privacy [3.3748750222488657]
We show how different fairness notions, belonging to distinct classes of statistical fairness criteria, are impacted when one selects a model architecture suitable for DP-SGD.
These findings challenge the understanding that differential privacy will necessarily exacerbate unfairness in deep learning models trained on biased datasets.
arXiv Detail & Related papers (2023-02-06T16:29:50Z)
- Subject Granular Differential Privacy in Federated Learning [2.9439848714137447]
We propose two new algorithms that enforce subject level DP at each federation user locally.
Our first algorithm, called LocalGroupDP, is a straightforward application of group differential privacy in the popular DP-SGD algorithm.
Our second algorithm is based on a novel idea of hierarchical gradient averaging (HiGradAvgDP) for subjects participating in a training mini-batch.
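Subject-level DP requires bounding each subject's total influence on an update, not just each example's. The summary does not specify how LocalGroupDP or HiGradAvgDP do this, so the sketch below shows only the general idea of per-subject clipping as an assumed illustration; the function name and grouping scheme are my own.

```python
import numpy as np
from collections import defaultdict

def subject_level_clip(grads, subjects, clip=1.0):
    """Clip gradient contributions per subject rather than per example.

    `grads` is an (n, d) array of per-example gradients and `subjects`
    assigns each example to a subject. Each subject's summed gradient is
    clipped to L2 norm <= `clip`, so the sensitivity of the returned sum
    with respect to any one subject is bounded by `clip` -- the quantity
    subject-level DP noise must be calibrated to.
    """
    by_subject = defaultdict(list)
    for g, s in zip(grads, subjects):
        by_subject[s].append(g)
    total = np.zeros(grads.shape[1])
    for gs in by_subject.values():
        g_sum = np.sum(gs, axis=0)
        total += g_sum / max(1.0, np.linalg.norm(g_sum) / clip)
    return total
```

Clipping after the per-subject sum, rather than per example, is what distinguishes subject-level from example-level sensitivity: a subject with many records cannot amplify its influence beyond `clip`.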
arXiv Detail & Related papers (2022-06-07T23:54:36Z)
- Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z)
- Federated Learning for Face Recognition with Gradient Correction [52.896286647898386]
In this work, we introduce a framework, FedGC, to tackle federated learning for face recognition.
We show that FedGC constitutes a valid loss function similar to standard softmax.
arXiv Detail & Related papers (2021-12-14T09:19:29Z)
- Differentially Private Federated Learning on Heterogeneous Data [10.431137628048356]
Federated Learning (FL) is a paradigm for large-scale distributed learning.
It faces two key challenges: (i) efficient training from highly heterogeneous user data, and (ii) protecting the privacy of participating users.
We propose a novel FL approach to tackle these two challenges together by incorporating Differential Privacy (DP) constraints.
arXiv Detail & Related papers (2021-11-17T18:23:49Z)
- Private Alternating Least Squares: Practical Private Matrix Completion with Tighter Rates [34.023599653814415]
We study the problem of differentially private (DP) matrix completion under user-level privacy.
We design a joint differentially private variant of the popular Alternating-Least-Squares (ALS) method.
arXiv Detail & Related papers (2021-07-20T23:19:11Z)
- Antipodes of Label Differential Privacy: PATE and ALIBI [2.2761657094500682]
We consider the privacy-preserving machine learning (ML) setting where the trained model must satisfy differential privacy (DP).
We propose two novel approaches based on, respectively, the Laplace mechanism and the PATE framework.
We show how to achieve very strong privacy levels in some regimes, with our adaptation of the PATE framework.
arXiv Detail & Related papers (2021-06-07T08:14:32Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.