Stochastic Differentially Private and Fair Learning
- URL: http://arxiv.org/abs/2210.08781v2
- Date: Sat, 3 Jun 2023 21:49:35 GMT
- Title: Stochastic Differentially Private and Fair Learning
- Authors: Andrew Lowy, Devansh Gupta, Meisam Razaviyayn
- Abstract summary: We provide the first differentially private algorithm for fair learning that is guaranteed to converge.
Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds.
Our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes.
- Score: 7.971065005161566
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning models are increasingly used in high-stakes decision-making
systems. In such applications, a major concern is that these models sometimes
discriminate against certain demographic groups such as individuals with
certain race, gender, or age. Another major concern in these applications is
the violation of the privacy of users. While fair learning algorithms have been
developed to mitigate discrimination issues, these algorithms can still leak
sensitive information, such as individuals' health or financial records.
Utilizing the notion of differential privacy (DP), prior works aimed at
developing learning algorithms that are both private and fair. However,
existing algorithms for DP fair learning are either not guaranteed to converge
or require full batch of data in each iteration of the algorithm to converge.
In this paper, we provide the first stochastic differentially private algorithm
for fair learning that is guaranteed to converge. Here, the term "stochastic"
refers to the fact that our proposed algorithm converges even when minibatches
of data are used at each iteration (i.e. stochastic optimization). Our
framework is flexible enough to permit different fairness notions, including
demographic parity and equalized odds. In addition, our algorithm can be
applied to non-binary classification tasks with multiple (non-binary) sensitive
attributes. As a byproduct of our convergence analysis, we provide the first
utility guarantee for a DP algorithm for solving nonconvex-strongly concave
min-max problems. Our numerical experiments show that the proposed algorithm
consistently offers significant performance gains over the state-of-the-art
baselines, and can be applied to larger scale problems with non-binary
target/sensitive attributes.
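To make the minibatch min-max idea concrete, below is a minimal, hypothetical sketch of differentially private stochastic gradient descent-ascent on a fairness-regularized objective. The callable `grads_fn`, the clipping threshold, and the noise scale are illustrative assumptions, not the authors' exact algorithm or a calibrated privacy guarantee.

```python
import numpy as np

def dp_fair_sgda(grads_fn, theta, lam, n, n_iters=1000, batch_size=64,
                 lr_theta=0.01, lr_lam=0.1, clip=1.0, sigma=1.0, seed=0):
    """Hypothetical DP stochastic gradient descent-ascent (DP-SGDA) loop.

    grads_fn(theta, lam, batch) is assumed to return (g_theta, g_lam):
    per-example gradients w.r.t. the model parameters theta (minimized)
    and the dual/fairness variables lam (maximized), each of shape
    (batch_size, dim).
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_iters):
        batch = rng.choice(n, size=batch_size, replace=False)
        g_theta, g_lam = grads_fn(theta, lam, batch)

        # Clip each example's gradient so per-example sensitivity is bounded
        # by `clip`, then average over the minibatch.
        norms = np.linalg.norm(g_theta, axis=1, keepdims=True)
        g_theta = g_theta / np.maximum(1.0, norms / clip)
        g_bar = g_theta.mean(axis=0)

        # Gaussian noise scaled to the clipping bound privatizes the update
        # (the scale here is illustrative, not a calibrated guarantee).
        noise = rng.normal(0.0, sigma * clip / batch_size, size=g_bar.shape)

        theta = theta - lr_theta * (g_bar + noise)   # descent on model params
        lam = lam + lr_lam * g_lam.mean(axis=0)      # ascent on fairness duals
    return theta, lam
```

The descent step on the model parameters is privatized with per-example clipping and Gaussian noise, while the ascent step updates the (strongly concave) fairness dual variables, mirroring the nonconvex-strongly concave min-max structure described above.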
Related papers
- A Stochastic Optimization Framework for Private and Fair Learning From Decentralized Data [14.748203847227542]
We develop a novel algorithm for private and fair federated learning (FL).
Our algorithm satisfies inter-silo record-level differential privacy (ISRL-DP).
Experiments demonstrate the state-of-the-art fairness-accuracy tradeoffs of our algorithm across different privacy levels.
arXiv Detail & Related papers (2024-11-12T15:51:35Z) - A Gold Standard Dataset for the Reviewer Assignment Problem [117.59690218507565]
"Similarity score" is a numerical estimate of the expertise of a reviewer in reviewing a paper.
Our dataset consists of 477 self-reported expertise scores provided by 58 researchers.
For the task of ordering two papers in terms of their relevance for a reviewer, the error rates range from 12%-30% in easy cases to 36%-43% in hard cases.
arXiv Detail & Related papers (2023-03-23T16:15:03Z) - Differentially Private Federated Learning via Inexact ADMM with Multiple
Local Updates [0.0]
We develop a DP inexact alternating direction method of multipliers algorithm with multiple local updates for federated learning.
We show that our algorithm provides $\bar{\epsilon}$-DP for every iteration, where $\bar{\epsilon}$ is a privacy budget controlled by the user.
We demonstrate that our algorithm reduces the testing error by at most 31% compared with the existing DP algorithm, while achieving the same level of data privacy.
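As a rough illustration of what an inexact, differentially private local ADMM update could look like in this setting (a sketch under assumptions: the noise placement, objective, and parameter names below are not the paper's exact method):

```python
import numpy as np

def dp_local_admm_update(x_local, z_global, y_dual, grad_fn, rho=1.0,
                         local_steps=5, lr=0.1, clip=1.0, sigma=1.0, seed=0):
    """Hypothetical noisy local update for one client/silo.

    grad_fn(x) is assumed to return the gradient of the client's local loss;
    the augmented-Lagrangian term pulls x toward the global consensus z.
    """
    rng = np.random.default_rng(seed)
    x = x_local.copy()
    for _ in range(local_steps):
        g = grad_fn(x) + y_dual + rho * (x - z_global)
        g = g / max(1.0, np.linalg.norm(g) / clip)            # bound sensitivity
        g = g + rng.normal(0.0, sigma * clip, size=g.shape)   # Gaussian noise for DP
        x = x - lr * g                                        # inexact primal solve
    y_dual = y_dual + rho * (x - z_global)                    # dual update
    return x, y_dual
```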
arXiv Detail & Related papers (2022-02-18T19:58:47Z) - Simple Stochastic and Online Gradient Descent Algorithms for Pairwise
Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
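A minimal sketch of stochastic gradient descent for pairwise learning, assuming a pairwise hinge (AUC-style) loss over sampled positive/negative pairs and labels in {0, 1}; the specific loss is an illustrative assumption, not the paper's:

```python
import numpy as np

def pairwise_sgd(X, y, n_iters=10_000, lr=0.01, seed=0):
    """Illustrative pairwise SGD: each step samples one (positive, negative)
    pair and takes a gradient step on the pairwise hinge loss
    max(0, 1 - w @ (x_pos - x_neg))."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    for _ in range(n_iters):
        i, j = rng.choice(pos), rng.choice(neg)
        if w @ (X[i] - X[j]) < 1.0:      # margin not yet satisfied
            w += lr * (X[i] - X[j])      # hinge-loss gradient step
    return w
```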
arXiv Detail & Related papers (2021-11-23T18:10:48Z) - FAIRLEARN: Configurable and Interpretable Algorithmic Fairness [1.2183405753834557]
There is a need to mitigate any bias arising from either training samples or implicit assumptions made about the data samples.
Many approaches have been proposed to make learning algorithms fair by detecting and mitigating bias in different stages of optimization.
We propose the FAIRLEARN procedure that produces a fair algorithm by incorporating user constraints into the optimization procedure.
arXiv Detail & Related papers (2021-11-17T03:07:18Z) - Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
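A hedged sketch of a runtime-oriented bandit loop with censored feedback (timed-out runs are recorded at the cutoff); `run_fn`, the censoring handling, and the exploration constant are illustrative assumptions rather than the paper's construction:

```python
import numpy as np

def ucb_algorithm_selection(run_fn, n_algorithms, n_instances, cutoff, c=2.0):
    """Illustrative lower-confidence-bound selection: pick the algorithm with
    the most optimistic (lowest) estimated runtime, observe a runtime censored
    at the cutoff, and update the estimates."""
    counts = np.zeros(n_algorithms)
    runtime_sums = np.zeros(n_algorithms)
    choices = []
    for t in range(1, n_instances + 1):
        means = runtime_sums / np.maximum(counts, 1)
        bonus = np.sqrt(c * np.log(t + 1) / np.maximum(counts, 1))
        k = int(np.argmin(means - bonus))       # optimism under a loss to minimize
        runtime = min(run_fn(k, t), cutoff)     # censored observation
        counts[k] += 1
        runtime_sums[k] += runtime
        choices.append(k)
    return choices
```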
arXiv Detail & Related papers (2021-09-13T18:10:52Z) - Differentially Private Federated Learning via Inexact ADMM [0.0]
Differential privacy (DP) techniques can be applied to the federated learning model to protect data privacy against inference attacks.
We develop a DP inexact alternating direction method of multipliers algorithm that solves a sequence of trust-region subproblems.
Our algorithm reduces the testing error by at most 22% compared with the existing DP algorithm, while achieving the same level of data privacy.
arXiv Detail & Related papers (2021-06-11T02:28:07Z) - Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves for any k an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z) - Tighter Generalization Bounds for Iterative Differentially Private
Learning Algorithms [95.73230376153872]
This paper studies the relationship between generalization and privacy preservation in iterative learning algorithms via two sequential steps.
We prove that $(\varepsilon, \delta)$-differential privacy implies an on-average generalization bound for multi-database learning algorithms.
We then investigate how the iterative nature shared by most learning algorithms influences privacy preservation and, in turn, generalization.
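For reference, the standard $(\varepsilon, \delta)$-differential privacy guarantee that such generalization results build on requires, for every measurable output set $O$ and every pair of neighboring datasets $S, S'$:
\[
\Pr[\mathcal{A}(S) \in O] \;\le\; e^{\varepsilon}\, \Pr[\mathcal{A}(S') \in O] + \delta .
\]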
arXiv Detail & Related papers (2020-07-18T09:12:03Z) - Run2Survive: A Decision-theoretic Approach to Algorithm Selection based
on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
arXiv Detail & Related papers (2020-07-06T15:20:17Z) - Differentially Private k-Means Clustering with Guaranteed Convergence [5.335316436366718]
Iterative clustering algorithms help us learn the insights behind the data.
However, they may allow adversaries with some background knowledge to infer private information about individuals.
To protect individual privacy against such an inference attack, preserving differential privacy (DP) for the iterative clustering algorithms has been extensively studied.
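A minimal sketch of one differentially private Lloyd iteration (DPLloyd-style), in which per-cluster counts and coordinate sums are perturbed with Laplace noise before the centers are recomputed; the noise scales and the assumption that features are scaled to [0, 1] are illustrative, not calibrated guarantees:

```python
import numpy as np

def dp_kmeans_step(X, centers, epsilon, seed=0):
    """One illustrative DP Lloyd step: assign points to nearest centers,
    add Laplace noise to per-cluster counts and sums, recompute centers."""
    rng = np.random.default_rng(seed)
    k, d = centers.shape
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    assign = dists.argmin(axis=1)
    new_centers = centers.copy()
    for j in range(k):
        pts = X[assign == j]
        noisy_count = len(pts) + rng.laplace(0.0, 2.0 / epsilon)
        noisy_sum = pts.sum(axis=0) + rng.laplace(0.0, 2.0 * d / epsilon, size=d)
        if noisy_count > 1.0:                       # skip nearly empty clusters
            new_centers[j] = np.clip(noisy_sum / noisy_count, 0.0, 1.0)
    return new_centers
```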
arXiv Detail & Related papers (2020-02-03T22:53:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.