The Fair Value of Data Under Heterogeneous Privacy Constraints in
Federated Learning
- URL: http://arxiv.org/abs/2301.13336v2
- Date: Sun, 4 Feb 2024 21:32:55 GMT
- Title: The Fair Value of Data Under Heterogeneous Privacy Constraints in
Federated Learning
- Authors: Justin Kang, Ramtin Pedarsani, Kannan Ramchandran
- Abstract summary: This paper puts forth an idea for a \textit{fair} amount to compensate users for their data at a given privacy level, based on an axiomatic definition of fairness.
We also formulate a heterogeneous federated learning problem for the platform with privacy level options for users.
- Score: 26.53734856637336
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern data aggregation often involves a platform collecting data from a
network of users with various privacy options. Platforms must solve the problem
of how to allocate incentives to users to convince them to share their data.
This paper puts forth an idea for a \textit{fair} amount to compensate users
for their data at a given privacy level based on an axiomatic definition of
fairness, along the lines of the celebrated Shapley value. To the best of our
knowledge, these are the first fairness concepts for data that explicitly
consider privacy constraints. We also formulate a heterogeneous federated
learning problem for the platform with privacy level options for users. By
studying this problem, we investigate the amount of compensation users receive
under fair allocations with different privacy levels, amounts of data, and
degrees of heterogeneity. We also discuss what happens when the platform is
forced to design fair incentives. Under certain conditions we find that when
privacy sensitivity is low, the platform will set incentives to ensure that it
collects all the data with the lowest privacy options. When the privacy
sensitivity is above a given threshold, the platform will provide no incentives
to users. Between these two extremes, the platform will set the incentives so
that some fraction of the users choose the higher privacy option and the rest
choose the lower privacy option.
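As a rough illustration of the kind of computation a Shapley-style fair value involves, here is a minimal Python sketch assuming a hypothetical additive platform utility in which each user's data is discounted by the privacy level they choose; the utility function, discount factors, and resulting payments are illustrative assumptions, not the paper's model:

```python
from itertools import combinations
from math import factorial

def utility(coalition, data, discount):
    # Hypothetical platform utility: each member contributes their data
    # amount, discounted by their chosen privacy level (an assumption,
    # not the paper's exact model).
    return sum(data[i] * discount[i] for i in coalition)

def shapley_values(users, data, discount):
    n = len(users)
    values = {u: 0.0 for u in users}
    for u in users:
        others = [v for v in users if v != u]
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = (utility(subset + (u,), data, discount)
                            - utility(subset, data, discount))
                values[u] += weight * marginal
    return values

users = [0, 1, 2]
data = {0: 10.0, 1: 10.0, 2: 10.0}    # same amount of data each
discount = {0: 1.0, 1: 1.0, 2: 0.4}   # user 2 demands more privacy
print(shapley_values(users, data, discount))  # user 2 is paid less
```

For this additive toy utility the fair payment reduces to each user's own discounted contribution, so the stronger privacy demand directly lowers user 2's compensation; the regimes described in the abstract arise when utilities and privacy costs interact less trivially.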
Related papers
- Federated Transfer Learning with Differential Privacy [21.50525027559563]
We formulate the notion of \textit{federated differential privacy}, which offers privacy guarantees for each data set without assuming a trusted central server.
We show that federated differential privacy is an intermediate privacy model between the well-established local and central models of differential privacy.
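For intuition about how a silo-level guarantee sits between the local model (per-record noise) and the central model (a trusted server adds noise), here is a minimal sketch in which each silo perturbs its own clipped mean with the standard Gaussian mechanism before sharing; the calibration and setup are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def silo_level_dp_mean(silo_datasets, eps, delta, clip=1.0, rng=None):
    """Each silo clips and averages its own records, then adds Gaussian
    noise locally before sharing -- no trusted central server assumed.
    (Illustrative sketch using the standard Gaussian mechanism, not the
    paper's exact federated-DP construction.)"""
    rng = rng or np.random.default_rng(0)
    sigma = clip * np.sqrt(2 * np.log(1.25 / delta)) / eps
    noisy_means = []
    for data in silo_datasets:
        clipped = np.clip(data, -clip, clip)
        # Sensitivity of a silo's mean to one record is 2*clip/len(data).
        local_sigma = sigma * 2 / len(data)
        noisy_means.append(clipped.mean() + rng.normal(0, local_sigma))
    # The untrusted aggregator only ever sees already-noised statistics.
    return float(np.mean(noisy_means))

silos = [np.random.default_rng(i).normal(0.2, 1, 500) for i in range(5)]
print(silo_level_dp_mean(silos, eps=1.0, delta=1e-5))
```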
arXiv Detail & Related papers (2024-03-17T21:04:48Z)
- Toward the Tradeoffs between Privacy, Fairness and Utility in Federated Learning [10.473137837891162]
Federated Learning (FL) is a novel privacy-preserving distributed machine learning paradigm.
We propose a privacy-protecting, fairness-aware FL method that protects the privacy of the client model.
We characterize the relationship between privacy, fairness, and utility, and conclude that there is a tradeoff among them.
arXiv Detail & Related papers (2023-11-30T02:19:35Z)
- Mean Estimation Under Heterogeneous Privacy Demands [5.755004576310333]
This work considers the problem of mean estimation, where each user can impose their own privacy level.
The algorithm we propose is shown to be minimax optimal and has a near-linear run-time.
Users with less strict but differing privacy requirements are all given the same amount of privacy, which is more than they require.
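A minimal sketch of one natural baseline for this problem, assuming each user locally adds Laplace noise calibrated to their own privacy level $\epsilon_i$ and the server combines the reports with inverse-variance weights; this simple estimator is an illustrative assumption, not the paper's minimax-optimal algorithm:

```python
import numpy as np

def heterogeneous_private_mean(values, epsilons, clip=1.0, rng=None):
    """Each user i reports value_i + Laplace(2*clip/eps_i); the server
    weights reports by inverse variance. Illustrative baseline only --
    the paper's minimax-optimal estimator is more subtle."""
    rng = rng or np.random.default_rng(0)
    values = np.clip(np.asarray(values, float), -clip, clip)
    scales = 2 * clip / np.asarray(epsilons, float)  # Laplace scale b_i
    reports = values + rng.laplace(0, scales)
    variances = 2 * scales**2                        # Var[Laplace(b)] = 2 b^2
    weights = 1 / variances
    return float(np.sum(weights * reports) / np.sum(weights))

# Users demanding more privacy (smaller eps) contribute noisier reports
# and therefore receive less weight in the estimate.
vals = np.random.default_rng(1).normal(0.3, 0.5, 100).clip(-1, 1)
eps = np.concatenate([np.full(50, 5.0), np.full(50, 0.1)])
print(heterogeneous_private_mean(vals, eps))
```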
arXiv Detail & Related papers (2023-10-19T20:29:19Z)
- Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
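A minimal sketch of the general recipe, assuming hypothetical hand-crafted features for each outgoing request and binary labels marking requests as privacy-infringing or benign; the features, labels, and classifier choice are illustrative assumptions, not the paper's design:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical features per outgoing request:
# [num_third_party_ids, sends_location, sends_device_fingerprint]
X_train = [[0, 0, 0], [1, 0, 0], [3, 1, 1], [5, 1, 1], [0, 1, 0], [4, 0, 1]]
y_train = [0, 0, 1, 1, 0, 1]  # 1 = likely privacy-infringing, 0 = benign

clf = LogisticRegression().fit(X_train, y_train)

def should_block(request_features, threshold=0.5):
    # Block the request when the model scores it as likely infringing.
    return clf.predict_proba([request_features])[0, 1] >= threshold

print(should_block([4, 1, 1]))  # expected True for this toy model
print(should_block([0, 0, 0]))  # expected False
```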
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
- Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
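A minimal PyTorch sketch of the raw per-subject quantity such an attribution builds on, computing each subject's individual gradient norm for a toy model; the PLIS metric itself goes further by apportioning privacy loss to input attributes, which this sketch does not attempt:

```python
import torch

model = torch.nn.Linear(3, 1)
loss_fn = torch.nn.MSELoss()

def per_subject_grad_norms(xs, ys):
    """Gradient norm of the loss w.r.t. model parameters, one subject
    at a time. Sketch only: PLIS further apportions privacy loss to
    individual input attributes, which this does not do."""
    norms = []
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        sq = sum((p.grad ** 2).sum() for p in model.parameters())
        norms.append(sq.sqrt().item())
    return norms

xs = torch.randn(4, 3)
ys = torch.randn(4, 1)
# Subjects with larger gradient norms are clipped/noised more aggressively
# in DP-SGD and tend to incur higher individual privacy loss.
print(per_subject_grad_norms(xs, ys))
```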
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We investigated how explainability might help to address end users' lack of trust.
We created privacy explanations that aim to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it faces challenges when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
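For flavor, a minimal sketch of textbook $k$-anonymity by suppression over quasi-identifier tuples; the paper's smooth-$k$-anonymity and its large-scale algorithms refine this baseline considerably, so treat the code as background rather than the paper's method:

```python
from collections import Counter

def k_anonymize_by_suppression(records, k):
    """Keep a record only if its quasi-identifier tuple is shared by at
    least k records; suppress the rest. Textbook k-anonymity -- the
    smooth variant in the paper is more refined than this."""
    counts = Counter(records)
    return [r for r in records if counts[r] >= k]

records = [("30s", "NY"), ("30s", "NY"), ("30s", "NY"),
           ("40s", "SF"), ("40s", "SF"), ("20s", "LA")]
print(k_anonymize_by_suppression(records, k=2))
# ('20s', 'LA') is unique, so it is suppressed.
```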
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
- Equity and Privacy: More Than Just a Tradeoff [10.545898004301323]
Recent work has shown that privacy-preserving data publishing can introduce different levels of utility across different population groups.
Will marginal populations see disproportionately less utility from privacy technology?
If there is an inequity, how can we address it?
arXiv Detail & Related papers (2021-11-08T17:39:32Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
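A minimal sketch of one basic ingredient behind such algorithms, assuming tabular visit counts that are privatized with Laplace noise before optimistic bonuses are computed from them; the noise placement and bonus form are illustrative assumptions, and the paper's JDP mechanism is substantially more careful:

```python
import numpy as np

def privatize_counts(visit_counts, eps, rng=None):
    """Add Laplace noise to per-(state, action) visit counts so that
    downstream value estimates depend on the data only through noisy
    statistics. Sketch only: the paper's JDP mechanism and optimism
    bonuses are more involved."""
    rng = rng or np.random.default_rng(0)
    noisy = visit_counts + rng.laplace(0, 1.0 / eps, size=visit_counts.shape)
    # Keep counts usable downstream: clamp away negative/zero values.
    return np.maximum(noisy, 1.0)

true_counts = np.array([[120., 4.], [35., 60.]])  # shape: states x actions
noisy_counts = privatize_counts(true_counts, eps=1.0)
# Optimistic exploration bonus shrinks as (noisy) counts grow.
bonus = np.sqrt(1.0 / noisy_counts)
print(noisy_counts, bonus, sep="\n")
```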
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- The Challenges and Impact of Privacy Policy Comprehension [0.0]
This paper experimentally manipulated the privacy-friendliness of an unavoidable and simple privacy policy.
Half of our participants miscomprehended even this transparent privacy policy.
To mitigate such pitfalls, we present design recommendations to improve the quality of informed consent.
arXiv Detail & Related papers (2020-05-18T14:16:48Z)