Privacy-Preserving Fair Item Ranking
- URL: http://arxiv.org/abs/2303.02916v1
- Date: Mon, 6 Mar 2023 06:21:20 GMT
- Title: Privacy-Preserving Fair Item Ranking
- Authors: Jia Ao Sun, Sikha Pentyala, Martine De Cock, Golnoosh Farnadi
- Abstract summary: This work is the first to advance research at the conjunction of producer (item) fairness and consumer (user) privacy in rankings.
Our work extends the equity-of-amortized-attention ranking mechanism to be privacy-preserving, and we evaluate its effects with respect to privacy, fairness, and ranking quality.
- Score: 13.947606247944597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Users worldwide access massive amounts of curated data in the form of
rankings on a daily basis. The societal impact of this ease of access has been
studied and work has been done to propose and enforce various notions of
fairness in rankings. Current computational methods for fair item ranking rely
on disclosing user data to a centralized server, which gives rise to privacy
concerns for the users. This work is the first to advance research at the
conjunction of producer (item) fairness and consumer (user) privacy in rankings
by exploring the incorporation of privacy-preserving techniques; specifically,
differential privacy and secure multi-party computation. Our work extends the
equity-of-amortized-attention ranking mechanism to be privacy-preserving, and
we evaluate its effects with respect to privacy, fairness, and ranking quality.
Our results using real-world datasets show that we are able to effectively
preserve the privacy of users and mitigate unfairness of items without making
additional sacrifices to the quality of rankings in comparison to the ranking
mechanism in the clear.
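The abstract does not spell out the mechanism, but the general idea of making an amortized-attention-style fair re-ranker differentially private can be sketched as follows. This is a minimal sketch under assumptions: the function names (`dp_fair_rerank`, `laplace`), the placement of the noise on per-item attention counters, and the bounded-contribution assumption are illustrative, not the authors' actual construction.

```python
import math
import random

def laplace(scale, rng):
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_fair_rerank(relevance, attention, epsilon, rng):
    """Re-rank items by their noisy attention deficit.

    Fairness idea (equity of amortized attention): items whose accumulated
    attention lags behind their accumulated relevance get promoted.
    Privacy idea: add Laplace(1/epsilon) noise to the per-item attention
    counters, so a single user's contribution (assumed bounded by 1 per
    counter) is protected by epsilon-differential privacy.
    """
    noisy = [a + laplace(1.0 / epsilon, rng) for a in attention]
    deficit = [r - a for r, a in zip(relevance, noisy)]
    # Largest deficit first: under-exposed items move toward the top.
    return sorted(range(len(relevance)), key=lambda i: -deficit[i])

rng = random.Random(0)
relevance = [0.9, 0.8, 0.3]   # accumulated relevance per item
attention = [5.0, 1.0, 0.2]   # accumulated exposure per item
order = dp_fair_rerank(relevance, attention, epsilon=1.0, rng=rng)
```

With a very large epsilon the noise vanishes and the re-ranking reduces to plain sorting by attention deficit; smaller epsilon trades ranking quality for stronger privacy, which mirrors the trade-off the paper evaluates.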
Related papers
- Privacy-Enhanced Database Synthesis for Benchmark Publishing [16.807486872855534]
Differential privacy has become a key method for safeguarding privacy when sharing data, but the focus has largely been on minimizing errors in aggregate queries or classification tasks.
This paper delves into the creation of privacy-preserving databases specifically for benchmarking, aiming to produce a differentially private database.
PrivBench uses sum-product networks (SPNs) to partition and sample data, enhancing data representation while securing privacy.
arXiv Detail & Related papers (2024-05-02T14:20:24Z)
- Rate-Optimal Rank Aggregation with Private Pairwise Rankings [12.511220449652384]
We address the challenge of preserving privacy while ensuring the utility of rank aggregation based on pairwise rankings.
Motivated by this, we propose adaptively debiasing the rankings from the randomized response mechanism.
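In its simplest one-bit form, debiasing randomized-response outputs means inverting the known flipping probability. The sketch below (function names assumed, not from the paper) estimates the fraction of users who prefer one item in a pairwise comparison:

```python
import math
import random

def randomized_response(bit, epsilon, rng):
    """Report the true bit with probability p = e^eps / (1 + e^eps),
    the flipped bit otherwise (epsilon-local differential privacy)."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if rng.random() < p else 1 - bit

def debiased_mean(reports, epsilon):
    """Invert the flipping probability to get an unbiased estimate of the
    true fraction m of 1s, using E[report] = p*m + (1 - p)*(1 - m)."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

rng = random.Random(42)
true_bits = [1] * 700 + [0] * 300  # 70% of users prefer item A over B
reports = [randomized_response(b, epsilon=1.0, rng=rng) for b in true_bits]
estimate = debiased_mean(reports, epsilon=1.0)  # close to 0.7 in expectation
```

The naive observed mean is biased toward 0.5; the inversion removes that bias at the cost of higher variance, which is the utility loss the paper's rate-optimality analysis quantifies.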
arXiv Detail & Related papers (2024-02-26T18:05:55Z)
- Privacy and Fairness in Federated Learning: on the Perspective of Trade-off [58.204074436129716]
Federated learning (FL) has been a hot topic in recent years.
As two crucial ethical notions, the interactions between privacy and fairness are comparatively less studied.
arXiv Detail & Related papers (2023-06-25T04:38:19Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR)
EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
- Ranking Differential Privacy [17.826241775212786]
Existing works mainly develop privacy protection for a single ranking within a set of rankings, or for pairwise comparisons of a ranking, under $\epsilon$-differential privacy.
This paper proposes a novel notion called $\epsilon$-ranking differential privacy for protecting ranks.
We develop a multistage ranking algorithm to generate synthetic rankings while satisfying the proposed $\epsilon$-ranking differential privacy.
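A multistage sampler of this flavor can be sketched as follows: the synthetic ranking is built position by position, with each remaining item weighted exponentially by how highly the true ranking places it (exponential-mechanism style). The per-stage budget split and the weighting scheme here are illustrative assumptions, not the paper's algorithm:

```python
import math
import random

def multistage_synthetic_ranking(true_ranking, epsilon, rng):
    """Build a synthetic ranking stage by stage. At each stage, the j-th
    best remaining item (per the true ranking) is drawn with probability
    proportional to exp(-eps_stage * j), so higher-ranked items are
    favored but never certain. Naively splits the budget across stages."""
    remaining = list(true_ranking)
    eps_stage = epsilon / max(1, len(true_ranking))
    synthetic = []
    while remaining:
        weights = [math.exp(-eps_stage * j) for j in range(len(remaining))]
        r = rng.random() * sum(weights)
        acc = 0.0
        for j, w in enumerate(weights):
            acc += w
            if r <= acc:
                synthetic.append(remaining.pop(j))
                break
        else:  # guard against floating-point edge cases at the boundary
            synthetic.append(remaining.pop())
    return synthetic

rng = random.Random(3)
synthetic = multistage_synthetic_ranking(["a", "b", "c", "d"], epsilon=4.0, rng=rng)
```

Larger epsilon concentrates the sampler on the true ranking; epsilon near zero makes every permutation roughly equally likely.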
arXiv Detail & Related papers (2023-01-02T19:12:42Z)
- Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z)
- Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; applying it when sharing sparse datasets, however, remains difficult.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
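For context, the baseline notion that smooth-$k$-anonymity builds on can be sketched in a few lines: plain $k$-anonymity by suppression drops any record whose quasi-identifier combination appears fewer than $k$ times. The code below shows only this baseline, not the paper's smooth variant:

```python
from collections import Counter

def k_anonymize_by_suppression(records, quasi_ids, k):
    """Baseline k-anonymity: keep a record only if its combination of
    quasi-identifier values is shared by at least k records."""
    keys = [tuple(r[q] for q in quasi_ids) for r in records]
    counts = Counter(keys)
    return [r for r, key in zip(records, keys) if counts[key] >= k]

records = [
    {"zip": "98402", "age": "30-39", "diagnosis": "flu"},
    {"zip": "98402", "age": "30-39", "diagnosis": "cold"},
    {"zip": "98402", "age": "30-39", "diagnosis": "flu"},
    {"zip": "98011", "age": "60-69", "diagnosis": "flu"},  # unique group: suppressed
]
kept = k_anonymize_by_suppression(records, quasi_ids=("zip", "age"), k=2)
```

Suppression discards whole records, which is wasteful on sparse data; that utility loss is the motivation for smoother relaxations like the one the paper proposes.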
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
- Lessons from the AdKDD'21 Privacy-Preserving ML Challenge [57.365745458033075]
A prominent proposal at W3C only allows sharing advertising signals through aggregated, differentially private reports of past displays.
To study this proposal extensively, an open Privacy-Preserving Machine Learning Challenge took place at AdKDD'21.
A key finding is that learning models on large, aggregated data in the presence of a small set of unaggregated data points can be surprisingly efficient and cheap.
arXiv Detail & Related papers (2022-01-31T11:09:59Z)
- Towards a Data Privacy-Predictive Performance Trade-off [2.580765958706854]
We evaluate the existence of a trade-off between data privacy and predictive performance in classification tasks.
Unlike some previous literature, we confirm that the higher the level of privacy, the greater the impact on predictive performance.
arXiv Detail & Related papers (2022-01-13T21:48:51Z)
- Privacy-Preserving Boosting in the Local Setting [17.375582978294105]
In machine learning, boosting is one of the most popular methods for combining multiple base learners into a stronger one.
In the big data era, the data held by individual and entities, like personal images, browsing history and census information, are more likely to contain sensitive information.
Local Differential Privacy is proposed as an effective privacy protection approach, which offers a strong guarantee to the data owners.
arXiv Detail & Related papers (2020-02-06T04:48:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.