Preference-Based Privacy Trading
- URL: http://arxiv.org/abs/2012.05484v1
- Date: Thu, 10 Dec 2020 07:03:10 GMT
- Title: Preference-Based Privacy Trading
- Authors: Ranjan Pal, Yixuan Wang, Swades De, Bodhibrata Nag, Pan Hui
- Abstract summary: We propose a design of regulated efficient/bounded inefficient economic mechanisms for oligopoly data trading markets using a novel preference function bidding approach on a simplified sellers-broker market.
Our methodology preserves the heterogeneous privacy preservation constraints (at a grouped consumer, i.e., app, level) up to certain compromise levels, and at the same time satisfies information demand (via the broker) of agencies.
- Score: 19.23266277956912
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The question we raise through this paper is: Is it economically feasible to
trade consumer personal information with their formal consent (permission) and
in return provide them incentives (monetary or otherwise)? In view of (a) the
behavioral assumption that humans are 'compromising' beings and have privacy
preferences, (b) privacy as a good not having strict boundaries, and (c) the
practical inevitability of inappropriate data leakage by data holders
downstream in the data-release supply-chain, we propose a design of regulated
efficient/bounded inefficient economic mechanisms for oligopoly data trading
markets using a novel preference function bidding approach on a simplified
sellers-broker market. Our methodology preserves the heterogeneous privacy
preservation constraints (at a grouped consumer, i.e., app, level) up to certain
compromise levels, and at the same time satisfies information demand (via the
broker) of agencies (e.g., advertising organizations) that collect client data
for the purpose of targeted behavioral advertising.
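For intuition only, the sketch below mocks up a simplified sellers-broker market in the spirit of the preference function bidding approach: each app-level seller submits a capped preference (supply) function over a per-unit price of privacy compromise, and the broker searches for a price at which aggregate supply meets its information demand. The seller names, the linear preference form, and the bisection clearing rule are illustrative assumptions, not the regulated mechanism designed in the paper.
```python
from dataclasses import dataclass

@dataclass
class Seller:
    """App-level consumer group acting as a data seller (hypothetical model)."""
    name: str
    slope: float  # willingness to compromise privacy per unit price (assumed linear)
    cap: float    # maximum tolerated privacy compromise for this group

    def supply(self, price: float) -> float:
        """Preference-function bid: privacy compromise offered at a given price, capped."""
        return min(self.slope * price, self.cap)

def clearing_price(sellers, demand, lo=0.0, hi=1e3, iters=60):
    """Bisect on the price until aggregate supply meets the broker's demand.
    Assumes demand does not exceed the sum of seller caps."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if sum(s.supply(mid) for s in sellers) < demand:
            lo = mid
        else:
            hi = mid
    return hi

if __name__ == "__main__":
    sellers = [
        Seller("app_A", slope=0.8, cap=5.0),
        Seller("app_B", slope=0.3, cap=2.0),
        Seller("app_C", slope=1.2, cap=4.0),
    ]
    demand = 8.0  # broker's aggregate information demand
    price = clearing_price(sellers, demand)
    print(f"clearing price ~= {price:.3f}")
    for s in sellers:
        print(f"{s.name}: trades {s.supply(price):.2f} of at most {s.cap}")
```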
Related papers
- Language Models Can Reduce Asymmetry in Information Markets [100.38786498942702]
We introduce an open-source simulated digital marketplace where intelligent agents, powered by language models, buy and sell information on behalf of external participants.
The central mechanism enabling this marketplace is the agents' dual capabilities: they have the capacity to assess the quality of privileged information but also come equipped with the ability to forget.
To perform well, agents must make rational decisions, strategically explore the marketplace through generated sub-queries, and synthesize answers from purchased information.
arXiv Detail & Related papers (2024-03-21T14:48:37Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms (a toy randomized-response illustration of the $f$-DP trade-off function appears in the sketch after this list).
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Towards a User Privacy-Aware Mobile Gaming App Installation Prediction Model [0.8602553195689513]
We investigate the process of predicting a mobile gaming app installation from the point of view of a demand-side platform.
We explore the trade-off between privacy preservation and model performance.
We conclude that privacy-aware models might still preserve significant capabilities.
arXiv Detail & Related papers (2023-02-07T09:14:59Z)
- Group privacy for personalized federated learning [4.30484058393522]
Federated learning is a type of collaborative machine learning, where participating clients process their data locally, sharing only updates to the collaborative model.
We propose a method to provide group privacy guarantees exploiting some key properties of $d$-privacy.
arXiv Detail & Related papers (2022-06-07T15:43:45Z)
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows the data owner to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
- Causally Constrained Data Synthesis for Private Data Release [36.80484740314504]
Releasing synthetic data that reflects certain statistical properties of the original data preserves the privacy of the original data.
Prior works utilize differentially private data release mechanisms to provide formal privacy guarantees.
We propose incorporating causal information into the training process to favorably modify the aforementioned trade-off.
arXiv Detail & Related papers (2021-05-27T13:46:57Z)
- Achieving Transparency Report Privacy in Linear Time [1.9981375888949475]
We first investigate and demonstrate potential privacy hazards brought on by the deployment of transparency and fairness measures in released ATRs.
We then propose a linear-time optimal-privacy scheme, built upon standard linear fractional programming (LFP) theory, for announcing ATRs.
We quantify the privacy-utility trade-offs induced by our scheme, and analyze the impact of privacy perturbation on fairness measures in ATRs.
arXiv Detail & Related papers (2021-03-31T22:05:10Z)
- PCAL: A Privacy-preserving Intelligent Credit Risk Modeling Framework Based on Adversarial Learning [111.19576084222345]
This paper proposes a framework of Privacy-preserving Credit risk modeling based on Adversarial Learning (PCAL)
PCAL aims to mask the private information inside the original dataset, while maintaining the important utility information for the target prediction task performance.
Results indicate that PCAL can learn an effective, privacy-free representation from user data, providing a solid foundation towards privacy-preserving machine learning for credit risk analysis.
arXiv Detail & Related papers (2020-10-06T07:04:59Z)
- A vision for global privacy bridges: Technical and legal measures for international data markets [77.34726150561087]
Despite data protection laws and an acknowledged right to privacy, trading personal information has become a business equated with "trading oil".
An open conflict is arising between business demands for data and a desire for privacy.
We propose and test a vision of a personal information market with privacy.
arXiv Detail & Related papers (2020-05-13T13:55:50Z)
- Utility-aware Privacy-preserving Data Releasing [7.462336024223669]
We propose a two-step perturbation-based privacy-preserving data releasing framework.
First, certain predefined privacy and utility problems are learned from the public domain data.
We then leverage the learned knowledge to precisely perturb the data owners' data into privatized data.
arXiv Detail & Related papers (2020-05-09T05:32:46Z)
- Beyond privacy regulations: an ethical approach to data usage in transportation [64.86110095869176]
We describe how Federated Machine Learning can be applied to the transportation sector.
We see Federated Learning as a method that enables us to process privacy-sensitive data, while respecting customers' privacy.
arXiv Detail & Related papers (2020-04-01T15:10:12Z)
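As a toy companion to the $f$-differential privacy entry above, the sketch below restates two textbook facts rather than that paper's derivation: binary randomized response with truth probability p satisfies local DP with epsilon = ln(p / (1 - p)), and pure epsilon-DP corresponds to the trade-off function max(0, 1 - e^eps * alpha, e^-eps * (1 - alpha)), i.e. the smallest type II error achievable at type I error alpha when distinguishing two neighboring inputs.
```python
import math
import random

def randomized_response(bit: int, p_truth: float) -> int:
    """Binary randomized response: report the true bit with probability p_truth."""
    return bit if random.random() < p_truth else 1 - bit

def rr_epsilon(p_truth: float) -> float:
    """Local DP level of binary randomized response (p_truth >= 0.5): eps = ln(p / (1 - p))."""
    return math.log(p_truth / (1.0 - p_truth))

def pure_dp_tradeoff(eps: float, alpha: float) -> float:
    """Trade-off function of pure eps-DP: the smallest type II error achievable
    at type I error alpha; randomized response attains it exactly."""
    return max(0.0, 1.0 - math.exp(eps) * alpha, math.exp(-eps) * (1.0 - alpha))

if __name__ == "__main__":
    p = 0.75                 # report truthfully 75% of the time
    eps = rr_epsilon(p)      # ln(3) ~= 1.099
    print(f"randomized response with p_truth={p}: eps ~= {eps:.3f}")
    for alpha in (0.01, 0.05, 0.10, 0.25):
        print(f"  type I error {alpha:.2f} -> type II error >= {pure_dp_tradeoff(eps, alpha):.3f}")
```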