Towards a User Privacy-Aware Mobile Gaming App Installation Prediction
Model
- URL: http://arxiv.org/abs/2302.03332v2
- Date: Tue, 28 Mar 2023 08:09:27 GMT
- Title: Towards a User Privacy-Aware Mobile Gaming App Installation Prediction
Model
- Authors: Ido Zehori, Nevo Itzhak, Yuval Shahar and Mia Dor Schiller
- Abstract summary: We investigate the process of predicting a mobile gaming app installation from the point of view of a demand-side platform.
We explore the trade-off between privacy preservation and model performance.
We conclude that privacy-aware models might still preserve significant capabilities.
- Score: 0.8602553195689513
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the past decade, programmatic advertising has received a great deal of
attention in the online advertising industry. A real-time bidding (RTB) system
is rapidly becoming the most popular method to buy and sell online advertising
impressions. Within the RTB system, demand-side platforms (DSP) aim to spend
advertisers' campaign budgets efficiently while maximizing profit, seeking
impressions that result in high user responses, such as clicks or installs. In
the current study, we investigate the process of predicting a mobile gaming app
installation from the point of view of a particular DSP, while paying attention
to user privacy, and exploring the trade-off between privacy preservation and
model performance. There are multiple levels of potential threats to user
privacy, depending on the privacy leaks associated with the data-sharing
process, such as data transformation or de-anonymization. To address these
concerns, privacy-preserving techniques, such as cryptographic approaches, have
been proposed for training privacy-aware machine-learning models. However,
training a mobile gaming app installation prediction model without user-level
data can prevent these threats and protect users' privacy, even though the
model's predictive ability may be impaired. Additionally, current laws may
require companies to declare that they are collecting data and may even give
users the option to opt out of such data collection; this threatens digital
advertising business models that depend on the collection and use of user-level
data. We conclude that privacy-aware models might still preserve significant
capabilities, enabling companies to make better decisions, depending on the
privacy-efficacy trade-off utility function of each case.
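To make the privacy-efficacy trade-off concrete, the sketch below is a minimal, hypothetical illustration rather than the paper's actual setup: it contrasts an install-prediction model trained only on contextual, non-user features with one that also uses user-level features, and scores each with a toy utility whose privacy weight is an assumed, campaign-specific preference. The feature names and synthetic data are illustrative assumptions.

```python
# Hypothetical sketch: compare an install-prediction model trained on
# contextual (non-user) features against one that also uses user-level
# features, then score each with a simple privacy-efficacy utility.
# All feature names, weights, and the utility form are illustrative
# assumptions, not the setup used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Contextual features available without tracking individual users
# (e.g., app category, ad slot, hour of day); illustrative only.
contextual = rng.normal(size=(n, 3))
# User-level features (e.g., install history); the privacy-sensitive part.
user_level = rng.normal(size=(n, 2))

# Synthetic install labels driven by both feature groups.
logits = 1.2 * contextual[:, 0] + 0.8 * user_level[:, 0] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_full = np.hstack([contextual, user_level])
X_priv = contextual  # privacy-aware variant: no user-level data

def auc_of(X):
    """Train a simple install-prediction model and return held-out AUC."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

auc_full, auc_priv = auc_of(X_full), auc_of(X_priv)

# Toy utility: reward predictive power, penalize reliance on user-level data.
# lambda_privacy is an assumed, campaign-specific preference weight.
lambda_privacy = 0.05
utility = {"full": auc_full - lambda_privacy, "privacy_aware": auc_priv}
print(auc_full, auc_priv, utility)
```

In this framing, a DSP would choose the variant with the higher utility for its own privacy weight; the gap between the two AUCs is the efficacy cost of dropping user-level data.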
Related papers
- FT-PrivacyScore: Personalized Privacy Scoring Service for Machine Learning Participation [4.772368796656325]
In practice, controlled data access remains a mainstream method for protecting data privacy in many industrial and research environments.
We developed the demo prototype FT-PrivacyScore to show that it's possible to efficiently and quantitatively estimate the privacy risk of participating in a model fine-tuning task.
arXiv Detail & Related papers (2024-10-30T02:41:26Z)
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
- Can Language Models be Instructed to Protect Personal Information? [30.187731765653428]
We introduce PrivQA -- a benchmark to assess the privacy/utility trade-off when a model is instructed to protect specific categories of personal information in a simulated scenario.
We find that adversaries can easily circumvent these protections with simple jailbreaking methods through textual and/or image inputs.
We believe PrivQA has the potential to support the development of new models with improved privacy protections, as well as the adversarial robustness of these protections.
arXiv Detail & Related papers (2023-10-03T17:30:33Z)
- TeD-SPAD: Temporal Distinctiveness for Self-Supervised Privacy-Preservation for Video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z)
- Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
- Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z)
- Challenges and approaches to privacy preserving post-click conversion prediction [3.4071263815701336]
We provide an overview of the challenges and constraints when learning conversion models in this setting.
We introduce a novel approach for training these models that makes use of post-ranking signals.
We show, using offline experiments on real-world data, that it outperforms a model relying on opt-in data alone.
arXiv Detail & Related papers (2022-01-29T21:36:01Z)
- Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release [52.504589728136615]
We develop a data poisoning method by which publicly released data can be minimally modified to prevent others from training models on it.
We demonstrate the success of our approach on ImageNet classification and on facial recognition.
arXiv Detail & Related papers (2021-02-16T19:12:34Z)
- PCAL: A Privacy-preserving Intelligent Credit Risk Modeling Framework Based on Adversarial Learning [111.19576084222345]
This paper proposes a framework of Privacy-preserving Credit risk modeling based on Adversarial Learning (PCAL).
PCAL aims to mask the private information inside the original dataset, while maintaining the important utility information for the target prediction task performance.
Results indicate that PCAL can learn an effective, privacy-free representation from user data, providing a solid foundation towards privacy-preserving machine learning for credit risk analysis.
arXiv Detail & Related papers (2020-10-06T07:04:59Z)