Cookie Monster: Efficient On-device Budgeting for Differentially-Private Ad-Measurement Systems
- URL: http://arxiv.org/abs/2405.16719v5
- Date: Tue, 01 Oct 2024 20:06:48 GMT
- Title: Cookie Monster: Efficient On-device Budgeting for Differentially-Private Ad-Measurement Systems
- Authors: Pierre Tholoniat, Kelly Kostopoulou, Peter McNeely, Prabhpreet Singh Sodhi, Anirudh Varanasi, Benjamin Case, Asaf Cidon, Roxana Geambasu, Mathias Lécuyer
- Abstract summary: This paper discusses our efforts to enhance existing privacy-preserving advertising measurement APIs.
We analyze designs from Google, Apple, Meta and Mozilla, and augment them with a more rigorous and efficient differential privacy (DP) budgeting component.
Our approach, called Cookie Monster, enforces well-defined DP guarantees and enables advertisers to conduct more private measurement queries accurately.
- Score: 3.0175040505598707
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the impending removal of third-party cookies from major browsers and the introduction of new privacy-preserving advertising APIs, the research community has a timely opportunity to assist industry in qualitatively improving the Web's privacy. This paper discusses our efforts, within a W3C community group, to enhance existing privacy-preserving advertising measurement APIs. We analyze designs from Google, Apple, Meta and Mozilla, and augment them with a more rigorous and efficient differential privacy (DP) budgeting component. Our approach, called Cookie Monster, enforces well-defined DP guarantees and enables advertisers to conduct more private measurement queries accurately. By framing the privacy guarantee in terms of an individual form of DP, we can make DP budgeting more efficient than in current systems that use a traditional DP definition. We incorporate Cookie Monster into Chrome and evaluate it on microbenchmarks and advertising datasets. Across workloads, Cookie Monster significantly outperforms baselines in enabling more advertising measurements under comparable DP protection.
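To make the individual-DP budgeting idea concrete, here is a minimal on-device sketch under our reading of the abstract (the names, API, and accounting below are our assumptions, not the paper's or Chrome's implementation): a device charges its per-epoch budget only in proportion to what its own data can contribute to a query, whereas a traditional global accountant charges every device for every query.

```python
# Hedged sketch of individual-DP on-device budgeting (illustrative only;
# class names, fields, and accounting are our assumptions, not the paper's code).
from dataclasses import dataclass


@dataclass
class Query:
    epsilon: float   # budget the query requests at global sensitivity
    advertiser: str  # whose conversions the query measures
    cap: float       # per-device contribution cap (global sensitivity)


class DeviceBudget:
    """Tracks one device's individual privacy loss within an epoch."""

    def __init__(self, epsilon_total: float):
        self.remaining = epsilon_total

    def try_consume(self, query: Query, contribution: float) -> bool:
        """Charge only in proportion to what this device can contribute."""
        if contribution == 0.0:
            return True  # irrelevant query: zero individual privacy loss
        # Individual DP: the privacy loss scales with this device's own
        # contribution rather than the query's worst-case sensitivity.
        eps = query.epsilon * min(1.0, contribution / query.cap)
        if eps > self.remaining:
            return False  # budget exhausted: the device abstains
        self.remaining -= eps
        return True


budget = DeviceBudget(epsilon_total=1.0)
q = Query(epsilon=0.4, advertiser="shoes.example", cap=2.0)
print(budget.try_consume(q, contribution=0.0))  # True, nothing charged
print(budget.try_consume(q, contribution=1.0))  # True, charges 0.2
print(budget.remaining)                         # 0.8
```

Under this reading, devices with no relevant impressions spend nothing, which is what lets more measurement queries fit under the same protection level.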
Related papers
- Machine Learning with Privacy for Protected Attributes [56.44253915927481]
We refine the definition of differential privacy (DP) to create a more general and flexible framework that we call feature differential privacy (FDP).
Our definition is simulation-based and allows for both addition/removal and replacement variants of privacy, and can handle arbitrary separation of protected and non-protected features.
We apply our framework to various machine learning tasks and show that it can significantly improve the utility of DP-trained models when public features are available.
arXiv Detail & Related papers (2025-06-24T17:53:28Z)
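For orientation, a rough event-based rendering of the replacement variant in our own notation; the paper's actual definition is simulation-based and more general.

```latex
% Rough sketch of the replacement variant (our notation, not the paper's
% simulation-based definition). Write each record as r = (r_pub, r_priv);
% datasets x, x' are feature-neighbors if they agree everywhere except
% the protected part r_priv of a single record.
\Pr[M(x) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(x') \in S] + \delta
\quad \text{for all feature-neighbors } x, x' \text{ and measurable } S.
```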
- Big Bird: Privacy Budget Management for W3C's Privacy-Preserving Attribution API [4.162760892964329]
We present Big Bird, a privacy budget manager for Privacy-Preserving Attribution (PPA).
Big Bird enforces utility-preserving limits via quota budgets and improves global budget utilization through a novel batched scheduling algorithm.
We implement Big Bird in Firefox and evaluate it on real-world ad data, demonstrating its resilience and effectiveness.
arXiv Detail & Related papers (2025-06-05T17:45:13Z)
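A minimal sketch of the quota-budget idea as we read it (hypothetical names and API, not Big Bird's code): per-querier quotas keep any single querier from exhausting the device's global budget.

```python
# Hypothetical sketch of quota-based budget management (not Big Bird's code).
from dataclasses import dataclass, field


@dataclass
class QuotaBudgetManager:
    """Per-device DP budget with per-querier quotas (illustrative only)."""
    global_epsilon: float                      # total budget for this device/epoch
    quota_epsilon: float                       # cap any single querier may consume
    spent: dict = field(default_factory=dict)  # querier -> epsilon consumed
    total_spent: float = 0.0

    def try_charge(self, querier: str, epsilon: float) -> bool:
        """Charge a query if both the querier's quota and the global budget allow it."""
        querier_spent = self.spent.get(querier, 0.0)
        if querier_spent + epsilon > self.quota_epsilon:
            return False  # quota exhausted: other queriers keep their share
        if self.total_spent + epsilon > self.global_epsilon:
            return False  # global budget exhausted
        self.spent[querier] = querier_spent + epsilon
        self.total_spent += epsilon
        return True


mgr = QuotaBudgetManager(global_epsilon=4.0, quota_epsilon=1.0)
assert mgr.try_charge("advertiser-a", 0.5)
assert not mgr.try_charge("advertiser-a", 0.8)  # would exceed advertiser-a's quota
assert mgr.try_charge("advertiser-b", 1.0)      # other queriers are unaffected
```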
- CriteoPrivateAds: A Real-World Bidding Dataset to Design Private Advertising Systems [36.80382261547636]
This dataset is an anonymised version of Criteo production logs.
It provides sufficient data to learn bidding models commonly used in online advertising under many privacy constraints.
arXiv Detail & Related papers (2025-02-17T18:24:48Z)
- On the Differential Privacy and Interactivity of Privacy Sandbox Reports [78.85958224681858]
The Privacy Sandbox initiative from Google includes APIs for enabling privacy-preserving advertising functionalities.
We provide an abstract model for analyzing the privacy of these APIs and show that they satisfy a formal DP guarantee.
arXiv Detail & Related papers (2024-12-22T08:22:57Z)
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy, which allows for controlling the sensitive regions to which differential privacy is applied.
Our method operates selectively on the data and allows for defining non-sensitive spatio-temporal regions without DP application, or for combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
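A toy rendering of the masking idea (our illustration; not the paper's mechanism or its noise calibration): noise is injected only where a sensitivity mask is set, leaving non-sensitive regions untouched.

```python
# Toy illustration of masked noising (not the paper's mechanism or calibration).
import numpy as np

rng = np.random.default_rng(0)


def masked_laplace(data: np.ndarray, sensitive_mask: np.ndarray,
                   sensitivity: float, epsilon: float) -> np.ndarray:
    """Add Laplace noise only to entries flagged as sensitive."""
    noise = rng.laplace(scale=sensitivity / epsilon, size=data.shape)
    return np.where(sensitive_mask, data + noise, data)


frames = np.arange(8, dtype=float)                      # stand-in temporal signal
mask = np.array([0, 0, 1, 1, 1, 0, 0, 0], dtype=bool)   # sensitive region only
private = masked_laplace(frames, mask, sensitivity=1.0, epsilon=1.0)
print(private)  # entries 2..4 are noised; the rest pass through unchanged
```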
- Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z)
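A standard route to user-level guarantees, sketched for illustration (a generic recipe, not this paper's fine-tuning method): collapse each user to one bounded contribution, then calibrate noise to the user-level sensitivity.

```python
# Illustrative user-level DP mean (generic recipe, not the paper's algorithm).
import random

random.seed(0)


def user_level_dp_mean(user_values: dict, clip: float, epsilon: float) -> float:
    """Average one bounded contribution per user, then add Laplace noise."""
    # Collapse each user to a single clipped value: the privacy unit is the user.
    per_user = [max(-clip, min(clip, sum(vs) / len(vs)))
                for vs in user_values.values()]
    n = len(per_user)
    true_mean = sum(per_user) / n
    # Replacing one user's entire data moves the mean by at most 2*clip / n.
    noise = random.expovariate(1.0) - random.expovariate(1.0)  # standard Laplace
    return true_mean + (2 * clip / n) * noise / epsilon


users = {f"u{i}": [random.gauss(1.0, 0.5) for _ in range(5)] for i in range(1000)}
print(user_level_dp_mean(users, clip=2.0, epsilon=1.0))
```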
- Click Without Compromise: Online Advertising Measurement via Per User Differential Privacy [22.38999810583601]
We introduce Ads-BPC, a novel user-level differential privacy protection scheme for advertising measurement results.
Ads-BPC achieves a 25% to 50% increase in accuracy over existing streaming DP mechanisms applied to advertising measurement.
arXiv Detail & Related papers (2024-06-04T16:31:19Z)
- Evaluating Google's Protected Audience Protocol [7.737740676767729]
Google has proposed the Privacy Sandbox initiative to enable ad targeting without third-party cookies.
This work focuses on analyzing linkage privacy risks for the reporting mechanisms proposed in the Protected Audience proposal.
arXiv Detail & Related papers (2024-05-13T18:28:56Z)
- Metric Differential Privacy at the User-Level Via the Earth Mover's Distance [34.63551774740707]
Metric differential privacy (DP) provides heterogeneous privacy guarantees based on a distance between the pair of inputs.
In this paper, we initiate the study of one natural definition of metric DP at the user-level.
We design two novel mechanisms under $d_{\textsf{EM}}$-DP to answer linear queries and item-wise queries.
arXiv Detail & Related papers (2024-05-04T13:29:11Z)
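For reference, the generic metric-DP guarantee scales indistinguishability with the distance between inputs; the paper instantiates the metric as a user-level Earth Mover's distance $d_{\textsf{EM}}$.

```latex
% Generic metric-DP guarantee; the paper takes d = d_EM at the user level.
\Pr[M(x) \in S] \;\le\; e^{\,d(x,\,x')}\,\Pr[M(x') \in S]
\quad \text{for all inputs } x, x' \text{ and all measurable sets } S.
```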
- A Quantitative Information Flow Analysis of the Topics API [0.34952465649465553]
We analyze the re-identification risk that the Topics API introduces for individual Internet users from an information- and decision-theoretic perspective.
Our model allows a theoretical analysis of both the privacy and utility aspects of the API and their trade-off, and we show that the Topics API does have better privacy than third-party cookies.
arXiv Detail & Related papers (2023-09-26T08:14:37Z)
- Personalized DP-SGD using Sampling Mechanisms [5.50042037663784]
We extend Differentially Private Stochastic Gradient Descent (DP-SGD) to support a recent privacy notion called $(\Phi,\Delta)$-Personalized Differential Privacy ($(\Phi,\Delta)$-PDP).
Our algorithm uses a multi-round personalized sampling mechanism and embeds it within the DP-SGD iteration.
Experiments on real datasets show that our algorithm outperforms DP-SGD and simple combinations of DP-SGD with existing PDP mechanisms.
arXiv Detail & Related papers (2023-05-24T13:56:57Z)
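A bare-bones sketch of budget-proportional sampling (our illustration; the paper's multi-round mechanism and its privacy accounting are more involved): examples owned by users with tighter personal budgets are sampled less often, so they contribute to fewer gradient steps.

```python
# Illustrative personalized sampling for a DP-SGD-style loop
# (not the paper's algorithm; accounting details omitted).
import random

random.seed(0)


def sample_batch(examples, personal_epsilons, base_rate=0.1):
    """Include each example with probability scaled by its owner's budget."""
    eps_max = max(personal_epsilons.values())
    batch = []
    for ex in examples:
        # Tighter personal budgets -> lower sampling probability -> less exposure.
        rate = base_rate * personal_epsilons[ex["user"]] / eps_max
        if random.random() < rate:
            batch.append(ex)
    return batch


examples = [{"user": "strict", "x": i} for i in range(50)] + \
           [{"user": "relaxed", "x": i} for i in range(50)]
budgets = {"strict": 0.5, "relaxed": 2.0}
batch = sample_batch(examples, budgets)
print(sum(e["user"] == "strict" for e in batch), "strict vs",
      sum(e["user"] == "relaxed" for e in batch), "relaxed examples sampled")
```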
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether the mechanism meets this guarantee, and finally releases the query output.
Our empirical evaluation shows that the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
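The three-step control flow, as a hedged sketch with stand-in components (the paper's verifier is a randomized privacy test that we do not reproduce here):

```python
# Schematic estimate-verify-release pipeline (illustrative control flow only).
def evr_release(mechanism, data, estimate_epsilon, verify):
    """Release mechanism(data) only if the estimated guarantee is verified."""
    eps_hat = estimate_epsilon(mechanism)  # step 1: estimate privacy parameter
    if not verify(mechanism, eps_hat):     # step 2: verify the estimate holds
        raise RuntimeError("verification failed; refusing to release")
    return mechanism(data), eps_hat        # step 3: release under eps_hat


# Toy usage with placeholder components.
result, eps = evr_release(
    mechanism=lambda d: sum(d) + 0.1,    # placeholder noisy query
    data=[1, 2, 3],
    estimate_epsilon=lambda m: 1.0,      # placeholder estimator
    verify=lambda m, eps: eps <= 1.0,    # placeholder verifier
)
print(result, eps)
```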
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
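Roughly, in our notation (not the paper's formalism): for each attribute $i$, the guarantee is measured against neighbors differing only in that attribute, with a per-attribute parameter $\varepsilon_i$ that can be smaller than the whole-record parameter.

```latex
% Sketch in our notation: x ~_i x' differ only in attribute i of one
% record; each attribute i gets its own parameter eps_i, which can be
% smaller than the best achievable whole-record parameter.
\Pr[M(x) \in S] \;\le\; e^{\varepsilon_i}\,\Pr[M(x') \in S]
\quad \text{whenever } x \sim_i x'.
```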
- Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)