On the Differential Privacy and Interactivity of Privacy Sandbox Reports
- URL: http://arxiv.org/abs/2412.16916v1
- Date: Sun, 22 Dec 2024 08:22:57 GMT
- Title: On the Differential Privacy and Interactivity of Privacy Sandbox Reports
- Authors: Badih Ghazi, Charlie Harrison, Arpana Hosabettu, Pritish Kamath, Alexander Knop, Ravi Kumar, Ethan Leeman, Pasin Manurangsi, Vikas Sahu
- Abstract summary: The Privacy Sandbox initiative from Google includes APIs for enabling privacy-preserving advertising functionalities.
We provide a formal model for analyzing the privacy of these APIs and show that they satisfy a formal DP guarantee.
- Score: 78.21466601986265
- License:
- Abstract: The Privacy Sandbox initiative from Google includes APIs for enabling privacy-preserving advertising functionalities as part of the effort around limiting third-party cookies. In particular, the Private Aggregation API (PAA) and the Attribution Reporting API (ARA) can be used for ad measurement while providing different guardrails for safeguarding user privacy, including a framework for satisfying differential privacy (DP). In this work, we provide a formal model for analyzing the privacy of these APIs and show that they satisfy a formal DP guarantee under certain assumptions. Our analysis handles the case where both the queries and database can change interactively based on previous responses from the API.
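As a rough illustration of the DP framework the abstract refers to (a generic sketch, not the actual mechanism implemented by PAA or ARA), an aggregate measurement value can be made epsilon-DP by capping each user's contribution and adding Laplace noise calibrated to that cap. All names and parameters below are illustrative.

```python
import math
import random

def dp_summary_value(contributions, contribution_cap, epsilon):
    """Return a noisy sum of per-user contributions.

    Each contribution is clipped to [0, contribution_cap], so the L1
    sensitivity of the sum is contribution_cap; adding noise drawn from
    Laplace(0, contribution_cap / epsilon) then yields epsilon-DP.
    """
    clipped = [min(max(c, 0.0), contribution_cap) for c in contributions]
    true_sum = sum(clipped)
    # Inverse-CDF sampling of Laplace(0, b)
    b = contribution_cap / epsilon
    u = random.random() - 0.5
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_sum + noise
```

With a very large epsilon the noise is negligible and the clipped sum is recovered almost exactly; smaller epsilon values trade accuracy for stronger privacy.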
Related papers
- Inference Privacy: Properties and Mechanisms [8.471466670802817]
Inference Privacy (IP) allows a user to interact with a model while receiving a rigorous privacy guarantee for their data at inference time.
We present two types of mechanisms for achieving IP: namely, input perturbations and output perturbations which are customizable by the users.
arXiv Detail & Related papers (2024-11-27T20:47:28Z) - Differential Privacy Overview and Fundamental Techniques [63.0409690498569]
This chapter is meant to be part of the book "Differential Privacy in Artificial Intelligence: From Theory to Practice"
It starts by illustrating various attempts to protect data privacy, emphasizing where and why they failed.
It then defines the key actors, tasks, and scopes that make up the domain of privacy-preserving data analysis.
arXiv Detail & Related papers (2024-11-07T13:52:11Z) - The Privacy-Utility Trade-off in the Topics API [0.34952465649465553]
We analyze the re-identification risks for individual Internet users and the utility provided to advertising companies by the Topics API.
We provide theoretical results dependent only on the API parameters that can be readily applied to evaluate the privacy and utility implications of future API updates.
arXiv Detail & Related papers (2024-06-21T17:01:23Z) - Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning [62.224804688233]
Differential privacy (DP) offers a promising solution by ensuring models are 'almost indistinguishable' with or without any particular privacy unit.
We study user-level DP, motivated by applications where it is necessary to ensure uniform privacy protection across users.
arXiv Detail & Related papers (2024-06-20T13:54:32Z) - Summary Reports Optimization in the Privacy Sandbox Attribution Reporting API [51.00674811394867]
The Attribution Reporting API has been deployed by Google Chrome to support the basic advertising functionality of attribution reporting.
We present methods for optimizing the allocation of the contribution budget for summary reports from the API.
arXiv Detail & Related papers (2023-11-22T18:45:20Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.