But Can You Use It? Design Recommendations for Differentially Private Interactive Systems
- URL: http://arxiv.org/abs/2412.11794v1
- Date: Mon, 16 Dec 2024 14:07:16 GMT
- Title: But Can You Use It? Design Recommendations for Differentially Private Interactive Systems
- Authors: Liudas Panavas, Joshua Snoke, Erika Tyagi, Claire McKay Bowen, Aaron R. Williams
- Abstract summary: This work outlines and considers the barriers to developing differentially private interactive systems for informing public policy.
We propose balancing three design considerations: privacy assurance, statistical utility, and system usability.
Our work seeks to move the practical development of differentially private interactive systems forward to better aid public policy making and spark future research.
- Score: 0.499320937849508
- Abstract: Accessing data collected by federal statistical agencies is essential for public policy research and improving evidence-based decision making, such as evaluating the effectiveness of social programs, understanding demographic shifts, or addressing public health challenges. Differentially private interactive systems, or validation servers, can form a crucial part of the data-sharing infrastructure. They may allow researchers to query targeted statistics, providing flexible, efficient access to specific insights, reducing the need for broad data releases and supporting timely, focused research. However, they have not yet been practically implemented. While substantial theoretical work has been conducted on the privacy and accuracy guarantees of differentially private mechanisms, prior efforts have not considered usability as an explicit goal of interactive systems. This work outlines and considers the barriers to developing differentially private interactive systems for informing public policy and offers an alternative way forward. We propose balancing three design considerations: privacy assurance, statistical utility, and system usability; we develop recommendations for making differentially private interactive systems work in practice; we present an example architecture based on these recommendations; and we outline how to conduct the necessary user testing. Our work seeks to move the practical development of differentially private interactive systems forward to better aid public policy making and spark future research.
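As a rough illustration of the query-based access the abstract describes, here is a minimal sketch of a validation server answering a counting query with the standard Laplace mechanism. The class name, budget handling, and query interface are illustrative assumptions, not the architecture proposed in the paper.

```python
import numpy as np

class ValidationServer:
    """Toy differentially private query server (illustrative sketch).

    Answers counting queries via the Laplace mechanism and tracks a
    global privacy-loss budget, refusing queries once it is spent.
    """

    def __init__(self, records, total_epsilon):
        self.records = records          # confidential microdata (never released)
        self.remaining = total_epsilon  # overall privacy budget

    def count(self, predicate, epsilon):
        if epsilon <= 0 or epsilon > self.remaining:
            raise ValueError("insufficient privacy budget")
        self.remaining -= epsilon
        true_count = sum(1 for r in self.records if predicate(r))
        # A counting query has L1 sensitivity 1, so Laplace noise with
        # scale 1/epsilon gives epsilon-differential privacy.
        return true_count + np.random.laplace(scale=1.0 / epsilon)

# A researcher spends a small slice of the budget on one targeted statistic.
server = ValidationServer([{"income": x} for x in range(1000)], total_epsilon=1.0)
noisy_count = server.count(lambda r: r["income"] > 500, epsilon=0.1)
```

The budget check reflects the composition property of differential privacy: the privacy losses of successive answers add up, which is why an interactive system must meter queries rather than answer them indefinitely.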
Related papers
- SoK: Usability Studies in Differential Privacy [3.4111656179349743]
Differential Privacy (DP) has emerged as a pivotal approach for safeguarding individual privacy in data analysis.
This paper presents a comprehensive systematization of existing research on the usability of and communication about DP.
arXiv Detail & Related papers (2024-12-22T02:21:57Z)
- Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
arXiv Detail & Related papers (2024-07-04T08:29:27Z)
- Centering Policy and Practice: Research Gaps around Usable Differential Privacy [12.340264479496375]
We argue that while differential privacy is a clean formulation in theory, it poses significant challenges in practice.
To bridge the gaps between differential privacy's promises and its real-world usability, researchers and practitioners must work together.
arXiv Detail & Related papers (2024-06-17T21:32:30Z)
- Exploring Federated Unlearning: Analysis, Comparison, and Insights [101.64910079905566]
Federated unlearning enables the selective removal of data from models trained in federated systems.
This paper surveys existing federated unlearning approaches, examining their algorithmic efficiency, impact on model accuracy, and effectiveness in preserving privacy.
We propose the OpenFederatedUnlearning framework, a unified benchmark for evaluating federated unlearning methods.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns is subject to stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Application of Text Analytics in Public Service Co-Creation: Literature Review and Research Framework [68.8204255655161]
An alternative to the traditional top-down development of public services is co-creation of public services.
Co-creation promotes collaboration between stakeholders with the aim of creating better public services and achieving public values.
We study existing works that apply Text Analytics (TA) techniques to text data to support public service co-creation.
arXiv Detail & Related papers (2023-05-20T17:34:17Z)
- A Survey on Federated Recommendation Systems [40.46436329232597]
Federated learning has been applied to recommendation systems to protect user privacy.
In federated learning settings, recommendation systems can train recommendation models by collecting only the intermediate parameters instead of the real user data.
arXiv Detail & Related papers (2022-12-27T08:09:45Z)
- A Field Guide to Federated Optimization [161.3779046812383]
Federated learning and analytics are distributed approaches for collaboratively learning models (or statistics) from decentralized data.
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms.
arXiv Detail & Related papers (2021-07-14T18:09:08Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
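The Lagrangian dual idea in the last entry can be sketched in a few lines: the model parameters take gradient steps on the task loss plus a multiplier-weighted fairness penalty, while the multiplier itself rises whenever the fairness constraint is violated. The function and variable names below are hypothetical placeholders, and the sketch omits the differentially private gradient noise that the paper combines with this scheme.

```python
# Minimal primal-dual sketch (assumed form, not the paper's exact algorithm).
# Constraint convention: fairness_gap(theta) <= 0 means the constraint holds.
def primal_dual_step(theta, lam, grad_loss, fairness_gap, grad_gap,
                     lr=0.1, dual_lr=0.05):
    # Primal descent on the Lagrangian L(theta, lam) = loss + lam * gap.
    theta = theta - lr * (grad_loss(theta) + lam * grad_gap(theta))
    # Dual ascent on lam, projected back onto lam >= 0.
    lam = max(0.0, lam + dual_lr * fairness_gap(theta))
    return theta, lam
```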