PeQES: A Platform for Privacy-enhanced Quantitative Empirical Studies
- URL: http://arxiv.org/abs/2103.05544v1
- Date: Tue, 9 Mar 2021 16:46:25 GMT
- Title: PeQES: A Platform for Privacy-enhanced Quantitative Empirical Studies
- Authors: Dominik Meißner, Felix Engelmann, Frank Kargl, Benjamin Erb
- Abstract summary: We establish a novel, privacy-enhanced workflow for pre-registered studies.
We also introduce PeQES, a corresponding platform that technically enforces the appropriate execution of the study protocol.
PeQES is the first platform to enable privacy-enhanced studies, to ensure the integrity of study protocols, and to safeguard the confidentiality of participants' data at the same time.
- Score: 6.782635275179198
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Empirical sciences, and in particular psychology, suffer from a
methodological crisis due to the non-reproducibility of results and, in rare
cases, questionable research practices. Pre-registered studies and the
publication of raw data sets have emerged as effective countermeasures.
However, this approach represents only a conceptual procedure and may in some
cases even exacerbate the privacy issues associated with data publication. We establish a novel,
privacy-enhanced workflow for pre-registered studies. We also introduce PeQES,
a corresponding platform that technically enforces the appropriate execution
while at the same time protecting the participants' data from unauthorized use
or data repurposing. Our PeQES prototype proves the overall feasibility of our
privacy-enhanced workflow while introducing only a negligible performance
overhead for data acquisition and data analysis of an actual study. Using
trusted computing mechanisms, PeQES is the first platform to enable
privacy-enhanced studies, to ensure the integrity of study protocols, and to
safeguard the confidentiality of participants' data at the same time.
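To make the workflow concrete, the following is a minimal sketch, assuming an enclave-style trusted execution environment (e.g., Intel SGX) and illustrative names (StudyProtocol, Enclave, etc.) that are not PeQES's actual API: the pre-registered analysis is bound to an attested enclave, participants submit answers only to that enclave, and only the pre-registered aggregate result is ever released.
```python
# Illustrative sketch only: PeQES's real architecture uses trusted computing
# mechanisms with remote attestation; all names below are assumptions.
from dataclasses import dataclass, field
from statistics import mean
from typing import Callable

@dataclass(frozen=True)
class StudyProtocol:
    # Pre-registration fixes the analysis before any data is collected.
    study_id: str
    analysis: Callable[[list[float]], float]

@dataclass
class Enclave:
    # Stand-in for a trusted execution environment holding the protocol.
    protocol: StudyProtocol
    _responses: list[float] = field(default_factory=list)

    def attestation_quote(self) -> str:
        # Real attestation would let participants verify which code
        # (i.e., which study protocol) will process their data.
        return f"measurement:{self.protocol.study_id}"

    def submit(self, response: float) -> None:
        # In the real system, responses arrive encrypted to an enclave-held
        # key; raw values are never exposed outside the enclave.
        self._responses.append(response)

    def run(self) -> float:
        # Only the pre-registered aggregate leaves the enclave, which
        # technically prevents repurposing the raw data.
        return self.protocol.analysis(self._responses)

protocol = StudyProtocol("peqes-demo", analysis=mean)
enclave = Enclave(protocol)
assert enclave.attestation_quote() == "measurement:peqes-demo"
for answer in [3.0, 4.0, 5.0]:  # participants' questionnaire answers
    enclave.submit(answer)
print(enclave.run())  # 4.0 -- the only value ever released
```
The key property mirrored here is that the analysis is fixed before data collection, so the platform, rather than the researcher, controls what leaves together with the data.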
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Towards Split Learning-based Privacy-Preserving Record Linkage [49.1574468325115]
Split Learning has been introduced to facilitate applications where user data privacy is a requirement.
In this paper, we investigate the potential of Split Learning for Privacy-Preserving Record Matching.
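Since the summary names Split Learning without detail, here is a minimal sketch of the general technique only (layer sizes and data are invented): the network is cut into a client part and a server part, so raw records stay with their owner and only intermediate activations cross the boundary.
```python
# Generic split-learning forward pass; not this paper's specific protocol.
import numpy as np

rng = np.random.default_rng(0)
W_client = rng.normal(size=(8, 4))   # client-side layer (stays with the data owner)
W_server = rng.normal(size=(4, 1))   # server-side layer

def client_forward(x: np.ndarray) -> np.ndarray:
    # The raw record x never leaves the client; only this activation is sent.
    return np.tanh(x @ W_client)

def server_forward(activation: np.ndarray) -> np.ndarray:
    return activation @ W_server

record = rng.normal(size=(1, 8))      # a sensitive record held by the client
smashed = client_forward(record)      # "smashed data" sent to the server
prediction = server_forward(smashed)
print(prediction.shape)               # (1, 1)
```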
arXiv Detail & Related papers (2024-09-02T09:17:05Z)
- An applied Perspective: Estimating the Differential Identifiability Risk of an Exemplary SOEP Data Set [2.66269503676104]
We show how to compute the risk metric efficiently for a set of basic statistical queries.
Our empirical analysis, based on an extensive real-world scientific data set, expands knowledge of how to compute such risks under realistic conditions.
arXiv Detail & Related papers (2024-07-04T17:50:55Z)
- Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
arXiv Detail & Related papers (2024-07-04T08:29:27Z)
- Conditional Density Estimations from Privacy-Protected Data [0.0]
We propose simulation-based inference methods for privacy-protected datasets.
We illustrate our methods on discrete time-series data under an infectious disease model and with ordinary linear regression models.
arXiv Detail & Related papers (2023-10-19T14:34:17Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
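The sanitized-release idea can be made concrete with the standard Laplace mechanism, shown below as a generic sketch rather than this paper's specific method; the query and epsilon are arbitrary illustrative choices.
```python
# Release a noise-sanitized statistic instead of the data itself.
import numpy as np

rng = np.random.default_rng(7)

def dp_count(data, predicate, epsilon: float) -> float:
    true_count = sum(1 for x in data if predicate(x))
    sensitivity = 1.0  # adding/removing one person changes a count by at most 1
    return true_count + rng.laplace(scale=sensitivity / epsilon)

ages = [34, 29, 41, 58, 23, 37]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # noisy count of people aged 40+
```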
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- On the Privacy Risks of Algorithmic Recourse [17.33484111779023]
We make the first attempt at investigating whether and how an adversary can leverage recourses to infer private information about the underlying model's training data.
Our work establishes unintended privacy leakage as an important risk in the widespread adoption of recourse methods.
arXiv Detail & Related papers (2022-11-10T09:04:24Z)
- Private Set Generation with Discriminative Information [63.851085173614]
Differentially private data generation is a promising solution to the data privacy challenge.
Existing private generative models struggle with the utility of their synthetic samples.
We introduce a simple yet effective method that greatly improves the sample utility of state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-07T10:02:55Z)
- No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy" [75.98836424725437]
New methods designed to preserve data privacy require careful scrutiny.
Failure to preserve privacy is hard to detect, and yet can lead to catastrophic results when a system implementing a "privacy-preserving" method is attacked.
arXiv Detail & Related papers (2022-09-29T17:50:23Z)
- A Critical Overview of Privacy-Preserving Approaches for Collaborative Forecasting [0.0]
Cooperation between different data owners may lead to an improvement in forecast quality.
Due to competitive business factors and data protection concerns, these data owners might be unwilling to share their data.
This paper analyses the state-of-the-art and unveils several shortcomings of existing methods in guaranteeing data privacy.
arXiv Detail & Related papers (2020-04-20T20:21:04Z)
- Anonymizing Data for Privacy-Preserving Federated Learning [3.3673553810697827]
We propose the first syntactic approach for offering privacy in the context of federated learning.
Our approach aims to maximize utility or model performance, while supporting a defensible level of privacy.
We perform a comprehensive empirical evaluation on two important problems in the healthcare domain, using real-world electronic health data of 1 million patients.
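The term "syntactic" classically refers to k-anonymity-style guarantees, so the sketch below illustrates that family, generalizing a quasi-identifier into bands and suppressing small groups; it is an assumption for illustration, not the paper's actual algorithm.
```python
# k-anonymity-style generalization: coarsen ages, then suppress any
# generalized value shared by fewer than k records.
from collections import Counter

def k_anonymize_ages(ages: list[int], k: int, band: int = 10) -> list[str]:
    generalized = [f"{(a // band) * band}-{(a // band) * band + band - 1}" for a in ages]
    counts = Counter(generalized)
    return [g if counts[g] >= k else "*" for g in generalized]

print(k_anonymize_ages([23, 27, 29, 41, 45, 61], k=2))
# ['20-29', '20-29', '20-29', '40-49', '40-49', '*']
```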
arXiv Detail & Related papers (2020-02-21T02:30:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.