Expert opinions on making GDPR usable
- URL: http://arxiv.org/abs/2308.08287v1
- Date: Wed, 16 Aug 2023 11:20:16 GMT
- Title: Expert opinions on making GDPR usable
- Authors: Johanna Johansen
- Abstract summary: We use as respondents experts working across fields of relevance to four concepts, including law and data protection/privacy, certifications and standardization, and usability.
We employ theory triangulation to analyze the data representing three groups of experts, categorized as 'certifications', 'law', and 'usability', coming both from industry and academia.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present the results of a study done in order to validate concepts and
methods that have been introduced in (Johansen and Fischer-Hübner, 2020.
"Making GDPR Usable: A Model to Support Usability Evaluations of Privacy." in
IFIP AICT 576, 275-291). We use as respondents in our interviews experts
working across fields of relevance to these concepts, including law and data
protection/privacy, certifications and standardization, and usability (as
studied in the field of Human-Computer Interaction). We study the experts'
opinions about four new concepts, namely: (i) a definition of Usable Privacy,
(ii) 30 Usable Privacy Goals identified as excerpts from the GDPR (European
General Data Protection Regulation), (iii) a set of 25 corresponding Usable
Privacy Criteria together with their multiple measurable sub-criteria, and (iv)
the Usable Privacy Cube model, which puts all these together with the EuroPriSe
certification criteria, with the purpose of making explicit several aspects of
certification processes such as orderings of criteria, interactions between
these, different stakeholder perspectives, and context of use/processing.
The expert opinions are varied, example-rich, and forward-looking, which
gives an impressive list of open problems for which the above four concepts can
serve as a foundation for further developments. We employed a critical
qualitative research approach, using theory triangulation to analyze the data representing three
groups of experts, categorized as 'certifications', 'law', and 'usability',
coming both from industry and academia. The results of our analysis show
agreement among the experts about the need for evaluations and measuring of
usability of privacy in order to allow for exercising data subjects' rights and
to evaluate the degree to which data controllers comply with the data
protection principles.
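As a hypothetical illustration of how a hierarchy of criteria with measurable sub-criteria (concept (iii) above) could be represented in software, consider the sketch below; all names, scores, and the averaging rule are invented for illustration and are not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class SubCriterion:
    """A measurable sub-criterion (e.g., a score on a usability scale)."""
    name: str
    score: float  # normalized to [0, 1]

@dataclass
class UsablePrivacyCriterion:
    """A Usable Privacy Criterion holding measurable sub-criteria."""
    name: str
    sub_criteria: list[SubCriterion] = field(default_factory=list)

    def aggregate(self) -> float:
        """Average the sub-criterion scores (one simple aggregation choice)."""
        if not self.sub_criteria:
            return 0.0
        return sum(s.score for s in self.sub_criteria) / len(self.sub_criteria)

# Invented example: a criterion on the clarity of consent dialogs.
clarity = UsablePrivacyCriterion(
    "clarity_of_consent",
    [SubCriterion("reading_ease", 0.8), SubCriterion("task_completion", 0.6)],
)
# clarity.aggregate() yields the mean of the two scores, roughly 0.7.
```

A real evaluation scheme would of course need weighted or context-dependent aggregation; the point is only that goals, criteria, and sub-criteria form a natural hierarchy.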
Related papers
- Are Data Experts Buying into Differentially Private Synthetic Data? Gathering Community Perspectives [14.736115103446101]
In the United States, differential privacy (DP) is the dominant technical operationalization of privacy-preserving data analysis.
This study qualitatively examines one class of DP mechanisms: private data synthesizers.
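For background, the basic differentially private release primitive that many private data synthesizers build on is the Laplace mechanism; a minimal stdlib-only sketch (standard textbook material, not a result of the paper above):

```python
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    giving epsilon-differential privacy for a query with that L1 sensitivity."""
    scale = sensitivity / epsilon
    # A Laplace(0, scale) sample is the difference of two i.i.d.
    # exponential samples with mean `scale`.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_value + noise

# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
noisy_count = laplace_mechanism(true_value=42.0, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; a synthesizer composes many such noisy measurements into a full synthetic dataset.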
arXiv Detail & Related papers (2024-12-17T15:50:14Z)
- A Comprehensive Study on GDPR-Oriented Analysis of Privacy Policies: Taxonomy, Corpus and GDPR Concept Classifiers [18.770985160731122]
We develop a more complete taxonomy, created the first corpus of labeled privacy policies with hierarchical information, and conducted the most comprehensive performance evaluation of concept classifiers for privacy policies.
Our work leads to multiple novel findings, including the confirmed inappropriateness of splitting training and test sets at the segment level, the benefits of considering hierarchical information, the limitations of the "one size fits all" approach, and the significance of testing cross-corpus generalizability.
arXiv Detail & Related papers (2024-10-07T05:19:12Z)
- An applied Perspective: Estimating the Differential Identifiability Risk of an Exemplary SOEP Data Set [2.66269503676104]
We show how to compute the risk metric efficiently for a set of basic statistical queries.
Our empirical analysis based on an extensive, real-world scientific data set expands the knowledge on how to compute risks under realistic conditions.
arXiv Detail & Related papers (2024-07-04T17:50:55Z)
- Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
arXiv Detail & Related papers (2024-07-04T08:29:27Z)
- Centering Policy and Practice: Research Gaps around Usable Differential Privacy [12.340264479496375]
We argue that while differential privacy is a clean formulation in theory, it poses significant challenges in practice.
To bridge the gaps between differential privacy's promises and its real-world usability, researchers and practitioners must work together.
arXiv Detail & Related papers (2024-06-17T21:32:30Z)
- Sharing is CAIRing: Characterizing Principles and Assessing Properties of Universal Privacy Evaluation for Synthetic Tabular Data [3.67056030380617]
We identify four principles for the assessment of metrics: Comparability, Applicability, Interpretability, and Representativeness (CAIR)
We study the applicability and usefulness of the CAIR principles and rubric by assessing a selection of metrics popular in other studies.
arXiv Detail & Related papers (2023-12-19T15:05:52Z)
- When is Off-Policy Evaluation (Reward Modeling) Useful in Contextual Bandits? A Data-Centric Perspective [64.73162159837956]
Evaluating the value of a hypothetical target policy with only a logged dataset is important but challenging.
We propose DataCOPE, a data-centric framework for evaluating a target policy given a dataset.
Our empirical analysis of DataCOPE in the logged contextual bandit settings using healthcare datasets confirms its ability to evaluate both machine-learning and human expert policies.
arXiv Detail & Related papers (2023-11-23T17:13:37Z)
- Exploring Federated Unlearning: Analysis, Comparison, and Insights [101.64910079905566]
Federated unlearning enables the selective removal of data from models trained in federated systems.
This paper examines existing federated unlearning approaches, analyzing their algorithmic efficiency, impact on model accuracy, and effectiveness in preserving privacy.
We propose the OpenFederatedUnlearning framework, a unified benchmark for evaluating federated unlearning methods.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z)
- A Survey of Secure Computation Using Trusted Execution Environments [80.58996305474842]
This article provides a systematic review and comparison of TEE-based secure computation protocols.
We first propose a taxonomy that classifies secure computation protocols into three major categories, namely secure outsourced computation, secure distributed computation and secure multi-party computation.
Based on these criteria, we review, discuss and compare the state-of-the-art TEE-based secure computation protocols for both general-purpose computation functions and special-purpose ones.
arXiv Detail & Related papers (2023-02-23T16:33:56Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of the individuals sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
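The Lagrangian-duality idea in the last entry, folding fairness constraints into the training objective via a multiplier updated by dual ascent, can be sketched generically as a single primal-dual step. This is a toy scalar illustration of the general technique, not the paper's actual algorithm; all parameter names and learning rates are invented:

```python
def lagrangian_dual_step(loss_grad: float, constraint_val: float,
                         constraint_grad: float, theta: float, lam: float,
                         lr_primal: float = 0.01, lr_dual: float = 0.1):
    """One primal-dual update for min_theta max_{lam >= 0} loss + lam * constraint.
    constraint_val <= 0 means the (e.g., fairness) constraint is satisfied."""
    # Primal step: gradient descent on the Lagrangian with respect to theta.
    theta = theta - lr_primal * (loss_grad + lam * constraint_grad)
    # Dual step: gradient ascent on lam, projected onto lam >= 0.
    lam = max(0.0, lam + lr_dual * constraint_val)
    return theta, lam

# When the constraint is violated (constraint_val > 0), lam grows and the
# constraint term is penalized more heavily on subsequent primal steps.
theta, lam = lagrangian_dual_step(loss_grad=1.0, constraint_val=0.5,
                                  constraint_grad=2.0, theta=0.0, lam=0.0)
```

In the paper's setting, the privacy component comes separately, from training the primal model with differentially private gradients.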
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.