Data Protection Impact Assessment for the Corona App
- URL: http://arxiv.org/abs/2101.07292v1
- Date: Mon, 18 Jan 2021 19:23:30 GMT
- Title: Data Protection Impact Assessment for the Corona App
- Authors: Kirsten Bock, Christian R. Kühne, Rainer Mühlhoff, Měto R. Ost, Jörg Pohle, Rainer Rehak
- Abstract summary: Since SARS-CoV-2 started spreading in Europe in early 2020, there has been a strong call for technical solutions to combat or contain the pandemic, with contact tracing apps at the heart of the debates.
The EU's General Data Protection Regulation (GDPR) requires controllers to carry out a data protection impact assessment (DPIA).
We present a scientific DPIA which thoroughly examines three published contact tracing app designs that are considered to be the most "privacy-friendly".
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since SARS-CoV-2 started spreading in Europe in early 2020, there has been a
strong call for technical solutions to combat or contain the pandemic, with
contact tracing apps at the heart of the debates. The EU's General Data
Protection Regulation (GDPR) requires controllers to carry out a data
protection impact assessment (DPIA) where their data processing is likely to
result in a high risk to the rights and freedoms of natural persons (Art. 35 GDPR). A DPIA is a
structured risk analysis that identifies and evaluates possible consequences of
data processing relevant to fundamental rights and describes the measures
envisaged to address these risks or expresses the inability to do so. Based on
the Standard Data Protection Model (SDM), we present a scientific DPIA which
thoroughly examines three published contact tracing app designs that are
considered to be the most "privacy-friendly": PEPP-PT, DP-3T and a concept
summarized by Chaos Computer Club member Linus Neumann, all of which process
personal health data. The DPIA starts with an analysis of the processing
context and some expected use cases. Then, the processing activities are
described by defining a realistic processing purpose. This is followed by the
legal assessment and threshold analysis. Finally, we analyse the weak points,
the risks and determine appropriate protective measures. We show that even
decentralized implementations involve numerous serious weaknesses and risks.
Legally, consent is unfit as a legal ground; hence, the data must be processed on
the basis of a law. We also found that measures to realize the rights of data subjects and
affected people are not sufficient. Last but not least, we show that
anonymization must be understood as a continuous process, which aims at
separating the personal reference and is based on a mix of legal,
organizational and technical measures. All currently available proposals lack
such an explicit separation process.
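
To make the object of the assessment more concrete, below is a minimal, illustrative sketch of the kind of decentralized design the DPIA examines: ephemeral identifiers derived on-device from a secret daily key, broadcast to nearby devices, and matched locally against the day keys published by users who tested positive. The key derivation, epoch length, function names and parameters are assumptions chosen for illustration; this is not the actual PEPP-PT or DP-3T specification.

```python
# Illustrative sketch only; NOT the actual PEPP-PT/DP-3T specification.
# It mimics the general shape of a decentralized contact tracing design:
# a device derives short-lived ephemeral IDs from a local daily key,
# broadcasts them, and later matches locally against day keys published
# by infected users. All names and parameters are assumptions.

import hmac
import hashlib
import os

EPOCHS_PER_DAY = 96  # e.g. one ephemeral ID per 15-minute epoch (assumed)

def new_daily_key() -> bytes:
    """Generate a random secret day key that normally never leaves the device."""
    return os.urandom(32)

def ephemeral_ids(day_key: bytes) -> list[bytes]:
    """Derive the day's ephemeral IDs from the secret day key."""
    return [
        hmac.new(day_key, f"epoch-{i}".encode(), hashlib.sha256).digest()[:16]
        for i in range(EPOCHS_PER_DAY)
    ]

def match_locally(observed_ids: set[bytes], published_day_keys: list[bytes]) -> bool:
    """Check on-device whether any overheard ID belongs to an infected user.

    Only the day keys of users who tested positive are uploaded and
    published; the exposure check runs locally, which is what makes the
    design 'decentralized'.
    """
    for key in published_day_keys:
        if any(eid in observed_ids for eid in ephemeral_ids(key)):
            return True
    return False

# Toy usage: Alice's phone overhears one of Bob's IDs; Bob later tests
# positive and his day key is published.
bob_key = new_daily_key()
alice_observed = {ephemeral_ids(bob_key)[10]}
print(match_locally(alice_observed, [bob_key]))  # -> True (exposure detected)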
Related papers
- A Personal data Value at Risk Approach [0.0]
This paper proposes a quantitative approach to data protection risk-based compliance from a data controller's perspective.
It aims at proposing a mindset change, where data protection impact assessments can be improved by using data protection analytics, quantitative risk analysis, and calibrating expert opinions.
arXiv Detail & Related papers (2024-11-05T16:09:28Z) - Evaluating Copyright Takedown Methods for Language Models [100.38129820325497]
Language models (LMs) derive their capabilities from extensive training on diverse data, including potentially copyrighted material.
This paper introduces the first evaluation of the feasibility and side effects of copyright takedowns for LMs.
We examine several strategies, including adding system prompts, decoding-time filtering interventions, and unlearning approaches.
arXiv Detail & Related papers (2024-06-26T18:09:46Z) - Modelling Technique for GDPR-compliance: Toward a Comprehensive Solution [0.0]
New data protection legislation in the EU/UK has come into force.
Existing threat modelling techniques are not designed to model compliance.
We propose a new data flow modelling approach integrated with a knowledge base of non-compliance threats.
arXiv Detail & Related papers (2024-04-22T08:41:43Z) - Towards an Enforceable GDPR Specification [49.1574468325115]
Privacy by Design (PbD) is prescribed by modern privacy regulations such as the EU's GDPR.
One emerging technique to realize PbD is runtime enforcement (RE).
We present a set of requirements and an iterative methodology for creating formal specifications of legal provisions.
arXiv Detail & Related papers (2024-02-27T09:38:51Z) - When is Off-Policy Evaluation (Reward Modeling) Useful in Contextual Bandits? A Data-Centric Perspective [64.73162159837956]
Evaluating the value of a hypothetical target policy using only a logged dataset is important but challenging.
We propose DataCOPE, a data-centric framework for evaluating a target policy given a dataset.
Our empirical analysis of DataCOPE in the logged contextual bandit settings using healthcare datasets confirms its ability to evaluate both machine-learning and human expert policies.
arXiv Detail & Related papers (2023-11-23T17:13:37Z) - A Multi-solution Study on GDPR AI-enabled Completeness Checking of DPAs [3.1002416427168304]
The General Data Protection Regulation (GDPR) requires a data processing agreement (DPA), which regulates the processing and ensures personal data remains protected.
Checking the completeness of a DPA against the prerequisite provisions is therefore essential to ensure that the requirements are complete.
We propose an automation strategy to address the completeness checking of DPAs against stipulated provisions.
arXiv Detail & Related papers (2023-11-23T10:05:52Z) - EU law and emotion data [0.0]
This article sheds light on the legal implications and challenges surrounding emotion data processing within the EU's legal framework.
We discuss the nuances of different approaches to affective computing and their relevance to the processing of special data.
We highlight some of the consequences, including harm, that processing of emotion data may have for individuals concerned.
arXiv Detail & Related papers (2023-09-19T17:25:02Z) - Stop Uploading Test Data in Plain Text: Practical Strategies for
Mitigating Data Contamination by Evaluation Benchmarks [70.39633252935445]
Data contamination has become prevalent and challenging with the rise of models pretrained on large automatically-crawled corpora.
For closed models, the training data becomes a trade secret, and even for open models, it is not trivial to detect contamination.
We propose three strategies that can make a difference: (1) Test data made public should be encrypted with a public key and licensed to disallow derivative distribution; (2) demand training exclusion controls from closed API holders, and protect your test data by refusing to evaluate without them; and (3) avoid data which appears with its solution on the internet, and release the web-page context of internet-derived data.
arXiv Detail & Related papers (2023-05-17T12:23:38Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive
Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows the data owner to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z) - Privacy Preservation in Federated Learning: An insightful survey from
the GDPR Perspective [10.901568085406753]
This article surveys state-of-the-art privacy-preservation techniques that can be employed in Federated Learning (FL).
Recent research has demonstrated that keeping data and computation local in FL is not enough to guarantee privacy.
This is because the ML model parameters exchanged between parties in an FL system can be exploited in some privacy attacks.
arXiv Detail & Related papers (2020-11-10T21:41:25Z)