TILT: A GDPR-Aligned Transparency Information Language and Toolkit for
Practical Privacy Engineering
- URL: http://arxiv.org/abs/2012.10431v1
- Date: Fri, 18 Dec 2020 18:45:04 GMT
- Title: TILT: A GDPR-Aligned Transparency Information Language and Toolkit for
Practical Privacy Engineering
- Authors: Elias Grünewald and Frank Pallas
- Abstract summary: TILT is a transparency information language and toolkit designed to represent and process transparency information.
We provide a detailed analysis of transparency obligations to identify the expressiveness required for a formal transparency language.
On this basis, we specify our formal language and present a respective, fully implemented toolkit.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present TILT, a transparency information language and
toolkit explicitly designed to represent and process transparency information
in line with the requirements of the GDPR and allowing for a more automated and
adaptive use of such information than established, legalese data protection
policies do.
We provide a detailed analysis of transparency obligations from the GDPR to
identify the expressiveness required for a formal transparency language
intended to meet respective legal requirements. In addition, we identify a set
of further, non-functional requirements that need to be met to foster practical
adoption in real-world (web) information systems engineering. On this basis, we
specify our formal language and present a respective, fully implemented toolkit
around it. We then evaluate the practical applicability of our language and
toolkit and demonstrate the additional prospects it unlocks through two
different use cases: a) the inter-organizational analysis of personal
data-related practices allowing, for instance, to uncover data sharing networks
based on explicitly announced transparency information and b) the presentation
of formally represented transparency information to users through novel, more
comprehensible, and potentially adaptive user interfaces, heightening data
subjects' actual informedness about data-related practices and, thus, their
sovereignty.
Altogether, our transparency information language and toolkit allow -
differently from previous work - to express transparency information in line
with actual legal requirements and practices of modern (web) information
systems engineering and thereby pave the way for a multitude of novel
possibilities to heighten transparency and user sovereignty in practice.
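The abstract describes transparency information that is machine-readable rather than expressed in legalese policies. A minimal sketch of what processing such a document could look like is shown below; the JSON field names (`controller`, `dataDisclosed`, `purposes`, `recipients`, `storagePeriod`) are illustrative assumptions for a GDPR-aligned structure, not the actual TILT schema.

```python
import json

# Hypothetical machine-readable transparency document -- the field names
# are assumptions for illustration, not the official TILT format.
tilt_doc = json.loads("""
{
  "controller": {"name": "Example GmbH", "country": "DE"},
  "dataDisclosed": [
    {
      "category": "email address",
      "purposes": ["newsletter delivery"],
      "recipients": ["MailProvider Inc."],
      "storagePeriod": "P2Y"
    }
  ]
}
""")

def recipients(doc):
    """Collect all third-party recipients announced in the document."""
    out = set()
    for item in doc.get("dataDisclosed", []):
        out.update(item.get("recipients", []))
    return out

print(sorted(recipients(tilt_doc)))
```

Because the information is formally represented, questions such as "who receives my data?" become simple queries instead of manual reading of a privacy policy.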
Related papers
- TRACE: TRansformer-based Attribution using Contrastive Embeddings in LLMs [50.259001311894295]
We propose a novel TRansformer-based Attribution framework using Contrastive Embeddings called TRACE.
We show that TRACE significantly improves the ability to attribute sources accurately, making it a valuable tool for enhancing the reliability and trustworthiness of large language models.
arXiv Detail & Related papers (2024-07-06T07:19:30Z) - Extending Business Process Management for Regulatory Transparency [0.0]
We bridge the gap between business processes and application systems by providing a plug-in extension to BPMN featuring regulatory transparency information.
We leverage process mining techniques to discover and analyze personal data flows in business processes.
arXiv Detail & Related papers (2024-06-14T12:08:34Z) - DSDL: Data Set Description Language for Bridging Modalities and Tasks in AI Data [50.88106211204689]
In the era of artificial intelligence, the diversity of data modalities and annotation formats often renders data unusable directly.
This article introduces a framework that aims to simplify dataset processing by providing a unified standard for AI datasets.
The standardized specifications of DSDL reduce the workload for users in data dissemination, processing, and usage.
arXiv Detail & Related papers (2024-05-28T16:07:45Z) - Empowering Prior to Court Legal Analysis: A Transparent and Accessible Dataset for Defensive Statement Classification and Interpretation [5.646219481667151]
This paper introduces a novel dataset tailored for classification of statements made during police interviews, prior to court proceedings.
We introduce a fine-tuned DistilBERT model that achieves state-of-the-art performance in distinguishing truthful from deceptive statements.
We also present an XAI interface that empowers both legal professionals and non-specialists to interact with and benefit from our system.
arXiv Detail & Related papers (2024-05-17T11:22:27Z) - Pre-trained Text-to-Image Diffusion Models Are Versatile Representation Learners for Control [73.6361029556484]
Embodied AI agents require a fine-grained understanding of the physical world mediated through visual and language inputs.
We consider pre-trained text-to-image diffusion models, which are explicitly optimized to generate images from text prompts.
We show that Stable Control Representations enable learning policies that exhibit state-of-the-art performance on OVMM, a difficult open-vocabulary navigation benchmark.
arXiv Detail & Related papers (2024-05-09T15:39:54Z) - Towards Cross-Provider Analysis of Transparency Information for Data
Protection [0.0]
This paper presents a novel approach to enable large-scale transparency information analysis across service providers.
We provide the general approach for advanced transparency information analysis, an open source architecture and implementation in the form of a queryable analysis platform.
Future work can build upon our contributions to gain more insights into so-far hidden data-sharing practices.
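The cross-provider analysis described above can be sketched as follows: once each provider publishes machine-readable transparency information, the announced recipients form edges of an inter-organizational data-sharing graph. The document structure here is an illustrative assumption, not the paper's actual data model.

```python
from collections import Counter

# Illustrative transparency documents from two providers; each names its
# controller and the recipients it announces sharing data with.
docs = [
    {"controller": "ShopA", "recipients": ["AdNetwork", "PayCo"]},
    {"controller": "ShopB", "recipients": ["AdNetwork"]},
]

# Edges controller -> recipient form the data-sharing network.
edges = {(d["controller"], r) for d in docs for r in d["recipients"]}

# Recipients named by several controllers hint at central data hubs.
hub_counts = Counter(r for _, r in edges)
print(hub_counts.most_common(1))  # AdNetwork is named by both providers
```

Aggregating such documents at scale is what makes so-far hidden data-sharing practices visible across service providers.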
arXiv Detail & Related papers (2023-09-01T10:36:09Z) - Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z) - Enabling Versatile Privacy Interfaces Using Machine-Readable
Transparency Information [0.0]
We argue that privacy shall incorporate the context of display, personal preferences, and individual competences of data subjects.
We provide a general model of how transparency information can be provided from a data controller to data subjects.
We show how transparency can be enhanced using machine-readable transparency information and how data controllers can meet respective regulatory obligations.
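The provision model described above can be sketched as a controller exposing a machine-readable document and a client rendering it according to the data subject's preferences. The rendering function and field names are hypothetical, not a specified API of the paper.

```python
import json

def render(doc, verbosity="plain"):
    """Render transparency information either as plain-language
    summaries or as raw machine-readable entries."""
    lines = []
    for item in doc.get("dataDisclosed", []):
        if verbosity == "plain":
            lines.append(f"We use your {item['category']} for "
                         f"{', '.join(item['purposes'])}.")
        else:
            lines.append(json.dumps(item))
    return "\n".join(lines)

doc = {"dataDisclosed": [{"category": "location",
                          "purposes": ["navigation"]}]}
print(render(doc))
```

Separating the machine-readable source from its presentation is what allows the same information to drive differently adapted privacy interfaces.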
arXiv Detail & Related papers (2023-02-21T20:40:26Z) - Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows the data owner to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z) - Dimensions of Transparency in NLP Applications [64.16277166331298]
Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested that a trade-off exists between greater system transparency and user confusion.
arXiv Detail & Related papers (2021-01-02T11:46:17Z) - The SPECIAL-K Personal Data Processing Transparency and Compliance
Platform [0.1385411134620987]
The SPECIAL EU H2020 project's policy language can be used to represent data policies and data-sharing events.
The system can verify that data processing and sharing complies with the data subject's consent.
arXiv Detail & Related papers (2020-01-26T14:30:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.