Trustworthy Transparency by Design
- URL: http://arxiv.org/abs/2103.10769v2
- Date: Fri, 19 May 2023 12:33:21 GMT
- Title: Trustworthy Transparency by Design
- Authors: Valentin Zieglmeier and Alexander Pretschner
- Abstract summary: We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
- Score: 57.67333075002697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Individuals lack oversight over systems that process their data. This can
lead to discrimination and hidden biases that are hard to uncover. Recent data
protection legislation tries to tackle these issues, but it is inadequate. It
does not prevent data misusage while stifling sensible use cases for data. We
think the conflict between data protection and increasingly data-based systems
should be solved differently. When access to data is given, all usages should
be made transparent to the data subjects. This enables their data sovereignty,
allowing individuals to benefit from sensible data usage while addressing
potentially harmful consequences of data misusage. We contribute to this with a
technical concept and an empirical evaluation. First, we conceptualize a
transparency framework for software design, incorporating research on user
trust and experience. Second, we instantiate and empirically evaluate the
framework in a focus group study over three months, centering on the user
perspective. Our transparency framework enables developing software that
incorporates transparency in its design. The evaluation shows that it satisfies
usability and trustworthiness requirements. The provided transparency is
experienced as beneficial and participants feel empowered by it. This shows
that our framework enables Trustworthy Transparency by Design.
Related papers
- Confidential Computing Transparency [7.9699781371465965]
We propose a Confidential Computing Transparency framework with progressive levels of transparency.
This framework goes beyond current measures like open-source code and audits by incorporating accountability for reviewers.
Our tiered approach provides a practical pathway to achieving transparency in complex, real-world systems.
arXiv Detail & Related papers (2024-09-05T17:24:05Z)
- AI data transparency: an exploration through the lens of AI incidents [2.255682336735152]
This research explores the status of public documentation about data practices within AI systems that have generated public concern.
We highlight a need to develop systematic ways of monitoring AI data transparency that account for the diversity of AI system types.
arXiv Detail & Related papers (2024-09-05T07:23:30Z)
- Lazy Data Practices Harm Fairness Research [49.02318458244464]
We present a comprehensive analysis of fair ML datasets, demonstrating how unreflective practices hinder the reach and reliability of algorithmic fairness findings.
Our analyses identify three main areas of concern: (1) a lack of representation for certain protected attributes in both data and evaluations; (2) the widespread exclusion of minorities during data preprocessing; and (3) opaque data processing threatening the generalization of fairness research.
This study underscores the need for a critical reevaluation of data practices in fair ML and offers directions to improve both the sourcing and usage of datasets.
arXiv Detail & Related papers (2024-04-26T09:51:24Z)
- Towards Generalizable Data Protection With Transferable Unlearnable Examples [50.628011208660645]
We present a novel, generalizable data protection method by generating transferable unlearnable examples.
To the best of our knowledge, this is the first solution that examines data privacy from the perspective of data distribution.
arXiv Detail & Related papers (2023-05-18T04:17:01Z)
- Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697]
We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that the required architectural changes can be made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
- Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z)
- Contributing to Accessibility Datasets: Reflections on Sharing Study Data by Blind People [14.625384963263327]
We present a pair of studies where 13 blind participants engage in data capturing activities.
We see how different factors influence blind participants' willingness to share study data as they assess risk-benefit tradeoffs.
The majority support sharing of their data to improve technology but also express concerns over commercial use, associated metadata, and the lack of transparency about the impact of their data.
arXiv Detail & Related papers (2023-03-09T00:42:18Z)
- Explainable Patterns: Going from Findings to Insights to Support Data Analytics Democratization [60.18814584837969]
We present Explainable Patterns (ExPatt), a new framework to support lay users in exploring and creating data storytelling.
ExPatt automatically generates plausible explanations for observed or selected findings using an external (textual) source of information.
arXiv Detail & Related papers (2021-01-19T16:13:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.