Rethinking People Analytics With Inverse Transparency by Design
- URL: http://arxiv.org/abs/2305.09813v2
- Date: Wed, 26 Jul 2023 14:16:35 GMT
- Title: Rethinking People Analytics With Inverse Transparency by Design
- Authors: Valentin Zieglmeier and Alexander Pretschner
- Abstract summary: We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that architectural changes are made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
- Score: 57.67333075002697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Employees work in increasingly digital environments that enable advanced
analytics. Yet, they lack oversight over the systems that process their data.
That means that potential analysis errors or hidden biases are hard to uncover.
Recent data protection legislation tries to tackle these issues, but it is
inadequate. It does not prevent data misusage while at the same time stifling
sensible use cases for data.
We think the conflict between data protection and increasingly data-driven
systems should be solved differently. When access to an employees' data is
given, all usages should be made transparent to them, according to the concept
of inverse transparency. This allows individuals to benefit from sensible data
usage while addressing the potentially harmful consequences of data misusage.
To accomplish this, we propose a new design approach for workforce analytics we
refer to as inverse transparency by design.
To understand the developer and user perspectives on the proposal, we conduct
two exploratory studies with students. First, we let small teams of developers
implement analytics tools with inverse transparency by design to uncover how
they judge the approach and how it materializes in their developed tools. We
find that architectural changes are made without inhibiting core functionality.
The developers consider our approach valuable and technically feasible. Second,
we conduct a user study over three months to let participants experience the
provided inverse transparency and reflect on their experience. The study models
a software development workplace where most work processes are already digital.
Participants perceive the transparency as beneficial and feel empowered by it.
They unanimously agree that it would be an improvement for the workplace. We
conclude that inverse transparency by design is a promising approach to realize
accepted and responsible people analytics.
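To make the approach more concrete, the following is a minimal Python sketch of what "inverse transparency by design" can look like inside an analytics tool: every usage of an employee's data is appended to a log that the affected employee can query. All names here (UsageRecord, UsageLog, logs_usage, the toy analytics function) are hypothetical illustrations, not the authors' implementation or the API of the Inverse Transparency Toolchain.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Iterable


@dataclass(frozen=True)
class UsageRecord:
    """One logged usage of an employee's data (hypothetical schema)."""
    owner: str        # employee whose data was used
    analyst: str      # person or tool that used the data
    purpose: str      # declared reason for the usage
    timestamp: datetime


@dataclass
class UsageLog:
    """Append-only log that data owners can query."""
    _records: list[UsageRecord] = field(default_factory=list)

    def record(self, owner: str, analyst: str, purpose: str) -> None:
        self._records.append(
            UsageRecord(owner, analyst, purpose, datetime.now(timezone.utc))
        )

    def usages_for(self, owner: str) -> list[UsageRecord]:
        # Employee-facing view: every usage of this owner's data.
        return [r for r in self._records if r.owner == owner]


def logs_usage(log: UsageLog, analyst: str, purpose: str) -> Callable:
    """Decorator that records a usage entry for every affected employee
    before the wrapped analytics function runs."""
    def wrap(func: Callable) -> Callable:
        def wrapper(employee_ids: Iterable[str], *args, **kwargs):
            for owner in employee_ids:
                log.record(owner, analyst, purpose)
            return func(employee_ids, *args, **kwargs)
        return wrapper
    return wrap


# Example analytics tool built with logging as part of its design:
log = UsageLog()

@logs_usage(log, analyst="team-dashboard", purpose="weekly workload report")
def average_commits(employee_ids: list[str]) -> float:
    commits = {"alice": 12, "bob": 7}  # toy data
    return sum(commits.get(e, 0) for e in employee_ids) / len(employee_ids)

average_commits(["alice", "bob"])
print([(r.analyst, r.purpose) for r in log.usages_for("alice")])
```

The point of the sketch is the design choice: logging is not optional instrumentation bolted on afterwards but part of the tool's architecture, so the analytics function cannot run without declaring who uses the data and for what purpose, and the affected employee can always inspect the resulting log.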
Related papers
- Insights from an experiment crowdsourcing data from thousands of US Amazon users: The importance of transparency, money, and data use [6.794366017852433]
This paper shares an innovative approach to crowdsourcing user data to collect otherwise inaccessible Amazon purchase histories, spanning 5 years, from more than 5000 US users.
We developed a data collection tool that prioritizes participant consent and includes an experimental study design.
Experiment results (N=6325) reveal that both monetary incentives and transparency can significantly increase data sharing.
arXiv Detail & Related papers (2024-04-19T20:45:19Z) - User Strategization and Trustworthy Algorithms [81.82279667028423]
We show that user strategization can actually help platforms in the short term.
We then show that it corrupts platforms' data and ultimately hurts their ability to make counterfactual decisions.
arXiv Detail & Related papers (2023-12-29T16:09:42Z) - The Inverse Transparency Toolchain: A Fully Integrated and Quickly
Deployable Data Usage Logging Infrastructure [0.0]
Inverse transparency is created by making all usages of employee data visible to the affected employees.
For research and teaching contexts that integrate inverse transparency, creating this required infrastructure can be challenging.
The Inverse Transparency Toolchain presents a flexible solution for such scenarios.
arXiv Detail & Related papers (2023-08-08T16:04:48Z) - Black-box Dataset Ownership Verification via Backdoor Watermarking [67.69308278379957]
We formulate the protection of released datasets as verifying whether they are adopted for training a (suspicious) third-party model.
We propose embedding external patterns via backdoor watermarking to enable ownership verification and protect released datasets.
Specifically, we exploit poison-only backdoor attacks (e.g., BadNets) for dataset watermarking and design a hypothesis-test-guided method for dataset verification.
arXiv Detail & Related papers (2022-08-04T05:32:20Z) - Representative & Fair Synthetic Data [68.8204255655161]
We present a framework to incorporate fairness constraints into the self-supervised learning process.
We generate a representative as well as fair version of the UCI Adult census data set.
We consider representative & fair synthetic data a promising future building block to teach algorithms not on historic worlds, but rather on the worlds that we strive to live in.
arXiv Detail & Related papers (2021-04-07T09:19:46Z) - Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z) - Explainable Patterns: Going from Findings to Insights to Support Data
Analytics Democratization [60.18814584837969]
We present Explainable Patterns (ExPatt), a new framework to support lay users in exploring and creating data storytellings.
ExPatt automatically generates plausible explanations for observed or selected findings using an external (textual) source of information.
arXiv Detail & Related papers (2021-01-19T16:13:44Z) - Algorithmic Transparency with Strategic Users [9.289838852590732]
We show that, in some cases, even the predictive power of machine learning algorithms may increase if the firm makes them transparent.
arXiv Detail & Related papers (2020-08-21T03:10:42Z)