Wer ist schuld, wenn Algorithmen irren? Entscheidungsautomatisierung, Organisationen und Verantwortung
[Who Is to Blame When Algorithms Err? Decision Automation, Organisations and Responsibility]
- URL: http://arxiv.org/abs/2207.10479v1
- Date: Thu, 21 Jul 2022 13:45:10 GMT
- Title: Wer ist schuld, wenn Algorithmen irren? Entscheidungsautomatisierung,
Organisationen und Verantwortung
- Authors: Angelika Adensamer and Rita Gsenger and Lukas Daniel Klausner
- Abstract summary: Algorithmic decision support (ADS) is increasingly used in a whole array of different contexts and structures.
Our article aims to give a brief overview of the central issues connected to ADS, responsibility and decision-making in organisational contexts.
We describe a set of guidelines and a complementary digital tool to assist practitioners in mapping responsibility when introducing ADS.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic decision support (ADS) is increasingly used in a whole array of
different contexts and structures in various areas of society, influencing many
people's lives. Its use raises questions, among others, about accountability,
transparency and responsibility. Our article aims to give a brief overview of
the central issues connected to ADS, responsibility and decision-making in
organisational contexts and identify open questions and research gaps.
Furthermore, we describe a set of guidelines and a complementary digital tool
to assist practitioners in mapping responsibility when introducing ADS within
their organisational context.
--
Algorithmic decision support (ADS) is increasingly being used in various
contexts and structures and influences the lives of many people across diverse
areas of society. Its use raises a number of questions, among others on the
topics of accountability, transparency and responsibility. In the following, we
give an overview of the central issues around ADS, responsibility and
decision-making in organisational contexts and point out some open questions
and research gaps. Furthermore, as concrete assistance for practitioners, we
describe a guide we developed together with a complementary digital tool, which
is intended to help users in particular with locating and assigning
responsibility when using ADS in organisational contexts.
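
The guide and digital tool described above are not published as code; as a
purely illustrative sketch, under the assumption that such a tool's core job is
to record which organisational role answers for each step of an ADS-supported
process, a minimal responsibility map might look like the following (all names
and fields are hypothetical, not the authors' tool):

```python
# Hypothetical sketch of a responsibility map for an ADS-supported process.
# All names and fields are illustrative assumptions, not the authors' tool.
from dataclasses import dataclass, field


@dataclass
class ResponsibilityEntry:
    """One step of an ADS-supported decision process and who answers for it."""
    step: str             # e.g. "data collection", "model output review"
    role: str             # an organisational role, not an individual
    can_override: bool    # may this role overrule the algorithmic output?
    escalation_path: str  # whom to contact when the output is contested


@dataclass
class ResponsibilityMap:
    system: str
    entries: list[ResponsibilityEntry] = field(default_factory=list)

    def unassigned_steps(self, required_steps: list[str]) -> list[str]:
        """Flag process steps for which no role has been made responsible."""
        covered = {e.step for e in self.entries}
        return [s for s in required_steps if s not in covered]


# Example: a small map for a hypothetical benefits-screening ADS.
rmap = ResponsibilityMap(system="benefits screening ADS")
rmap.entries.append(ResponsibilityEntry(
    step="model output review", role="caseworker",
    can_override=True, escalation_path="unit supervisor"))
print(rmap.unassigned_steps(["data collection", "model output review"]))
# -> ['data collection']  (a responsibility gap the guide would surface)
```

Recording roles rather than named individuals mirrors the article's
organisational framing: responsibility attaches to positions in the structure,
not to whoever happens to hold them.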
Related papers
- Open Domain Question Answering with Conflicting Contexts [55.739842087655774]
We find that as much as 25% of unambiguous, open-domain questions can lead to conflicting contexts when retrieved using Google Search.
We ask our annotators to provide explanations for their selections of correct answers.
arXiv Detail & Related papers (2024-10-16T07:24:28Z)
- How to Engage Your Readers? Generating Guiding Questions to Promote Active Reading [60.19226384241482]
We introduce GuidingQ, a dataset of 10K in-text questions from textbooks and scientific articles.
We explore various approaches to generate such questions using language models.
We conduct a human study to understand the implication of such questions on reading comprehension.
arXiv Detail & Related papers (2024-07-19T13:42:56Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Analysing and Organising Human Communications for AI Fairness-Related Decisions: Use Cases from the Public Sector [0.0]
Communication issues between diverse stakeholders can lead to misinterpretation and misuse of AI algorithms.
We conduct interviews with practitioners working on algorithmic systems in the public sector.
We identify key elements of communication processes that underlie fairness-related human decisions.
arXiv Detail & Related papers (2024-03-20T14:20:42Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We examine how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- ChoiceMates: Supporting Unfamiliar Online Decision-Making with Multi-Agent Conversational Interactions [58.71970923420007]
We present ChoiceMates, a system that enables conversations with a dynamic set of LLM-powered agents.
Agents, as opinionated personas, flexibly join the conversation, not only providing responses but also conversing among themselves to elicit each agent's preferences.
Our study (n=36) comparing ChoiceMates to conventional web search and a single-agent baseline showed that ChoiceMates was more helpful for discovering, diving deeper into, and managing information, with participants reporting higher confidence.
arXiv Detail & Related papers (2023-10-02T16:49:39Z)
- 'Team-in-the-loop': Ostrom's IAD framework 'rules in use' to map and measure contextual impacts of AI [0.0]
This article explores how the 'rules in use' from Ostrom's Institutional Analysis and Development Framework (IAD) can be developed as a context analysis approach for AI.
arXiv Detail & Related papers (2023-03-24T14:01:00Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, the first systematic attempt to organize and explain the relationship between the different criteria found in the literature; a toy example of the kind of observational disparity measure such a framework starts from is sketched after this list.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- "There Is Not Enough Information": On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making [0.0]
Automated decision systems (ADS) are increasingly used for consequential decision-making.
We conduct a human subject study to assess people's perceptions of informational fairness.
A comprehensive analysis of qualitative feedback sheds light on people's desiderata for explanations.
arXiv Detail & Related papers (2022-05-11T20:06:03Z)
- Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support [37.03030554731032]
We present findings from a series of interviews at a child welfare agency, to understand how they currently make AI-assisted child maltreatment screening decisions.
We observe how workers' reliance upon the ADS is guided by (1) their knowledge of rich, contextual information beyond what the AI model captures, (2) their beliefs about the ADS's capabilities and limitations relative to their own, and (3) their awareness of misalignments between algorithmic predictions and their own decision-making objectives.
arXiv Detail & Related papers (2022-04-05T16:10:49Z)
- "Computer Says No": Algorithmic Decision Support and Organisational Responsibility [0.0]
Algorithmic decision support is increasingly used in a whole array of different contexts and structures.
Its use raises questions, among others, about accountability, transparency and responsibility.
arXiv Detail & Related papers (2021-10-21T10:24:47Z)
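
As a loose companion to the Causal Fairness Analysis entry above: a minimal
sketch of the kind of purely observational disparity measure (total variation
between two groups) that causal approaches set out to decompose. The
decomposition into causal mechanisms is that paper's actual contribution and is
not implemented here; the data below are made up.

```python
# Toy observational disparity measure: total variation (TV) between groups.
# This only quantifies a disparity in observed data; explaining it via causal
# mechanisms (the point of Causal Fairness Analysis) is NOT attempted here.

def total_variation(outcomes: list[int], groups: list[int]) -> float:
    """P(Y=1 | group=1) - P(Y=1 | group=0), estimated from observed data."""
    def positive_rate(g: int) -> float:
        ys = [y for y, a in zip(outcomes, groups) if a == g]
        return sum(ys) / len(ys) if ys else 0.0
    return positive_rate(1) - positive_rate(0)

# Made-up decisions: 1 = favourable outcome; groups = a protected attribute.
y = [1, 0, 1, 1, 0, 0, 1, 0]
a = [1, 1, 1, 1, 0, 0, 0, 0]
print(total_variation(y, a))  # 0.5 -- a gap that still needs a causal account
```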
This list is automatically generated from the titles and abstracts of the papers on this site.