Wer ist schuld, wenn Algorithmen irren? Entscheidungsautomatisierung,
Organisationen und Verantwortung
- URL: http://arxiv.org/abs/2207.10479v1
- Date: Thu, 21 Jul 2022 13:45:10 GMT
- Title: Wer ist schuld, wenn Algorithmen irren? Entscheidungsautomatisierung,
Organisationen und Verantwortung
- Authors: Angelika Adensamer and Rita Gsenger and Lukas Daniel Klausner
- Abstract summary: Algorithmic decision support (ADS) is increasingly used in a whole array of different contexts and structures.
Our article aims to give a brief overview of the central issues connected to ADS, responsibility and decision-making in organisational contexts.
We describe a set of guidelines and a complementary digital tool to assist practitioners in mapping responsibility when introducing ADS.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic decision support (ADS) is increasingly used in a whole array of
different contexts and structures in various areas of society, influencing many
people's lives. Its use raises questions, among others, about accountability,
transparency and responsibility. Our article aims to give a brief overview of
the central issues connected to ADS, responsibility and decision-making in
organisational contexts and identify open questions and research gaps.
Furthermore, we describe a set of guidelines and a complementary digital tool
to assist practitioners in mapping responsibility when introducing ADS within
their organisational context.
--
Algorithmic decision support (ADS) is increasingly being used in various
contexts and structures and influences the lives of many people across diverse
areas of society. Its use raises a number of questions, among others on the
topics of accountability, transparency and responsibility. In the following, we
give an overview of the most important issues around ADS, responsibility and
decision-making in organisational contexts and point out some open questions
and research gaps. Furthermore, as concrete assistance for practice, we
describe a set of guidelines we developed, together with a complementary
digital tool, which is intended to help users in particular with locating and
assigning responsibility when using ADS in organisational contexts.
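
The guidelines and the digital tool are not specified in detail in the abstract. Purely as an illustration of what "mapping responsibility" for an ADS deployment could look like in code, here is a minimal, hypothetical sketch in Python; all role names, decision steps and fields are our own assumptions, not the structure of the authors' actual tool.

```python
# Hypothetical sketch of a responsibility map for an ADS deployment.
# Roles, decision steps and fields are illustrative assumptions, not
# the structure of the authors' actual guidelines or tool.
from dataclasses import dataclass, field


@dataclass
class Assignment:
    step: str             # decision step in the ADS workflow
    responsible: str      # who carries out and answers for this step
    accountable: str      # who is ultimately answerable for the outcome
    consulted: list[str] = field(default_factory=list)


@dataclass
class ResponsibilityMap:
    system: str
    assignments: list[Assignment] = field(default_factory=list)

    def unassigned(self, steps: list[str]) -> list[str]:
        """Return workflow steps nobody is responsible for; surfacing
        such gaps is the point of a responsibility-mapping exercise."""
        covered = {a.step for a in self.assignments}
        return [s for s in steps if s not in covered]


# Usage with a fictitious risk-scoring system:
rmap = ResponsibilityMap("risk-scoring ADS")
rmap.assignments.append(Assignment(
    step="override model recommendation",
    responsible="caseworker",
    accountable="unit lead",
    consulted=["data science team"],
))
print(rmap.unassigned(["override model recommendation", "audit model drift"]))
# -> ['audit model drift']
```

The point of such a structure is the gap check: a mapping exercise succeeds when every decision step in the workflow has someone responsible and accountable attached to it.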
Related papers
- The "Who'', "What'', and "How'' of Responsible AI Governance: A Systematic Review and Meta-Analysis of (Actor, Stage)-Specific Tools [10.439710801147033]
We present a systematic review and comprehensive meta-analysis of the current state of responsible AI tools.
Our findings reveal significant imbalances across the stakeholder roles and lifecycle stages addressed.
Despite the myriad of frameworks and tools for responsible AI, it remains unclear who within an organization and when in the AI lifecycle a tool applies.
arXiv Detail & Related papers (2025-02-18T21:31:31Z)
- Open Domain Question Answering with Conflicting Contexts [55.739842087655774]
We find that as much as 25% of unambiguous, open domain questions can lead to conflicting contexts when retrieved using Google Search.
We ask our annotators to provide explanations for their selections of correct answers.
arXiv Detail & Related papers (2024-10-16T07:24:28Z)
- How to Engage Your Readers? Generating Guiding Questions to Promote Active Reading [60.19226384241482]
We introduce GuidingQ, a dataset of 10K in-text questions from textbooks and scientific articles.
We explore various approaches to generate such questions using language models.
We conduct a human study to understand the implication of such questions on reading comprehension.
arXiv Detail & Related papers (2024-07-19T13:42:56Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Analysing and Organising Human Communications for AI Fairness-Related Decisions: Use Cases from the Public Sector [0.0]
Communication issues between diverse stakeholders can lead to misinterpretation and misuse of AI algorithms.
We conduct interviews with practitioners working on algorithmic systems in the public sector.
We identify key elements of communication processes that underlie fairness-related human decisions.
arXiv Detail & Related papers (2024-03-20T14:20:42Z)
- AutoGuide: Automated Generation and Selection of Context-Aware Guidelines for Large Language Model Agents [74.17623527375241]
We introduce a novel framework, called AutoGuide, which automatically generates context-aware guidelines from offline experiences.
As a result, our guidelines facilitate the provision of relevant knowledge for the agent's current decision-making process.
Our evaluation demonstrates that AutoGuide significantly outperforms competitive baselines in complex benchmark domains.
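As a rough sketch of the selection idea, not the authors' implementation: guidelines extracted offline are keyed by a context description, and the agent retrieves the ones closest to its current state. The data and the word-overlap heuristic below are purely illustrative assumptions.

```python
# Minimal sketch of context-conditioned guideline selection in the
# spirit of AutoGuide: guidelines extracted offline are keyed by a
# context description, and the agent retrieves those matching its
# current state. The entries and the overlap heuristic are assumptions.
guidelines = {
    # context -> guideline (extracted offline from past trajectories)
    "form page with required fields": "Fill all required fields before clicking submit.",
    "search results page": "Prefer refining the query over clicking unrelated links.",
}

def select_guidelines(current_context: str, top_k: int = 1) -> list[str]:
    """Rank stored guidelines by word overlap with the current context."""
    current = set(current_context.lower().split())
    scored = sorted(
        guidelines.items(),
        key=lambda kv: len(current & set(kv[0].split())),
        reverse=True,
    )
    return [g for _, g in scored[:top_k]]

# The selected guideline would be prepended to the agent's prompt.
print(select_guidelines("form page with two required fields"))
```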
arXiv Detail & Related papers (2024-03-13T22:06:03Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- 'Team-in-the-loop': Ostrom's IAD framework 'rules in use' to map and measure contextual impacts of AI [0.0]
This article explores how the 'rules in use' from Ostrom's Institutional Analysis and Development Framework (IAD) can be developed as a context analysis approach for AI.
arXiv Detail & Related papers (2023-03-24T14:01:00Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
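As a toy numerical illustration of that linkage (our own simulation, not the paper's framework): in a small structural model, turning one causal pathway off changes the disparity observed in the data, which is the kind of mechanism-level attribution the framework formalizes.

```python
# Toy simulation: observed disparity P(Y=1 | X=1) - P(Y=1 | X=0) in data
# generated from a simple structural model X -> W -> Y with a direct
# X -> Y effect. All coefficients are made-up illustration values,
# not part of the Causal Fairness Analysis framework itself.
import numpy as np

rng = np.random.default_rng(0)

def simulate(direct_effect: float, n: int = 100_000) -> float:
    x = rng.integers(0, 2, n)                       # protected attribute
    w = 0.4 * x + rng.normal(0, 1, n)               # mediator
    p = 1 / (1 + np.exp(-(direct_effect * x + w)))  # outcome probability
    y = rng.random(n) < p
    return y[x == 1].mean() - y[x == 0].mean()      # observed disparity

# Shutting off the direct pathway shrinks, but does not remove, the
# disparity still transmitted through the mediator W.
print(f"with direct effect:    {simulate(0.8):.3f}")
print(f"direct effect removed: {simulate(0.0):.3f}")
```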
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support [37.03030554731032]
We present findings from a series of interviews at a child welfare agency, to understand how they currently make AI-assisted child maltreatment screening decisions.
We observe how workers' reliance upon the ADS is guided by (1) their knowledge of rich, contextual information beyond what the AI model captures, (2) their beliefs about the ADS's capabilities and limitations relative to their own, (3) organizational pressures and incentives around the use of the ADS, and (4) awareness of misalignments between algorithmic predictions and their own decision-making objectives.
arXiv Detail & Related papers (2022-04-05T16:10:49Z)
- "Computer Says No": Algorithmic Decision Support and Organisational Responsibility [0.0]
Algorithmic decision support is increasingly used in a whole array of different contexts and structures.
Its use raises questions, among others, about accountability, transparency and responsibility.
arXiv Detail & Related papers (2021-10-21T10:24:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and accepts no responsibility for any consequences arising from its use.