"Computer Says No": Algorithmic Decision Support and Organisational
Responsibility
- URL: http://arxiv.org/abs/2110.11037v2
- Date: Thu, 23 Jun 2022 08:05:22 GMT
- Title: "Computer Says No": Algorithmic Decision Support and Organisational
Responsibility
- Authors: Angelika Adensamer, Rita Gsenger, Lukas Daniel Klausner
- Abstract summary: Algorithmic decision support is increasingly used in a whole array of different contexts and structures.
Its use raises questions, among others, about accountability, transparency and responsibility.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic decision support is increasingly used in a whole array of
different contexts and structures in various areas of society, influencing many
people's lives. Its use raises questions, among others, about accountability,
transparency and responsibility. While there is substantial research on the
issue of algorithmic systems and responsibility in general, there is little to
no prior research on organisational responsibility and its attribution. Our
article aims to fill that gap; we give a brief overview of the central issues
connected to ADS, responsibility and decision-making in organisational contexts
and identify open questions and research gaps. Furthermore, we describe a set
of guidelines and a complementary digital tool to assist practitioners in
mapping responsibility when introducing ADS within their organisational
context.
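
As an illustration of what such responsibility mapping might look like in practice, here is a minimal Python sketch; the decision steps, role names and the gap check are assumptions made for this example, not the authors' actual guidelines or tool.

    # Hypothetical sketch of a responsibility map for an ADS rollout; the
    # steps, roles and gap check are invented for illustration only.
    from dataclasses import dataclass, field

    @dataclass
    class DecisionStep:
        name: str
        responsible: str | None = None    # who answers for this step
        can_override: bool = False        # may a human overrule the ADS here?

    @dataclass
    class ResponsibilityMap:
        steps: list[DecisionStep] = field(default_factory=list)

        def gaps(self) -> list[str]:
            """Steps where no one in the organisation holds responsibility."""
            return [s.name for s in self.steps if s.responsible is None]

    rollout = ResponsibilityMap([
        DecisionStep("model procurement", responsible="IT lead"),
        DecisionStep("input data quality", responsible="case team"),
        DecisionStep("individual decision", can_override=True),  # unassigned
        DecisionStep("appeals handling"),                        # unassigned
    ])
    print("responsibility gaps:", rollout.gaps())
    # responsibility gaps: ['individual decision', 'appeals handling']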
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
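
A minimal Python sketch of the counterfactual flavour of SCM-based attribution; the model, variable names and attribution rule are illustrative assumptions, not the paper's actual framework.

    # Hypothetical sketch: counterfactual responsibility attribution in a tiny
    # structural causal model. An agent counts as responsible if changing only
    # its action would have flipped the observed outcome.

    def ai_recommendation(case_risk: float) -> bool:
        """The AI flags a case when its risk estimate exceeds a threshold."""
        return case_risk > 0.5

    def human_decision(ai_flag: bool, override: bool) -> bool:
        """The human follows the AI unless they actively override it."""
        return (not ai_flag) if override else ai_flag

    def outcome(decision: bool) -> str:
        return "denied" if decision else "approved"

    def attribute(case_risk: float, override: bool) -> dict:
        observed = outcome(human_decision(ai_recommendation(case_risk), override))
        # Counterfactual 1: the AI had recommended the opposite.
        cf_ai = outcome(human_decision(not ai_recommendation(case_risk), override))
        # Counterfactual 2: the human had made the opposite override choice.
        cf_human = outcome(human_decision(ai_recommendation(case_risk), not override))
        return {"observed": observed,
                "ai_responsible": cf_ai != observed,
                "human_responsible": cf_human != observed}

    print(attribute(case_risk=0.8, override=False))
    # {'observed': 'denied', 'ai_responsible': True, 'human_responsible': True}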
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and its explanations through human interaction.
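
A toy Python sketch of the adversarial-explanation idea: the smallest perturbation that flips a policy's action hints at which inputs the decision hinges on. The linear policy and the random search are illustrative assumptions, not the paper's method.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(2, 4))          # toy linear policy: 4 features, 2 actions

    def act(state: np.ndarray) -> int:
        return int(np.argmax(W @ state))

    def adversarial_explanation(state, step=0.05, max_norm=1.0):
        """Grow a random-direction perturbation until the action flips."""
        base = act(state)
        for scale in np.arange(step, max_norm + step, step):
            for _ in range(64):                  # random directions per scale
                delta = rng.normal(size=state.shape)
                delta *= scale / np.linalg.norm(delta)
                if act(state + delta) != base:
                    return delta                 # small flipping perturbation
        return None                              # robust within max_norm

    state = np.array([0.3, -0.1, 0.8, 0.2])
    delta = adversarial_explanation(state)
    if delta is None:
        print("policy robust within the search budget")
    else:
        print("action flips under a perturbation of norm",
              round(float(np.linalg.norm(delta)), 3))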
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Analysing and Organising Human Communications for AI Fairness-Related Decisions: Use Cases from the Public Sector
Communication issues between diverse stakeholders can lead to misinterpretation and misuse of AI algorithms.
We conduct interviews with practitioners working on algorithmic systems in the public sector.
We identify key elements of communication processes that underlie fairness-related human decisions.
arXiv Detail & Related papers (2024-03-20T14:20:42Z)
- Responsible AI Governance: A Systematic Literature Review
This paper aims to examine the existing literature on AI Governance.
The focus of this study is to analyse the literature to answer key questions: WHO is accountable for AI systems' governance, WHAT elements are being governed, WHEN governance occurs within the AI development life cycle, and HOW it is executed through various mechanisms like frameworks, tools, standards, policies, or models.
The findings of this study provide a foundation for future research and the development of comprehensive governance models that align with RAI principles.
arXiv Detail & Related papers (2023-12-18T05:22:36Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We examine how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- ChoiceMates: Supporting Unfamiliar Online Decision-Making with Multi-Agent Conversational Interactions
We present ChoiceMates, a system that enables conversations with a dynamic set of LLM-powered agents.
Agents, as opinionated personas, flexibly join the conversation, not only providing responses but also conversing among themselves to elicit each agent's preferences.
Our study (n=36) comparing ChoiceMates to conventional web search and a single-agent baseline showed that ChoiceMates was more helpful in discovering, diving deeper into, and managing information than web search, with higher user confidence.
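
A rough Python sketch of such a multi-persona conversation loop; `complete` is a hypothetical stand-in for any LLM completion call, and the prompt format is an assumption, not ChoiceMates' implementation.

    from typing import Callable

    def make_agent(name: str, persona: str,
                   complete: Callable[[str], str]) -> Callable[[str, list[str]], str]:
        def respond(question: str, peer_replies: list[str]) -> str:
            prompt = (f"You are {name}, {persona}.\n"
                      f"Peer opinions so far: {peer_replies}\n"
                      f"User question: {question}\n"
                      "Give an opinionated recommendation in one paragraph.")
            return complete(prompt)
        return respond

    def discuss(question: str, agents: dict) -> dict[str, str]:
        """One round: each persona answers, seeing the replies so far."""
        replies: dict[str, str] = {}
        for name, respond in agents.items():
            replies[name] = respond(question, list(replies.values()))
        return replies

    fake_llm = lambda prompt: f"(stub reply; prompt began: {prompt[:30]}...)"
    agents = {name: make_agent(name, persona, fake_llm)
              for name, persona in [("Budget", "a price-conscious shopper"),
                                    ("Expert", "a domain specialist")]}
    print(discuss("Which laptop should I buy?", agents))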
arXiv Detail & Related papers (2023-10-02T16:49:39Z)
- Unravelling Responsibility for AI
It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems.
This paper draws upon central distinctions in philosophy and law to clarify the concept of responsibility for AI.
arXiv Detail & Related papers (2023-08-04T13:12:17Z)
- Interpreting Neural Policies with Disentangled Tree Representations
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
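
A small Python sketch of the surrogate-tree idea: fit a decision tree to a policy's decisions and read its splits as candidate factors of variation. The toy policy and data are invented; the paper's actual metrics differ.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    states = rng.uniform(-1, 1, size=(500, 3))       # toy observations

    def policy(s: np.ndarray) -> int:
        """Stand-in neural policy: the action depends mainly on feature 0."""
        return int(s[0] + 0.1 * s[1] > 0)

    actions = np.array([policy(s) for s in states])

    tree = DecisionTreeClassifier(max_depth=2).fit(states, actions)
    print(export_text(tree, feature_names=["f0", "f1", "f2"]))
    print("surrogate fidelity:", tree.score(states, actions))
    # Splits concentrated on f0 suggest the policy's decision factor is
    # disentangled along that feature.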
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- Causal Fairness Analysis
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
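
A toy Python sketch of the core idea: decompose an observed disparity into a direct and a mediated component on synthetic data with known linear mechanisms. Entirely illustrative; the paper's framework is far more general.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    group = rng.integers(0, 2, n)                   # protected attribute
    noise_m, noise_y = rng.normal(0, 1, n), rng.normal(0, 1, n)
    mediator = 0.8 * group + noise_m                # e.g. a proxy variable
    score = 0.5 * group + 1.0 * mediator + noise_y  # observed decision score

    # Observed disparity between groups (total variation):
    tv = score[group == 1].mean() - score[group == 0].mean()

    # Counterfactual: remove only the direct arrow, keep each unit's mediator.
    score_cf = 0.5 * 0 + 1.0 * mediator + noise_y
    direct = (score - score_cf)[group == 1].mean()

    print(f"total {tv:.2f} = direct {direct:.2f} + mediated {tv - direct:.2f}")
    # total 1.30 = direct 0.50 + mediated 0.80 (up to sampling noise)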
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Wer ist schuld, wenn Algorithmen irren? Entscheidungsautomatisierung, Organisationen und Verantwortung (Who Is to Blame When Algorithms Err? Decision Automation, Organisations and Responsibility)
Algorithmic decision support (ADS) is increasingly used in a whole array of different contexts and structures.
Our article aims to give a brief overview of the central issues connected to ADS, responsibility and decision-making in organisational contexts.
We describe a set of guidelines and a complementary digital tool to assist practitioners in mapping responsibility when introducing ADS.
arXiv Detail & Related papers (2022-07-21T13:45:10Z)
- Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support
We present findings from a series of interviews at a child welfare agency to understand how workers currently make AI-assisted child maltreatment screening decisions.
We observe how workers' reliance upon the ADS is guided by (1) their knowledge of rich, contextual information beyond what the AI model captures, (2) their beliefs about the ADS's capabilities and limitations relative to their own, and (3) their awareness of misalignments between algorithmic predictions and their own decision-making objectives.
arXiv Detail & Related papers (2022-04-05T16:10:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and accepts no responsibility for any consequences arising from its use.