The Conflict Between Explainable and Accountable Decision-Making Algorithms
- URL: http://arxiv.org/abs/2205.05306v1
- Date: Wed, 11 May 2022 07:19:28 GMT
- Title: The Conflict Between Explainable and Accountable Decision-Making Algorithms
- Authors: Gabriel Lima, Nina Grgić-Hlača, Jin Keun Jeong, Meeyoung Cha
- Abstract summary: Decision-making algorithms are being used to make important decisions, such as who should be enrolled in health care programs and who should be hired.
The XAI initiative aims to make algorithms explainable in order to comply with legal requirements, promote trust, and maintain accountability.
This paper questions whether and to what extent explainability can help solve the responsibility issues posed by autonomous AI systems.
- Score: 10.64167691614925
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decision-making algorithms are being used to make important
decisions, such as who should be enrolled in health care programs and who
should be hired. Even though these
systems are currently deployed in high-stakes scenarios, many of them cannot
explain their decisions. This limitation has prompted the Explainable
Artificial Intelligence (XAI) initiative, which aims to make algorithms
explainable to comply with legal requirements, promote trust, and maintain
accountability. This paper questions whether and to what extent explainability
can help solve the responsibility issues posed by autonomous AI systems. We
suggest that XAI systems that provide post-hoc explanations could be seen as
blameworthy agents, obscuring the responsibility of developers in the
decision-making process. Furthermore, we argue that XAI could result in
incorrect attributions of responsibility to vulnerable stakeholders, such as
those who are subjected to algorithmic decisions (i.e., patients), due to a
misguided perception that they have control over explainable algorithms. This
conflict between explainability and accountability can be exacerbated if
designers choose to use algorithms and patients as moral and legal scapegoats.
We conclude with a set of recommendations for how to approach this tension in
the socio-technical process of algorithmic decision-making and a defense of
hard regulation to prevent designers from escaping responsibility.
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Assistive AI for Augmenting Human Decision-making [3.379906135388703]
The paper shows how AI can assist in the complex process of decision-making while maintaining human oversight.
Central to our framework are the principles of privacy, accountability, and credibility.
arXiv Detail & Related papers (2024-10-18T10:16:07Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Transcending XAI Algorithm Boundaries through End-User-Inspired Design [27.864338632191608]
A lack of explainability-focused functional support for end users may hinder the safe and responsible use of AI in high-stakes domains.
Our work shows that grounding the technical problem in end users' use of XAI can inspire new research questions.
Such end-user-inspired research questions have the potential to promote social good by democratizing AI and ensuring the responsible use of AI in critical domains.
arXiv Detail & Related papers (2022-08-18T09:44:51Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of the actions needed to realise the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology for generating counterfactual explanations (a generic sketch of the counterfactual idea appears after this list).
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals [7.727206277914709]
We propose a principled causality-based approach for explaining black-box decision-making systems.
We show how such counterfactuals can quantify the direct and indirect influences of a variable on decisions made by an algorithm.
We show how such counterfactuals can provide actionable recourse for individuals negatively affected by the algorithm's decision.
arXiv Detail & Related papers (2021-03-22T16:20:21Z)
- Conceptualising Contestability: Perspectives on Contesting Algorithmic Decisions [18.155121103400333]
We describe and analyse the perspectives of people and organisations who made submissions in response to Australia's proposed 'AI Ethics Framework'.
Our findings reveal that while the nature of contestability is disputed, it is seen as a way to protect individuals, and it resembles contestability in relation to human decision-making.
arXiv Detail & Related papers (2021-02-23T05:13:18Z)
- Contestable Black Boxes [10.552465253379134]
This paper investigates the type of assurances that are needed in the contesting process when algorithmic black-boxes are involved.
We argue that specialised complementary methodologies need to be developed for evaluating automated decision-making when a particular decision is contested.
arXiv Detail & Related papers (2020-06-09T09:09:00Z)
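For the two counterfactual-explanation entries above, the sketch below illustrates the generic idea they share: search for a small change to an instance's features that flips a model's decision to the desired outcome. It is a minimal, hypothetical Python example (toy data, a plain logistic-regression classifier, and a greedy single-feature search), not an implementation of CEILS or of probabilistic contrastive counterfactuals.

```python
# Minimal, hypothetical sketch of a counterfactual explanation search.
# This is NOT the CEILS method; it only illustrates the shared idea:
# find a small feature change that flips a classifier's decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy tabular data: two numeric features and a binary decision label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, target=1, step=0.05, max_iter=1000):
    """Greedily nudge the most influential feature until `model` predicts `target`."""
    x_cf = x.astype(float).copy()
    w = model.coef_[0]                      # feature weights of the linear model
    i = int(np.argmax(np.abs(w)))           # feature with the largest influence
    direction = np.sign(w[i]) if target == 1 else -np.sign(w[i])
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf                     # decision flipped: return the counterfactual
        x_cf[i] += step * direction         # move that feature toward the target class
    return None                             # no counterfactual found within the budget

x = np.array([-1.0, 1.0])                   # an instance currently classified as 0
x_cf = counterfactual(x, model)             # a nearby instance classified as 1
print("original      :", x, "->", model.predict(x.reshape(1, -1))[0])
print("counterfactual:", x_cf, "->", model.predict(x_cf.reshape(1, -1))[0])
```

Practical counterfactual and recourse methods additionally constrain the search to feasible, actionable feature changes, which is precisely the gap the CEILS entry highlights.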
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.