Reviewable Automated Decision-Making: A Framework for Accountable
Algorithmic Systems
- URL: http://arxiv.org/abs/2102.04201v2
- Date: Wed, 10 Feb 2021 11:48:42 GMT
- Title: Reviewable Automated Decision-Making: A Framework for Accountable
Algorithmic Systems
- Authors: Jennifer Cobbe, Michelle Seng Ah Lee, Jatinder Singh
- Abstract summary: This paper introduces reviewability as a framework for improving the accountability of automated and algorithmic decision-making (ADM).
We draw on an understanding of ADM as a socio-technical process involving both human and technical elements, beginning before a decision is made and extending beyond the decision itself.
We argue that a reviewability framework, drawing on administrative law's approach to reviewing human decision-making, offers a practical way forward towards a more holistic and legally-relevant form of accountability for ADM.
- Score: 1.7403133838762448
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces reviewability as a framework for improving the
accountability of automated and algorithmic decision-making (ADM) involving
machine learning. We draw on an understanding of ADM as a socio-technical
process involving both human and technical elements, beginning before a
decision is made and extending beyond the decision itself. While explanations
and other model-centric mechanisms may address some accountability concerns,
they often provide insufficient information about these broader ADM processes for
regulatory oversight and assessments of legal compliance. Reviewability
involves breaking down the ADM process into technical and organisational
elements to provide a systematic framework for determining the contextually
appropriate record-keeping mechanisms to facilitate meaningful review - both of
individual decisions and of the process as a whole. We argue that a
reviewability framework, drawing on administrative law's approach to reviewing
human decision-making, offers a practical way forward towards a more
holistic and legally-relevant form of accountability for ADM.
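As a purely illustrative sketch (not a specification from the paper), the Python below shows one way record-keeping for reviewability might look in practice: an append-only log whose entries cover organisational as well as technical elements of an ADM process, so that both an individual decision and the process as a whole can be reviewed later. The stage names, fields, and the loan-decision usage at the end are assumptions made for this sketch.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

# Illustrative stages of an ADM process; these names are assumptions for the
# sketch, not the paper's taxonomy.
STAGES = ("commissioning", "model_building", "deployment", "decision", "post_decision")

@dataclass
class ReviewRecord:
    """One record-keeping entry tied to an element of the ADM process."""
    stage: str            # which element of the process this documents
    actor: str            # human or technical component responsible
    description: str      # what happened (e.g. data source chosen, model deployed)
    details: dict = field(default_factory=dict)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ADMReviewLog:
    """Append-only log supporting review of individual decisions and of the process."""

    def __init__(self) -> None:
        self._records: list[ReviewRecord] = []

    def record(self, stage: str, actor: str, description: str, **details: Any) -> None:
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self._records.append(ReviewRecord(stage, actor, description, details))

    def review_decision(self, decision_id: str) -> list[ReviewRecord]:
        """Everything recorded about one individual decision."""
        return [r for r in self._records if r.details.get("decision_id") == decision_id]

    def review_process(self, stage: str | None = None) -> list[ReviewRecord]:
        """The process as a whole, optionally filtered to one stage."""
        return [r for r in self._records if stage is None or r.stage == stage]

# Hypothetical usage in a loan-decision scenario:
log = ADMReviewLog()
log.record("model_building", "data-science team", "trained credit model v3", dataset="loans-2020")
log.record("decision", "credit model v3", "application refused", decision_id="A-1024", score=0.31)
log.record("post_decision", "case handler", "manual review requested", decision_id="A-1024")
print(len(log.review_decision("A-1024")))  # -> 2
```

The point of the sketch is that the recorded elements span organisational steps (commissioning, manual review) as well as technical ones (model version, score), which is what distinguishes reviewability from purely model-centric explanation.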
Related papers
- Aiding Humans in Financial Fraud Decision Making: Toward an XAI-Visualization Framework [6.040452803295326]
Financial fraud investigators face the challenge of manually synthesizing vast amounts of unstructured information.
Current Visual Analytics systems primarily support isolated aspects of this process.
We propose a framework where the VA system supports decision makers throughout all stages of financial fraud investigation.
arXiv Detail & Related papers (2024-08-26T18:10:07Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Accountability in Offline Reinforcement Learning: Explaining Decisions with a Corpus of Examples [70.84093873437425]
This paper introduces the Accountable Offline Controller (AOC) that employs the offline dataset as the Decision Corpus.
AOC operates effectively in low-data scenarios, can be extended to the strictly offline imitation setting, and displays qualities of both conservation and adaptability.
We assess AOC's performance in both simulated and real-world healthcare scenarios, emphasizing its capability to manage offline control tasks with high levels of performance while maintaining accountability. (An illustrative corpus-lookup sketch appears after this list.)
arXiv Detail & Related papers (2023-10-11T17:20:32Z)
- Rational Decision-Making Agent with Internalized Utility Judgment [91.80700126895927]
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications.
This paper proposes RadAgent, which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning.
Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks.
arXiv Detail & Related papers (2023-08-24T03:11:45Z)
- The Conflict Between Explainable and Accountable Decision-Making Algorithms [10.64167691614925]
Decision-making algorithms are being used in important decisions, such as who should be enrolled in health care programs and who should be hired.
The XAI initiative aims to make algorithms explainable in order to comply with legal requirements, promote trust, and maintain accountability.
This paper questions whether and to what extent explainability can help solve the responsibility issues posed by autonomous AI systems.
arXiv Detail & Related papers (2022-05-11T07:19:28Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Ethics-Based Auditing of Automated Decision-Making Systems: Intervention Points and Policy Implications [0.0]
This article outlines the conditions under which ethics-based auditing (EBA) procedures can be feasible and effective in practice.
We frame ADMS as parts of larger socio-technical systems to demonstrate that to be feasible and effective, EBA procedures must link to intervention points.
arXiv Detail & Related papers (2021-11-08T10:57:26Z)
- "A cold, technical decision-maker": Can AI provide explainability, negotiability, and humanity? [47.36687555570123]
We present results of a qualitative study of algorithmic decision-making, comprising five workshops conducted with a total of 60 participants.
We discuss participants' consideration of humanity in decision-making, and introduce the concept of 'negotiability,' the ability to go beyond formal criteria and work flexibly around the system.
arXiv Detail & Related papers (2020-12-01T22:36:54Z)
- Contestable Black Boxes [10.552465253379134]
This paper investigates the type of assurances that are needed in the contesting process when algorithmic black-boxes are involved.
We argue that specialised, complementary methodologies need to be developed to evaluate automated decision-making when a particular decision is contested.
arXiv Detail & Related papers (2020-06-09T09:09:00Z)
- Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing [8.155332346712424]
We introduce a framework for algorithmic auditing that supports artificial intelligence system development end-to-end.
The proposed auditing framework is intended to close the accountability gap in the development and deployment of large-scale artificial intelligence systems.
arXiv Detail & Related papers (2020-01-03T20:19:04Z)
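To make the "corpus of examples" idea from the Accountable Offline Reinforcement Learning entry above more concrete, here is a minimal sketch under assumed simplifications. It is not the authors' AOC algorithm, only a hypothetical nearest-neighbour lookup over an offline dataset that returns the past cases supporting a suggested action, so a reviewer can inspect the evidence behind a decision.

```python
import numpy as np

class DecisionCorpus:
    """Hypothetical corpus lookup: stores offline (state, action) examples and
    retrieves the most similar past cases for a queried state, so a suggested
    decision can be justified by concrete examples. Not the AOC method itself."""

    def __init__(self, states: np.ndarray, actions: np.ndarray) -> None:
        self.states = states      # shape (n, d): observed states from the offline dataset
        self.actions = actions    # shape (n,): actions taken in those states

    def supporting_examples(self, query_state: np.ndarray, k: int = 5):
        """Return indices and actions of the k nearest recorded states."""
        dists = np.linalg.norm(self.states - query_state, axis=1)
        idx = np.argsort(dists)[:k]
        return idx, self.actions[idx]

    def suggest_action(self, query_state: np.ndarray, k: int = 5):
        """Majority action among the k nearest examples, plus the evidence behind it."""
        idx, acts = self.supporting_examples(query_state, k)
        values, counts = np.unique(acts, return_counts=True)
        return values[np.argmax(counts)], idx

# Example with synthetic data: 100 two-dimensional states with binary actions.
rng = np.random.default_rng(0)
corpus = DecisionCorpus(rng.normal(size=(100, 2)), rng.integers(0, 2, size=100))
action, evidence = corpus.suggest_action(np.array([0.1, -0.2]))
print(action, evidence)  # chosen action and the indices of the supporting examples
```

Returning the supporting indices alongside the suggested action is what makes the output reviewable: each suggestion can be traced back to concrete recorded examples.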