Ethics-Based Auditing of Automated Decision-Making Systems: Intervention
Points and Policy Implications
- URL: http://arxiv.org/abs/2111.04380v1
- Date: Mon, 8 Nov 2021 10:57:26 GMT
- Title: Ethics-Based Auditing of Automated Decision-Making Systems: Intervention
Points and Policy Implications
- Authors: Jakob Mökander, Maria Axente
- Abstract summary: This article outlines the conditions under which ethics-based auditing (EBA) procedures can be feasible and effective in practice.
We frame ADMS as parts of larger socio-technical systems to demonstrate that to be feasible and effective, EBA procedures must link to intervention points.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Organisations increasingly use automated decision-making systems (ADMS) to
inform decisions that affect humans and their environment. While the use of
ADMS can improve the accuracy and efficiency of decision-making processes, it
is also coupled with ethical challenges. Unfortunately, the governance
mechanisms currently used to oversee human decision-making often fail when
applied to ADMS. In previous work, we proposed that ethics-based auditing
(EBA), i.e. a structured process by which ADMS are assessed for consistency
with relevant principles or norms, can (a) help organisations verify claims
about their ADMS and (b) provide decision-subjects with justifications for the
outputs produced by ADMS. In this article, we outline the conditions under
which EBA procedures can be feasible and effective in practice. First, we argue
that EBA is best understood as a 'soft' yet 'formal' governance mechanism. This
implies that the main responsibility of auditors should be to spark ethical
deliberation at key intervention points throughout the software development
process and ensure that there is sufficient documentation to respond to
potential inquiries. Second, we frame ADMS as parts of larger socio-technical
systems to demonstrate that to be feasible and effective, EBA procedures must
link to intervention points that span all levels of organisational governance
and all phases of the software lifecycle. The main function of EBA should
therefore be to inform, formalise, assess, and interlink existing governance
structures. Finally, we discuss the policy implications of our findings. To
support the emergence of feasible and effective EBA procedures, policymakers
and regulators could provide standardised reporting formats, facilitate
knowledge exchange, provide guidance on how to resolve normative tensions, and
create an independent body to oversee EBA of ADMS.
Related papers
- A five-layer framework for AI governance: integrating regulation, standards, and certification [0.6875312133832078]
The governance of artificial intelligence (AI) systems requires a structured approach that connects high-level regulatory principles with practical implementation.
Existing frameworks lack clarity on how regulations translate into conformity mechanisms, leading to gaps in compliance and enforcement.
A five-layer AI governance framework is proposed, spanning from broad regulatory mandates to specific standards, assessment methodologies, and certification processes.
arXiv Detail & Related papers (2025-09-14T16:19:08Z) - Can AI be Auditable? [3.0260353258798625]
Auditability is the capacity of AI systems to be independently assessed for compliance with ethical, legal, and technical standards.
The chapter explores how auditability is being formalized through emerging regulatory frameworks, such as the EU AI Act.
It analyzes the challenges facing AI auditability, including technical opacity, inconsistent documentation practices, and a lack of standardized audit tools and metrics.
arXiv Detail & Related papers (2025-08-30T18:03:20Z) - Toward a Theory of Agents as Tool-Use Decision-Makers [89.26889709510242]
We argue that true autonomy requires agents to be grounded in a coherent epistemic framework that governs what they know, what they need to know, and how to acquire that knowledge efficiently.
We propose a unified theory that treats internal reasoning and external actions as equivalent epistemic tools, enabling agents to systematically coordinate introspection and interaction.
This perspective shifts the design of agents from mere action executors to knowledge-driven intelligence systems, offering a principled path toward building foundation agents capable of adaptive, efficient, and goal-directed behavior.
arXiv Detail & Related papers (2025-06-01T07:52:16Z) - MSDA: Combining Pseudo-labeling and Self-Supervision for Unsupervised Domain Adaptation in ASR [59.83547898874152]
We introduce a sample-efficient, two-stage adaptation approach that integrates self-supervised learning with semi-supervised techniques.
MSDA is designed to enhance the robustness and generalization of ASR models.
We demonstrate that Meta PL can be applied effectively to ASR tasks, achieving state-of-the-art results.
arXiv Detail & Related papers (2025-05-30T14:46:05Z) - Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems.
Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z) - Semantic Integrity Constraints: Declarative Guardrails for AI-Augmented Data Processing Systems [39.23499993745249]
We introduce semantic integrity constraints (SICs) for specifying and enforcing correctness conditions over LLM outputs in semantic queries.
SICs generalize traditional database integrity constraints to semantic settings, supporting common types of constraints, such as grounding, soundness, and exclusion.
We present a system design for integrating SICs into query planning and runtime and discuss its realization in AI-augmented DPSs.
arXiv Detail & Related papers (2025-03-01T19:59:25Z) - RegNLP in Action: Facilitating Compliance Through Automated Information Retrieval and Answer Generation [51.998738311700095]
Regulatory documents, characterized by their length, complexity and frequent updates, are challenging to interpret.
RegNLP is a multidisciplinary subfield aimed at simplifying access to and interpretation of regulatory rules and obligations.
The ObliQA dataset contains 27,869 questions derived from the Abu Dhabi Global Markets (ADGM) financial regulation document collection.
arXiv Detail & Related papers (2024-09-09T14:44:19Z) - Operationalising AI governance through ethics-based auditing: An industry case study [0.0]
Ethics-based auditing (EBA) is a structured process whereby an entity's past or present behaviour is assessed for consistency with moral principles or norms.
This article provides a detailed description of the organisational context in which EBA procedures must be integrated to be feasible and effective.
arXiv Detail & Related papers (2024-07-07T12:22:38Z) - ABI Approach: Automatic Bias Identification in Decision-Making Under Risk based in an Ontology of Behavioral Economics [46.57327530703435]
Risk-seeking preferences for losses, driven by biases such as loss aversion, pose challenges and can result in severe negative consequences.
This research introduces the ABI approach, a novel solution designed to support organizational decision-makers by automatically identifying and explaining risk-seeking preferences.
arXiv Detail & Related papers (2024-05-22T23:53:46Z) - The Foundations of Computational Management: A Systematic Approach to
Task Automation for the Integration of Artificial Intelligence into Existing
Workflows [55.2480439325792]
This article introduces Computational Management, a systematic approach to task automation.
The article offers three easy step-by-step procedures to begin the process of implementing AI within a workflow.
arXiv Detail & Related papers (2024-02-07T01:45:14Z) - Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Accountability in Offline Reinforcement Learning: Explaining Decisions
with a Corpus of Examples [70.84093873437425]
This paper introduces the Accountable Offline Controller (AOC) that employs the offline dataset as the Decision Corpus.
AOC operates effectively in low-data scenarios, can be extended to the strictly offline imitation setting, and displays qualities of both conservation and adaptability.
We assess AOC's performance in both simulated and real-world healthcare scenarios, emphasizing its capability to manage offline control tasks with high levels of performance while maintaining accountability.
arXiv Detail & Related papers (2023-10-11T17:20:32Z) - Rational Decision-Making Agent with Internalized Utility Judgment [91.80700126895927]
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications.
This paper proposes RadAgent, which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning.
Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks.
arXiv Detail & Related papers (2023-08-24T03:11:45Z) - Organizational Governance of Emerging Technologies: AI Adoption in
Healthcare [43.02293389682218]
The Health AI Partnership aims to better define the requirements for adequate organizational governance of AI systems in healthcare settings.
This is one of the most detailed qualitative analyses to date of the current governance structures and processes involved in AI adoption by health systems in the United States.
We hope these findings can inform future efforts to build capabilities to promote the safe, effective, and responsible adoption of emerging technologies in healthcare.
arXiv Detail & Related papers (2023-04-25T18:30:47Z) - Exploring the Relevance of Data Privacy-Enhancing Technologies for AI
Governance Use Cases [1.5293427903448022]
It is useful to view different AI governance objectives as a system of information flows.
The importance of interoperability between these different AI governance solutions becomes clear.
arXiv Detail & Related papers (2023-03-15T21:56:59Z) - Inverse Online Learning: Understanding Non-Stationary and Reactionary
Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z) - Ethics-Based Auditing of Automated Decision-Making Systems: Nature,
Scope, and Limitations [1.2599533416395765]
Delegating tasks to automated decision-making systems (ADMS) can improve efficiency and enable new solutions.
For example, ADMS may produce discriminatory outcomes, violate individual privacy, and undermine human self-determination.
New governance mechanisms are thus needed that help organisations design and deploy ADMS in ways that are ethical.
arXiv Detail & Related papers (2021-10-21T08:51:28Z) - Reviewable Automated Decision-Making: A Framework for Accountable
Algorithmic Systems [1.7403133838762448]
This paper introduces reviewability as a framework for improving the accountability of automated and algorithmic decision-making (ADM).
We draw on an understanding of ADM as a socio-technical process involving both human and technical elements, beginning before a decision is made and extending beyond the decision itself.
We argue that a reviewability framework, drawing on administrative law's approach to reviewing human decision-making, offers a practical way forward towards a more holistic and legally relevant form of accountability for ADM.
arXiv Detail & Related papers (2021-01-26T18:15:34Z) - Closing the AI Accountability Gap: Defining an End-to-End Framework for
Internal Algorithmic Auditing [8.155332346712424]
We introduce a framework for algorithmic auditing that supports artificial intelligence system development end-to-end.
The proposed auditing framework is intended to close the accountability gap in the development and deployment of large-scale artificial intelligence systems.
arXiv Detail & Related papers (2020-01-03T20:19:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.