Five policy uses of algorithmic transparency and explainability
- URL: http://arxiv.org/abs/2302.03080v2
- Date: Fri, 15 Sep 2023 03:46:30 GMT
- Title: Five policy uses of algorithmic transparency and explainability
- Authors: Matthew O'Shaughnessy
- Abstract summary: We provide case studies illustrating five ways in which algorithmic transparency and explainability have been used in policy settings: specific requirements for explanations; nonbinding guidelines for internal governance of algorithms; regulations applicable to highly regulated settings; guidelines meant to increase the utility of legal liability for algorithms; and broad requirements for model and data transparency.
The case studies span a spectrum from precise requirements for specific types of explanations to nonspecific requirements focused on broader notions of transparency.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The notion that algorithmic systems should be "transparent" and "explainable"
is common in the many statements of consensus principles developed by
governments, companies, and advocacy organizations. But what exactly do policy
and legal actors want from these technical concepts, and how do their
desiderata compare with the explainability techniques developed in the machine
learning literature? In hopes of better connecting the policy and technical
communities, we provide case studies illustrating five ways in which
algorithmic transparency and explainability have been used in policy settings:
specific requirements for explanations; nonbinding guidelines for internal
governance of algorithms; regulations applicable to highly regulated
settings; guidelines meant to increase the utility of legal liability for
algorithms; and broad requirements for model and data transparency. The case
studies span a spectrum from precise requirements for specific types of
explanations to nonspecific requirements focused on broader notions of
transparency, illustrating the diverse needs, constraints, and capacities of
various policy actors and contexts. Drawing on these case studies, we discuss
promising ways in which transparency and explanation could be used in policy,
as well as common factors limiting policymakers' use of algorithmic
explainability. We conclude with recommendations for researchers and
policymakers.
Related papers
- Policy-as-Prompt: Rethinking Content Moderation in the Age of Large Language Models [10.549072684871478]
This paper formalises the emerging policy-as-prompt framework and identifies five key challenges across four domains.
It lays the groundwork for future exploration of scalable and adaptive content moderation systems in digital ecosystems.
arXiv Detail & Related papers (2025-02-25T23:15:16Z)
- Explainability-Driven Quality Assessment for Rule-Based Systems [0.7303392100830282]
This paper introduces an explanation framework designed to enhance the quality of rules in knowledge-based reasoning systems.
It generates explanations of rule inferences and leverages human interpretation to refine rules.
Its practicality is demonstrated through a use case in finance.
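As a purely illustrative sketch of what such rule-inference explanations might look like (our own toy, not the paper's framework; all names are hypothetical), a forward-chaining step can record, for each fired rule, the facts that triggered it:

```python
# Illustrative sketch only, not the paper's framework: a minimal
# forward-chaining step that records, for each fired rule, the facts
# that triggered it, so a human reviewer can inspect and refine rules.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    conditions: frozenset   # facts that must all hold for the rule to fire
    conclusion: str

def infer_with_explanations(rules, facts):
    """Fire every rule whose conditions hold; return (derived facts, explanations)."""
    explanations = []
    derived = set(facts)
    for rule in rules:
        if rule.conditions <= derived:
            derived.add(rule.conclusion)
            explanations.append(
                f"{rule.conclusion} because rule '{rule.name}' fired on "
                f"{sorted(rule.conditions)}"
            )
    return derived, explanations

# Hypothetical finance-flavored example, echoing the paper's use case.
rules = [Rule("high-risk", frozenset({"low_credit_score", "high_debt"}), "flag_loan")]
facts = {"low_credit_score", "high_debt"}
print(infer_with_explanations(rules, facts)[1])
# ["flag_loan because rule 'high-risk' fired on ['high_debt', 'low_credit_score']"]
```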
arXiv Detail & Related papers (2025-02-03T11:26:09Z)
- Few-shot Policy (de)composition in Conversational Question Answering [54.259440408606515]
We propose a neuro-symbolic framework to detect policy compliance using large language models (LLMs) in a few-shot setting.
We show that our approach soundly reasons about policy compliance conversations by extracting sub-questions to be answered, assigning truth values from contextual information, and explicitly producing a set of logic statements from the given policies.
We apply this approach to ShARC, a popular policy compliance detection (PCD) and conversational machine reading benchmark, and show competitive performance with no task-specific finetuning.
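A minimal sketch of the recipe this summary describes (sub-question extraction, truth-value assignment, logical composition), with the LLM decomposition and answering steps stubbed out; the names are hypothetical, not the paper's API:

```python
# Illustrative sketch of neuro-symbolic policy-compliance checking, not the
# paper's implementation. An LLM would decompose the policy into sub-questions
# and answer each from context; here both steps are stubbed.
from typing import Callable

def check_compliance(policy: str, context: str,
                     decompose: Callable[[str], list],
                     answer: Callable[[str, str], bool]) -> bool:
    sub_questions = decompose(policy)                              # LLM: policy -> sub-questions
    truth_values = {q: answer(q, context) for q in sub_questions}  # LLM: assign truth values
    # Explicit logic statement: compliance here is the conjunction of all
    # sub-answers; a real system would parse the policy's and/or structure.
    return all(truth_values.values())

# Hypothetical stubs standing in for few-shot LLM prompts.
decompose = lambda policy: ["Is the applicant over 18?", "Is the applicant a resident?"]
answers = {"Is the applicant over 18?": True, "Is the applicant a resident?": True}
print(check_compliance("Applicants must be adult residents.",
                       "age: 25; address: local",
                       decompose, lambda q, ctx: answers[q]))  # True
```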
arXiv Detail & Related papers (2025-01-20T08:40:15Z)
- Explainability in AI Based Applications: A Framework for Comparing Different Techniques [2.5874041837241304]
In business applications, the challenge lies in selecting an appropriate explainability method that balances comprehensibility with accuracy.
This paper proposes a novel method for assessing the agreement of different explainability techniques.
By providing a practical framework for understanding the agreement of diverse explainability techniques, our research aims to facilitate the broader integration of interpretable AI systems in business applications.
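One way to make "agreement" concrete (our illustrative choice of metric, not necessarily the paper's) is rank correlation between the per-feature attributions produced by two methods:

```python
# Sketch: measure agreement between two explainability techniques as the
# Spearman rank correlation of their per-feature attributions. This is an
# illustrative choice of metric, not necessarily the paper's.
import numpy as np
from scipy.stats import spearmanr

def attribution_agreement(attr_a: np.ndarray, attr_b: np.ndarray) -> float:
    """attr_a, attr_b: per-feature attribution scores from two methods."""
    rho, _ = spearmanr(attr_a, attr_b)
    return rho

shap_like = np.array([0.40, 0.10, 0.30, 0.05])   # e.g. SHAP values
lime_like = np.array([0.35, 0.20, 0.25, 0.02])   # e.g. LIME weights
print(attribution_agreement(shap_like, lime_like))  # 1.0: identical feature rankings
```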
arXiv Detail & Related papers (2024-10-28T09:45:34Z)
- Leveraging Counterfactual Paths for Contrastive Explanations of POMDP Policies [2.4332936182093197]
XAI aims to reduce confusion and foster trust in systems by providing explanations of agent behavior.
POMDPs provide a flexible framework capable of reasoning over transition and state uncertainty.
This work investigates the use of user-provided counterfactuals to generate contrastive explanations of POMDP policies.
arXiv Detail & Related papers (2024-03-28T18:19:38Z)
- Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK [1.5039745292757671]
We perform the first thematic and gap analysis of policies and standards on explainability in the EU, US, and UK.
We find that policies are often informed by coarse notions and requirements for explanations.
We propose recommendations on how to address explainability in regulations for AI systems.
arXiv Detail & Related papers (2023-04-20T07:53:07Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
This is partly because a clear ideal of AI transparency goes unstated in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- Bounded Robustness in Reinforcement Learning via Lexicographic Objectives [54.00072722686121]
Policy robustness in Reinforcement Learning may be desirable, but not at any cost.
We study how policies can be made maximally robust to arbitrary observational noise.
We propose a robustness-inducing scheme, applicable to any policy algorithm, that trades off expected policy utility for robustness.
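Read as a lexicographic objective, the trade-off might work like this: among policies whose utility is within a slack epsilon of the optimum, prefer the more robust one. A toy sketch under that reading, not the paper's algorithm:

```python
# Toy sketch of a lexicographic robustness trade-off, under our own reading
# of the summary (not the paper's algorithm): first require near-optimal
# expected utility, then, within a slack of epsilon, prefer robustness.
def lexicographic_score(utility: float, robustness: float,
                        best_utility: float, epsilon: float) -> tuple:
    """Rank policies by (utility bucket, robustness): a policy counts as
    'good enough' if its utility is within epsilon of the best; among
    good-enough policies, higher robustness wins."""
    good_enough = utility >= best_utility - epsilon
    return (good_enough, robustness, utility)

candidates = [
    {"name": "fragile-optimal", "utility": 1.00, "robustness": 0.2},
    {"name": "robust-near-opt", "utility": 0.97, "robustness": 0.9},
]
best = max(candidates,
           key=lambda p: lexicographic_score(p["utility"], p["robustness"],
                                             best_utility=1.00, epsilon=0.05))
print(best["name"])  # robust-near-opt
```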
arXiv Detail & Related papers (2022-09-30T08:53:18Z)
- Transparency, Compliance, And Contestability When Code Is(n't) Law [91.85674537754346]
Both technical security mechanisms and legal processes serve as mechanisms to deal with misbehaviour according to a set of norms.
While they share general similarities, there are also clear differences in how they are defined, how they act, and the effects they have on subjects.
This paper considers the similarities and differences between both types of mechanisms as ways of dealing with misbehaviour.
arXiv Detail & Related papers (2022-05-08T18:03:07Z)
- Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts [12.552080951754963]
Existing and planned legislation stipulates various obligations to provide information about machine learning algorithms.
Many researchers suggest using post-hoc explanation algorithms for this purpose.
We show that post-hoc explanation algorithms are unsuitable to achieve the law's objectives.
arXiv Detail & Related papers (2022-01-25T13:12:02Z)
- Is Disentanglement all you need? Comparing Concept-based & Disentanglement Approaches [24.786152654589067]
We give an overview of concept-based explanations and disentanglement approaches.
We show that state-of-the-art approaches from both classes can be data inefficient, sensitive to the specific nature of the classification/regression task, or sensitive to the employed concept representation.
arXiv Detail & Related papers (2021-04-14T15:06:34Z)
- Dimensions of Transparency in NLP Applications [64.16277166331298]
Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested that greater system transparency can come at the cost of increased user confusion.
arXiv Detail & Related papers (2021-01-02T11:46:17Z)
- Privacy-Constrained Policies via Mutual Information Regularized Policy Gradients [54.98496284653234]
We consider the task of training a policy that maximizes reward while minimizing disclosure of certain sensitive state variables through the actions.
We solve this problem by introducing a regularizer based on the mutual information between the sensitive state and the actions.
We develop a model-based estimator for optimization of privacy-constrained policies.
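The regularized objective the summary describes can be written as J(θ) = E[R] - λ·I(S_sensitive; A). Below is a toy plug-in estimate of the penalty for discrete states and actions; the paper develops a model-based estimator, so this is an illustrative stand-in only:

```python
# Sketch of a mutual-information penalty for privacy-constrained policies.
# Uses a simple plug-in estimate of I(S_sensitive; A) from empirical counts;
# the paper's estimator is model-based, so this is an illustrative stand-in.
import numpy as np

def mutual_information(joint: np.ndarray) -> float:
    """joint[s, a]: empirical joint distribution over sensitive state and action."""
    joint = joint / joint.sum()
    p_s = joint.sum(axis=1, keepdims=True)
    p_a = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (p_s @ p_a)[mask])).sum())

def regularized_objective(expected_return: float, joint: np.ndarray,
                          lam: float = 0.1) -> float:
    # J(theta) = E[R] - lambda * I(S_sensitive; A)
    return expected_return - lam * mutual_information(joint)

# Actions that reveal the sensitive state (diagonal joint) are penalized.
revealing = np.array([[0.5, 0.0], [0.0, 0.5]])   # MI = log 2
private   = np.array([[0.25, 0.25], [0.25, 0.25]])  # MI = 0
print(regularized_objective(1.0, revealing), regularized_objective(1.0, private))
```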
arXiv Detail & Related papers (2020-12-30T03:22:35Z)
- Preventing Imitation Learning with Adversarial Policy Ensembles [79.81807680370677]
Imitation learning can reproduce policies by observing experts, which poses a problem regarding policy privacy.
How can we protect against external observers cloning our proprietary policies?
We introduce a new reinforcement learning framework, where we train an ensemble of near-optimal policies.
arXiv Detail & Related papers (2020-01-31T01:57:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.