Reasons, Values, Stakeholders: A Philosophical Framework for Explainable
Artificial Intelligence
- URL: http://arxiv.org/abs/2103.00752v1
- Date: Mon, 1 Mar 2021 04:50:31 GMT
- Title: Reasons, Values, Stakeholders: A Philosophical Framework for Explainable
Artificial Intelligence
- Authors: Atoosa Kasirzadeh
- Abstract summary: This paper offers a multi-faceted framework that brings more conceptual precision to the present debate.
It identifies the types of explanations that are most pertinent to artificial intelligence predictions.
It also recognizes the relevance and importance of social and ethical values for the evaluation of these explanations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The societal and ethical implications of the use of opaque artificial
intelligence systems for consequential decisions, such as welfare allocation
and criminal justice, have generated a lively debate among multiple stakeholder
groups, including computer scientists, ethicists, social scientists, policy
makers, and end users. However, the lack of a common language or a
multi-dimensional framework to appropriately bridge the technical, epistemic,
and normative aspects of this debate prevents the discussion from being as
productive as it could be. Drawing on the philosophical literature on the
nature and value of explanations, this paper offers a multi-faceted framework
that brings more conceptual precision to the present debate by (1) identifying
the types of explanations that are most pertinent to artificial intelligence
predictions, (2) recognizing the relevance and importance of social and ethical
values for the evaluation of these explanations, and (3) demonstrating the
importance of these explanations for incorporating a diversified approach to
improving the design of truthful algorithmic ecosystems. The proposed
philosophical framework thus lays the groundwork for establishing a pertinent
connection between the technical and ethical aspects of artificial intelligence
systems.
Related papers
- On the Multiple Roles of Ontologies in Explainable AI [0.32634122554913997]
This paper discusses the different roles that explicit knowledge, in particular ontologies, can play in Explainable AI.
We consider three main perspectives: reference modelling, common-sense reasoning, and knowledge refinement and complexity management.
arXiv Detail & Related papers (2023-11-08T15:57:26Z)
- A Unifying Framework for Learning Argumentation Semantics [50.69905074548764]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, thus opening up new research directions in the area of formal argumentation and human-machine dialogues.
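To make the notion of acceptability semantics concrete, here is a minimal Python sketch that computes the grounded extension of a toy Dung-style abstract argumentation framework. It illustrates what an acceptability semantics assigns, not the paper's ILP-based learner; the arguments and attack relation are invented for illustration.

```python
# Minimal sketch: grounded extension of an abstract argumentation framework.
# This is a generic illustration of acceptability semantics, not the
# ILP-based learning approach described in the paper.

def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers have all been defeated."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            # An argument becomes acceptable once every attacker is defeated.
            if attackers <= defeated:
                accepted.add(a)
                changed = True
        # Arguments attacked by an accepted argument are defeated.
        newly_defeated = {y for (x, y) in attacks if x in accepted} - defeated
        if newly_defeated:
            defeated |= newly_defeated
            changed = True
    return accepted

# Hypothetical example: a attacks b, b attacks c.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))  # {'a', 'c'}
```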
arXiv Detail & Related papers (2023-10-18T20:18:05Z)
- A multidomain relational framework to guide institutional AI research and adoption [0.0]
We argue that research efforts aimed at understanding the implications of adopting AI tend to prioritize only a handful of ideas.
We propose a simple policy and research design tool in the form of a conceptual framework to organize terms across fields.
arXiv Detail & Related papers (2023-03-17T16:33:01Z)
- Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication [5.742215677251865]
We apply social systems theory to highlight challenges in explainable artificial intelligence.
We aim to reinvigorate technical research on interactive and iterative explainers.
arXiv Detail & Related papers (2023-02-07T13:31:02Z)
- A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities [8.17368686298331]
Robustness of Artificial Intelligence (AI) systems remains elusive and constitutes a key issue that impedes large-scale adoption.
We introduce three concepts to organize and describe the literature both from a fundamental and applied point of view.
We highlight the central role of humans in evaluating and enhancing AI robustness, considering the necessary knowledge humans can provide.
arXiv Detail & Related papers (2022-10-17T10:00:51Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles that AI regulation should take to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Contextualizing Artificially Intelligent Morality: A Meta-Ethnography of Top-Down, Bottom-Up, and Hybrid Models for Theoretical and Applied Ethics in Artificial Intelligence [0.0]
In this meta-ethnography, we explore three different angles of ethical artificial intelligence (AI) design implementation.
The novel contribution to this framework is the political angle, in which ethics in AI is either determined by corporations and governments and imposed through policies or law (coming from the top), or emerging from the bottom up.
There is a focus on reinforcement learning as an example of a bottom-up applied technical approach and AI ethics principles as a practical top-down approach.
arXiv Detail & Related papers (2022-04-15T18:47:49Z)
- Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
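As a rough illustration of the general idea of a counterfactual explanation (a minimal sketch only; this is not the CEILS latent-space method, and the scoring model, features, and threshold below are invented), one can greedily search for a small feature change that pushes a model's score past the decision threshold:

```python
import numpy as np

def simple_counterfactual(prob, x, target=0.5, step=0.05, max_iter=500):
    """Greedy sketch: nudge one feature at a time until the model's score
    for the desired class crosses `target`. Returns the changed input."""
    cf = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        if prob(cf) >= target:
            return cf
        # Evaluate every single-feature move and keep the one that raises
        # the score the most.
        candidates = []
        for i in range(len(cf)):
            for delta in (step, -step):
                trial = cf.copy()
                trial[i] += delta
                candidates.append((prob(trial), trial))
        cf = max(candidates, key=lambda c: c[0])[1]
    return None

# Hypothetical scoring model: sigmoid of (income - debt).
prob = lambda v: 1.0 / (1.0 + np.exp(-(v[0] - v[1])))
x = [0.3, 0.6]                                # currently scored below 0.5
cf = simple_counterfactual(prob, x)
print(np.round(cf, 2), "vs original", x)      # e.g. income nudged upward
```

The returned vector reads as "had these features taken these values, the decision would have flipped"; feasibility constraints on the allowed moves are exactly what approaches such as CEILS aim to address.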
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
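One common way a knowledge base can guide neural learning is to turn a symbolic rule into a penalty added to the data loss, so that background knowledge steers training. The sketch below illustrates only that generic pattern; the rule, data, and crude random-search optimizer are made up and are not taken from the paper.

```python
import numpy as np

# Sketch: combine a data-fit loss with a penalty for violating a symbolic
# rule. The rule used here ("if feature 'has_wheels' is 1, the 'vehicle'
# class should score high") is a made-up knowledge-base constraint.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, X, y, lam=1.0):
    p = sigmoid(X @ w)
    data_loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    # Knowledge penalty: where has_wheels (column 0) == 1, push p toward 1.
    mask = X[:, 0] == 1
    rule_loss = np.mean((1.0 - p[mask]) ** 2) if mask.any() else 0.0
    return data_loss + lam * rule_loss

# Tiny synthetic batch and a crude random-search "training" loop, just to
# show the combined objective being minimized.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(32, 3)).astype(float)
y = (X[:, 0] == 1).astype(float)              # labels agree with the rule here
w = np.zeros(3)
for _ in range(200):
    cand = w + 0.1 * rng.standard_normal(3)
    if loss(cand, X, y) < loss(w, X, y):
        w = cand
print("final combined loss:", round(loss(w, X, y), 3))
```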
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.