On the Multiple Roles of Ontologies in Explainable AI
- URL: http://arxiv.org/abs/2311.04778v1
- Date: Wed, 8 Nov 2023 15:57:26 GMT
- Title: On the Multiple Roles of Ontologies in Explainable AI
- Authors: Roberto Confalonieri and Giancarlo Guizzardi
- Abstract summary: This paper discusses the different roles that explicit knowledge, in particular ontologies, can play in Explainable AI.
We consider three main perspectives in which ontologies can contribute significantly, namely reference modelling, common-sense reasoning, and knowledge refinement and complexity management.
- Score: 0.32634122554913997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper discusses the different roles that explicit knowledge, in
particular ontologies, can play in Explainable AI and in the development of
human-centric explainable systems and intelligible explanations. We consider
three main perspectives in which ontologies can contribute significantly,
namely reference modelling, common-sense reasoning, and knowledge refinement
and complexity management. We overview some of the existing approaches in the
literature, and we position them according to these three proposed
perspectives. The paper concludes by discussing what challenges still need to
be addressed to enable ontology-based approaches to explanation and to evaluate
their human-understandability and effectiveness.
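To make the knowledge refinement and complexity management perspective concrete, the sketch below (not taken from the paper) shows one way an ontology-style is-a hierarchy can simplify a feature-level explanation: several low-level features are replaced by their least common subsumer, yielding a shorter and more abstract explanation. The hierarchy, the feature names, and the subsumer-based abstraction step are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: a toy "is-a" hierarchy used to abstract a
# feature-level explanation into a more general concept. The hierarchy and
# the feature names below are hypothetical, not from the paper.

IS_A = {
    "poodle": "dog",
    "beagle": "dog",
    "siamese": "cat",
    "dog": "pet",
    "cat": "pet",
    "pet": "entity",
}

def ancestors(concept):
    """Return the concept followed by its ancestors up to the root."""
    chain = [concept]
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

def least_common_subsumer(concepts):
    """Most specific concept in the hierarchy that subsumes all inputs."""
    common = set(ancestors(concepts[0]))
    for c in concepts[1:]:
        common &= set(ancestors(c))
    # The deepest shared ancestor is the most specific common subsumer.
    return max(common, key=lambda c: len(ancestors(c)))

# A hypothetical low-level explanation produced by some feature-attribution method.
explanation_features = ["poodle", "beagle", "siamese"]

# Abstracting it with the ontology yields a single, more intelligible concept.
print(least_common_subsumer(explanation_features))  # -> "pet"
```

In an actual ontology-based explanation pipeline, the same idea would rely on a reasoner over a formal ontology rather than a hand-written dictionary; the sketch only illustrates the abstraction step.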
Related papers
- A Mechanistic Explanatory Strategy for XAI [0.0]
This paper outlines a mechanistic strategy for explaining the functional organization of deep learning systems.
According to the mechanistic approach, the explanation of opaque AI systems involves identifying mechanisms that drive decision-making.
This research suggests that a systematic approach to studying model organization can reveal elements that simpler (or "more modest") explainability techniques might miss.
arXiv Detail & Related papers (2024-11-02T18:30:32Z) - Reasoning with Natural Language Explanations [15.281385727331473]
Explanation constitutes an archetypal feature of human rationality, underpinning learning and generalisation.
An increasing amount of research in Natural Language Inference (NLI) has started reconsidering the role that explanations play in learning and inference.
arXiv Detail & Related papers (2024-10-05T13:15:24Z) - Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z) - A Survey of Reasoning with Foundation Models [235.7288855108172]
Reasoning plays a pivotal role in various real-world settings such as negotiation, medical diagnosis, and criminal investigation.
We introduce seminal foundation models proposed or adaptable for reasoning.
We then delve into the potential future directions behind the emergence of reasoning abilities within foundation models.
arXiv Detail & Related papers (2023-12-17T15:16:13Z) - Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication [5.742215677251865]
We apply social systems theory to highlight challenges in explainable artificial intelligence.
We aim to reinvigorate the technical research in the direction of interactive and iterative explainers.
arXiv Detail & Related papers (2023-02-07T13:31:02Z) - A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities [8.17368686298331]
Robustness of Artificial Intelligence (AI) systems remains elusive and constitutes a key issue that impedes large-scale adoption.
We introduce three concepts to organize and describe the literature both from a fundamental and applied point of view.
We highlight the central role of humans in evaluating and enhancing AI robustness, considering the necessary knowledge humans can provide.
arXiv Detail & Related papers (2022-10-17T10:00:51Z) - Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z) - Scientia Potentia Est -- On the Role of Knowledge in Computational Argumentation [52.903665881174845]
We propose a pyramid of types of knowledge required in computational argumentation.
We briefly discuss the state of the art on the role and integration of these types in the field.
arXiv Detail & Related papers (2021-07-01T08:12:41Z) - Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence [0.0]
This paper offers a multi-faceted framework that brings more conceptual precision to the present debate.
It identifies the types of explanations that are most pertinent to artificial intelligence predictions.
It also recognizes the relevance and importance of social and ethical values for the evaluation of these explanations.
arXiv Detail & Related papers (2021-03-01T04:50:31Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)