A Critical Review of Inductive Logic Programming Techniques for
Explainable AI
- URL: http://arxiv.org/abs/2112.15319v1
- Date: Fri, 31 Dec 2021 06:34:32 GMT
- Title: A Critical Review of Inductive Logic Programming Techniques for
Explainable AI
- Authors: Zheng Zhang, Levent Yilmaz and Bo Liu
- Abstract summary: Inductive Logic Programming (ILP) is a subfield of symbolic artificial intelligence.
ILP generates explainable first-order clausal theories from examples and background knowledge.
Existing ILP systems often have a vast solution space, and the induced solutions are highly sensitive to noise and disturbances.
- Score: 9.028858411921906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite recent advances in modern machine learning algorithms, the opaqueness
of their underlying mechanisms continues to be an obstacle in adoption. To
instill confidence and trust in artificial intelligence systems, Explainable
Artificial Intelligence has emerged as a response, aiming to improve the
explainability of modern machine learning algorithms. Inductive Logic Programming (ILP), a
subfield of symbolic artificial intelligence, plays a promising role in
generating interpretable explanations because of its intuitive logic-driven
framework. ILP effectively leverages abductive reasoning to generate
explainable first-order clausal theories from examples and background
knowledge. However, several challenges in developing methods inspired by ILP
need to be addressed for their successful application in practice. For example,
existing ILP systems often have a vast solution space, and the induced
solutions are highly sensitive to noise and disturbances. This survey
summarizes recent advances in ILP and discusses statistical relational
learning and neural-symbolic algorithms, which offer synergistic views of
ILP. Following a critical review of these advances, we delineate
observed challenges and highlight potential avenues of further ILP-motivated
research toward developing self-explanatory artificial intelligence systems.
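To make the ILP setting described in the abstract concrete, the following is a minimal, hypothetical sketch of a generate-and-test ILP learner in Python: background knowledge and examples are given as ground facts, the hypothesis space is a small pool of candidate body literals over the head variables, and the learner returns the first clause body that covers every positive example and no negative one. The predicates (parent, female, daughter), the data, and the clause template are illustrative assumptions, not taken from the paper.

```python
# Toy generate-and-test ILP sketch (illustrative assumptions, not the paper's method).
from itertools import combinations

# Background knowledge: ground facts, grouped by predicate.
background = {
    "parent": {("ann", "mary"), ("ann", "tom"), ("tom", "eve")},
    "female": {("mary",), ("ann",), ("eve",)},
}

# Labelled examples for the target predicate daughter(X, Y).
positives = {("mary", "ann"), ("eve", "tom")}
negatives = {("tom", "ann"), ("eve", "ann")}

# Candidate body literals over the head variables X and Y.
candidate_literals = [
    ("female", ("X",)),
    ("female", ("Y",)),
    ("parent", ("X", "Y")),
    ("parent", ("Y", "X")),
]

def holds(literal, subst):
    """A body literal is true if its grounding appears in the background facts."""
    pred, args = literal
    return tuple(subst[a] for a in args) in background[pred]

def covers(body, example):
    """daughter(X, Y) :- body covers (x, y) if every body literal holds under {X: x, Y: y}."""
    subst = {"X": example[0], "Y": example[1]}
    return all(holds(lit, subst) for lit in body)

def induce(max_body_len=2):
    """Return the first clause body that covers all positives and no negatives."""
    for size in range(1, max_body_len + 1):
        for body in combinations(candidate_literals, size):
            if all(covers(body, e) for e in positives) and \
               not any(covers(body, e) for e in negatives):
                return body
    return None

if __name__ == "__main__":
    body = induce()
    if body is not None:
        printed = ", ".join(f"{p}({', '.join(a)})" for p, a in body)
        print(f"daughter(X, Y) :- {printed}.")
    else:
        print("No consistent clause found in the candidate space.")
```

On this toy data the search returns daughter(X, Y) :- female(X), parent(Y, X), illustrating how ILP yields human-readable first-order clauses. Practical ILP systems replace this brute-force enumeration with principled search over a much larger hypothesis space and must tolerate mislabelled examples, which is precisely the solution-space and noise-sensitivity issue the abstract raises.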
Related papers
- A Mechanistic Explanatory Strategy for XAI [0.0]
This paper outlines a mechanistic strategy for explaining the functional organization of deep learning systems.
According to the mechanistic approach, the explanation of opaque AI systems involves identifying mechanisms that drive decision-making.
This research suggests that a systematic approach to studying model organization can reveal elements that simpler (or "more modest") explainability techniques might miss.
arXiv Detail & Related papers (2024-11-02T18:30:32Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Explainable Artificial Intelligence Techniques for Accurate Fault Detection and Diagnosis: A Review [0.0]
We review the eXplainable AI (XAI) tools and techniques in this context.
We focus on their role in making AI decision-making transparent, particularly in critical scenarios where humans are involved.
We discuss current limitations and potential future research that aims to balance explainability with model performance.
arXiv Detail & Related papers (2024-04-17T17:49:38Z)
- Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z)
- Towards Cognitive AI Systems: a Survey and Prospective on Neuro-Symbolic AI [33.0761784111292]
Neuro-symbolic AI (NSAI) emerges as a promising paradigm to enhance interpretability, robustness, and trustworthiness.
Recent NSAI systems have demonstrated great potential in collaborative human-AI scenarios with reasoning and cognitive capabilities.
arXiv Detail & Related papers (2024-01-02T05:00:54Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Vision Paper: Causal Inference for Interpretable and Robust Machine Learning in Mobility Analysis [71.2468615993246]
Building intelligent transportation systems requires an intricate combination of artificial intelligence and mobility analysis.
The past few years have seen rapid development in transportation applications using advanced deep neural networks.
This vision paper emphasizes research challenges in deep learning-based mobility analysis that require interpretability and robustness.
arXiv Detail & Related papers (2022-10-18T17:28:58Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)