Enriching Artificial Intelligence Explanations with Knowledge Fragments
- URL: http://arxiv.org/abs/2204.05579v1
- Date: Tue, 12 Apr 2022 07:19:30 GMT
- Title: Enriching Artificial Intelligence Explanations with Knowledge Fragments
- Authors: Jože M. Rožanec, Elena Trajkova, Inna Novalija, Patrik Zajec, Klemen Kenda, Blaž Fortuna, Dunja Mladenić
- Abstract summary: This research builds explanations considering feature rankings for a particular forecast, enriching them with media news entries, datasets' metadata, and entries from the Google Knowledge Graph.
We compare two approaches (embeddings-based and semantic-based) on a real-world use case regarding demand forecasting.
- Score: 0.415623340386296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence models are increasingly used in manufacturing to
inform decision-making. Responsible decision-making requires accurate forecasts
and an understanding of the models' behavior. Furthermore, the insights into
models' rationale can be enriched with domain knowledge. This research builds
explanations considering feature rankings for a particular forecast, enriching
them with media news entries, datasets' metadata, and entries from the Google
Knowledge Graph. We compare two approaches (embeddings-based and
semantic-based) on a real-world use case regarding demand forecasting.
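The abstract does not include code, so the sketch below is only a hedged illustration of how the two approaches could link a forecast's top-ranked feature to candidate knowledge fragments (news headlines, dataset metadata, Knowledge Graph entries): the embeddings-based route via sentence-embedding cosine similarity, the semantic-based route via a WordNet similarity. The feature name, candidate texts, and model choice are assumptions, not the authors' setup.

```python
# Hypothetical sketch: match a top-ranked feature against candidate
# knowledge fragments with (1) embeddings and (2) semantic similarity.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
from nltk.corpus import wordnet as wn                   # pip install nltk; nltk.download("wordnet")

feature = "holiday season"                              # assumed top-ranked feature
candidates = [
    "Retailers expect record demand over the holiday season",   # media news entry
    "Dataset: daily product demand, 2018-2021, one warehouse",  # dataset metadata
    "Black Friday - annual shopping event",                     # Google KG entry
]

# (1) Embeddings-based: cosine similarity of sentence embeddings.
model = SentenceTransformer("all-MiniLM-L6-v2")
vecs = model.encode([feature] + candidates)
f, C = vecs[0], vecs[1:]
scores = C @ f / (np.linalg.norm(C, axis=1) * np.linalg.norm(f))
print(dict(zip(candidates, scores.round(3))))

# (2) Semantic-based: WordNet (Wu-Palmer) similarity between terms.
def wn_sim(a: str, b: str) -> float:
    sa, sb = wn.synsets(a), wn.synsets(b)
    if not sa or not sb:
        return 0.0
    return max((x.wup_similarity(y) or 0.0) for x in sa for y in sb)

print("semantic similarity holiday/shopping:", round(wn_sim("holiday", "shopping"), 3))
```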
Related papers
- XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which highlight feature or temporal importance, often require expert knowledge to interpret.
However, evaluating forecast NLEs (natural language explanations) is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - Evaluating Explainability in Machine Learning Predictions through Explainer-Agnostic Metrics [0.0]
We develop six distinct model-agnostic metrics designed to quantify the extent to which model predictions can be explained.
These metrics measure different aspects of model explainability, spanning local importance, global importance, and surrogate predictions.
We demonstrate the practical utility of these metrics on classification and regression tasks, and integrate these metrics into an existing Python package for public use.
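The six metrics are not spelled out in this summary, so the sketch below shows only one plausible shape such an explainer-agnostic metric could take: an entropy-based concentration score over permutation importances, where a model that leans on few features is treated as easier to explain. The formulation is an assumption, not one of the paper's metrics.

```python
# Illustrative explainer-agnostic metric (NOT one of the paper's six):
# how concentrated a model's permutation importances are.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = make_regression(n_samples=300, n_features=8, n_informative=3, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

imp = permutation_importance(model, X, y, n_repeats=10, random_state=0).importances_mean
p = np.clip(imp, 0, None)
p = p / p.sum()                                  # normalize to a distribution
entropy = -(p * np.log(p + 1e-12)).sum()
score = 1 - entropy / np.log(len(p))             # 1 = importance on one feature
print(f"concentration score: {score:.2f}")
```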
arXiv Detail & Related papers (2023-02-23T15:28:36Z) - Knowledge-based XAI through CBR: There is more to explanations than
models can tell [0.0]
We propose to use domain knowledge to complement the data used by data-centric artificial intelligence agents.
We formulate knowledge-based explainable artificial intelligence as a supervised data classification problem aligned with the CBR methodology.
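As a minimal sketch of that CBR framing, assuming a case base of past forecasts stored as feature vectors paired with curated explanations, a new case can be classified by retrieving its nearest explained neighbors; the case base, features, and labels below are invented for illustration.

```python
# Hypothetical CBR-style retrieval: reuse the explanation label of the
# most similar past cases for a new, unexplained forecast.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

past_cases = np.array([[0.9, 0.1], [0.8, 0.2],    # assumed case base
                       [0.1, 0.9], [0.2, 0.7]])
explanations = ["promo-driven spike", "promo-driven spike",
                "seasonal dip", "seasonal dip"]

cbr = KNeighborsClassifier(n_neighbors=3).fit(past_cases, explanations)
query = np.array([[0.85, 0.15]])                  # new case's feature profile
print(cbr.predict(query)[0])                      # -> "promo-driven spike"
```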
arXiv Detail & Related papers (2021-08-23T19:01:43Z) - XAI-KG: knowledge graph to support XAI and decision-making in
manufacturing [0.5872014229110215]
We propose an ontology and knowledge graph to support collecting feedback regarding forecasts, forecast explanations, recommended decision-making options, and user actions.
This way, we provide means to improve forecasting models, explanations, and recommendations of decision-making options.
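The entry only names the ontology's scope, so the triples below are a hedged sketch, written with rdflib and an invented namespace, of the kind of feedback such a knowledge graph could record about a forecast, its explanation, and a recommended action.

```python
# Minimal sketch with invented terms -- not the actual XAI-KG ontology.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/xai-kg#")      # hypothetical namespace
g = Graph()
g.add((EX.forecast42, RDF.type, EX.DemandForecast))
g.add((EX.forecast42, EX.hasExplanation, EX.expl42))
g.add((EX.expl42, EX.topFeature, Literal("holiday season")))
g.add((EX.expl42, EX.userRating, Literal(4, datatype=XSD.integer)))
g.add((EX.forecast42, EX.recommendedAction, EX.increaseStock))
print(g.serialize(format="turtle"))
```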
arXiv Detail & Related papers (2021-05-05T08:42:07Z) - Semantic XAI for contextualized demand forecasting explanations [0.9137554315375922]
The paper proposes a novel architecture for explainable AI based on semantic technologies.
We tailor the architecture for the domain of demand forecasting and validate it on a real-world case study.
arXiv Detail & Related papers (2021-04-01T13:08:53Z) - Actionable Cognitive Twins for Decision Making in Manufacturing [1.372026330898297]
Actionable Cognitive Twins are the next generation of Digital Twins, enhanced with cognitive capabilities.
A knowledge graph provides semantic descriptions and contextualization of the production lines and processes.
A systems-thinking approach is proposed to design and develop the knowledge graph and build an actionable twin.
arXiv Detail & Related papers (2021-03-23T21:32:07Z) - Forethought and Hindsight in Credit Assignment [62.05690959741223]
We work to understand the gains and peculiarities of planning employed as forethought via forward models or as hindsight operating with backward models.
We investigate the best use of models in planning, primarily focusing on the selection of states in which predictions should be (re)-evaluated.
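The summary leaves the mechanics implicit; the toy sketch below, which is not the paper's algorithm, contrasts the two directions on a 5-state chain: forethought updates a state from its forward-model successor, while hindsight pushes a state's new value back to its backward-model predecessors.

```python
# Toy contrast: forward-model (forethought) vs backward-model (hindsight)
# value updates on a chain 0 -> 1 -> 2 -> 3 -> 4, reward on reaching 4.
V = {s: 0.0 for s in range(5)}
forward = {s: s + 1 for s in range(4)}            # forward model: s -> s+1
backward = {s1: [s0 for s0, s2 in forward.items() if s2 == s1] for s1 in V}
reward, gamma = {4: 1.0}, 0.9

def forethought(s):                               # simulate ahead from s
    s_next = forward.get(s)
    if s_next is not None:
        V[s] = reward.get(s_next, 0.0) + gamma * V[s_next]

def hindsight(s):                                 # update predecessors of s
    for s_prev in backward[s]:
        V[s_prev] = reward.get(s, 0.0) + gamma * V[s]

for _ in range(5):                                # forward planning sweeps
    for s in reversed(range(5)):
        forethought(s)
hindsight(4)                                      # one backward update
print(V)                                          # value spreads from state 4
```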
arXiv Detail & Related papers (2020-10-26T16:00:47Z) - A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
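That agreement computation is easy to make concrete; the sketch below scores an explainer's saliency values against binary human rationale annotations with ROC-AUC, one plausible agreement measure (the tokens, scores, and annotations are invented).

```python
# Hedged sketch: agreement between saliency scores and human rationales.
from sklearn.metrics import roc_auc_score

tokens   = ["the", "movie", "was", "absolutely", "wonderful"]
saliency = [0.05, 0.40, 0.10, 0.70, 0.95]   # some explainer's scores
human    = [0, 0, 0, 1, 1]                  # human-annotated salient tokens

print(f"rationale agreement (AUC): {roc_auc_score(human, saliency):.2f}")
```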
arXiv Detail & Related papers (2020-09-25T12:01:53Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
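How the knowledge base guides learning is not detailed here; one common, hedged reading is a symbolic rule added as a penalty to the training loss, sketched below in PyTorch with an invented mutual-exclusivity rule between two classes.

```python
# Illustrative neuro-symbolic penalty (an assumption, not the paper's method):
# task loss plus a term penalizing violations of a symbolic rule.
import torch
import torch.nn.functional as F

logits = torch.randn(8, 4, requires_grad=True)    # stand-in network outputs
labels = torch.randint(0, 4, (8,))
probs = torch.sigmoid(logits)                     # per-class scores

task_loss = F.cross_entropy(logits, labels)
# Invented rule: classes 0 ("indoor") and 1 ("outdoor") are mutually
# exclusive, so their joint probability should stay near zero.
rule_penalty = (probs[:, 0] * probs[:, 1]).mean()

loss = task_loss + 0.5 * rule_penalty
loss.backward()                                   # gradients now reflect the rule
print(float(task_loss), float(rule_penalty))
```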
arXiv Detail & Related papers (2020-03-09T15:04:07Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)