XAI-KG: knowledge graph to support XAI and decision-making in manufacturing
- URL: http://arxiv.org/abs/2105.01929v2
- Date: Thu, 6 May 2021 03:41:32 GMT
- Title: XAI-KG: knowledge graph to support XAI and decision-making in manufacturing
- Authors: Jože M. Rožanec, Patrik Zajec, Klemen Kenda, Inna Novalija, Blaž Fortuna, Dunja Mladenić
- Abstract summary: We propose an ontology and knowledge graph to support collecting feedback regarding forecasts, forecast explanations, recommended decision-making options, and user actions.
This way, we provide means to improve forecasting models, explanations, and recommendations of decision-making options.
- Score: 0.5872014229110215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing adoption of artificial intelligence requires accurate forecasts and means to understand the reasoning of artificial intelligence models behind such forecasts. Explainable Artificial Intelligence (XAI) aims to provide cues for why a model issued a certain prediction. Such cues are of utmost importance to decision-making, since they provide insight into the features that most influenced a forecast and let the user decide whether the forecast can be trusted. Though many techniques have been developed to explain black-box models, little research has been done on assessing the quality of those explanations and their influence on decision-making. We propose an ontology and knowledge graph to support collecting feedback regarding forecasts, forecast explanations, recommended decision-making options, and user actions. This way, we provide means to improve forecasting models, explanations, and recommendations of decision-making options. We tailor the knowledge graph for the domain of demand forecasting and validate it on real-world data.
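To make the idea concrete, here is a minimal sketch, assuming rdflib, of how feedback on a forecast, its explanation, and a user action could be recorded as triples. The namespace and the class and property names (DemandForecast, ForecastExplanation, UserFeedback, etc.) are illustrative placeholders, not the paper's actual ontology.

```python
# Minimal sketch (NOT the paper's ontology): forecast, explanation, and
# user feedback linked as knowledge-graph resources with rdflib.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/xai-kg/")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

forecast = EX["forecast/123"]
explanation = EX["explanation/123"]
feedback = EX["feedback/123"]

g.add((forecast, RDF.type, EX.DemandForecast))
g.add((forecast, EX.predictedDemand, Literal(418, datatype=XSD.integer)))
g.add((explanation, RDF.type, EX.ForecastExplanation))
g.add((explanation, EX.explains, forecast))
g.add((explanation, EX.topFeature, Literal("promotion_active")))
g.add((feedback, RDF.type, EX.UserFeedback))
g.add((feedback, EX.about, explanation))
g.add((feedback, EX.rating, Literal(4, datatype=XSD.integer)))
g.add((feedback, EX.actionTaken, Literal("increased_order_quantity")))

print(g.serialize(format="turtle"))
```

Storing forecasts, explanations, and feedback as linked resources is what makes the abstract's goal possible: later queries can relate explanation quality to the actions users actually took.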
Related papers
- XForecast: Evaluating Natural Language Explanations for Time Series Forecasting [72.57427992446698]
Time series forecasting aids decision-making, especially for stakeholders who rely on accurate predictions.
Traditional explainable AI (XAI) methods, which highlight feature or temporal importance, often require expert knowledge.
Evaluating forecast NLEs (natural language explanations) is difficult due to the complex causal relationships in time series data.
arXiv Detail & Related papers (2024-10-18T05:16:39Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis [6.606409729669314]
Post-hoc explainability methods aim to clarify predictions of black-box machine learning models.
We conduct a user study to evaluate comprehensibility and predictability in two widely used tools: LIME and SHAP.
We find that the comprehensibility of SHAP is significantly reduced when explanations are provided for samples near a model's decision boundary.
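As a rough illustration of the setup described above (not the study's actual materials), the sketch below computes SHAP values for the samples closest to a classifier's decision boundary, the region where the study reports SHAP's comprehensibility degrades; the model, data, and margin proxy are all assumptions.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data and model standing in for the study's setup.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Proxy for distance to the decision boundary: |P(y=1) - 0.5|.
margin = np.abs(model.predict_proba(X)[:, 1] - 0.5)
near_boundary = X[np.argsort(margin)[:20]]  # the 20 "hardest" samples

explainer = shap.TreeExplainer(model)
# Depending on the shap version, this is a per-class list or a 3-D array
# of per-feature attributions for each near-boundary sample.
shap_values = explainer.shap_values(near_boundary)
```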
arXiv Detail & Related papers (2023-09-21T11:54:20Z)
- Characterizing the contribution of dependent features in XAI methods [6.990173577370281]
We propose a proxy that modifies the outcome of any XAI feature-ranking method, allowing it to account for the dependency among the predictors.
The proposed approach is model-agnostic and simple to compute, quantifying the impact of each predictor on the model in the presence of collinearity.
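The summary does not spell out the proxy itself, so the following is only a toy sketch of the general idea under an assumed mechanism: redistributing a raw importance vector across correlated predictors using the absolute correlation matrix.

```python
import numpy as np

def dependency_adjusted_importance(importances, X):
    """Redistribute raw importances using the absolute correlation matrix
    as a crude stand-in for the dependency structure among predictors."""
    corr = np.abs(np.corrcoef(X, rowvar=False))       # (p, p) dependency proxy
    weights = corr / corr.sum(axis=1, keepdims=True)  # row-normalise shares
    return weights.T @ importances                    # spread credit

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 0.05 * rng.normal(size=200), rng.normal(size=200)])
raw = np.array([0.9, 0.0, 0.1])  # e.g., a tree model credits only feature 0
print(dependency_adjusted_importance(raw, X))  # the correlated twin gains credit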
arXiv Detail & Related papers (2023-04-04T11:25:57Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining, from the infinitely many predictions the agent could possibly make, which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
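A heavily simplified toy of this two-level idea (not the paper's algorithm): an inner update fits each candidate prediction, while a meta-gradient step adjusts softmax attention over candidates according to how much each one reduces the downstream decision error.

```python
import numpy as np

rng = np.random.default_rng(0)
n_candidates, steps = 5, 3000
w = rng.normal(0.0, 0.1, size=n_candidates)  # per-candidate prediction weights
attn_logits = np.zeros(n_candidates)         # meta-parameters: which predictions matter

for t in range(steps):
    x = rng.normal(size=n_candidates)           # candidate signals
    target = 2.0 * x[0] + rng.normal(0.0, 0.1)  # only candidate 0 is predictive
    preds = w * x                               # each candidate's prediction
    # Inner update: each candidate regresses the target on its own signal.
    w += 0.05 * (target - preds) * x
    # Meta update: gradient of the squared decision error with respect to
    # the softmax attention over candidate predictions.
    attn = np.exp(attn_logits) / np.exp(attn_logits).sum()
    decision = attn @ preds
    err = target - decision
    attn_logits += 0.01 * err * attn * (preds - decision)

print(np.round(attn, 2))  # attention concentrates on the useful prediction
```

Attention drifts toward candidate 0, the only signal correlated with the target, mirroring the idea of learning which predictions are worth making rather than specifying them by hand.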
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Enriching Artificial Intelligence Explanations with Knowledge Fragments [0.415623340386296]
This research builds explanations considering feature rankings for a particular forecast, enriching them with media news entries, datasets' metadata, and entries from the Google Knowledge Graph.
We compare two approaches (embeddings-based and semantic-based) on a real-world use case regarding demand forecasting.
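A minimal sketch of the embeddings-based variant, with TF-IDF standing in for whatever embedding model the paper actually used (the summary does not name it), ranking candidate news entries against an explanation's wording:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical explanation text and candidate media news entries.
explanation = "demand spike driven by promotion and holiday season"
news = [
    "Retailers launch holiday promotions earlier this year",
    "Steel prices climb amid supply disruptions",
    "Holiday season shopping expected to break records",
]

vec = TfidfVectorizer().fit(news + [explanation])
sims = cosine_similarity(vec.transform([explanation]), vec.transform(news))[0]
for score, entry in sorted(zip(sims, news), reverse=True):
    print(f"{score:.2f}  {entry}")  # most similar entries enrich the explanation
```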
arXiv Detail & Related papers (2022-04-12T07:19:30Z)
- Finding Useful Predictions by Meta-gradient Descent to Improve Decision-making [1.384055225262046]
We focus on predictions expressed as General Value Functions: temporally extended estimates of the accumulation of a future signal.
One challenge is determining, from the infinitely many predictions the agent could possibly make, which might support decision-making.
By learning, rather than manually specifying these predictions, we enable the agent to identify useful predictions in a self-supervised manner.
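For reference, a General Value Function in the standard formulation (as in Sutton et al.'s Horde work, not specific to this paper) is the expected discounted accumulation of a cumulant signal C under a policy pi and continuation function gamma:

```latex
v_{\pi,\gamma,C}(s) = \mathbb{E}_{\pi}\!\left[ \sum_{k=0}^{\infty}
  \Big( \prod_{j=1}^{k} \gamma(S_{t+j}) \Big) C_{t+k+1} \;\Big|\; S_t = s \right]
```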
arXiv Detail & Related papers (2021-11-18T20:17:07Z)
- Semantic XAI for contextualized demand forecasting explanations [0.9137554315375922]
The paper proposes a novel architecture for explainable AI based on semantic technologies and AI.
We tailor the architecture for the domain of demand forecasting and validate it on a real-world case study.
arXiv Detail & Related papers (2021-04-01T13:08:53Z)
- When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making [68.19284302320146]
We carry out user studies to assess how people with differing levels of expertise respond to different types of predictive uncertainty.
We found that showing posterior predictive distributions led to smaller disagreements with the ML model's predictions.
This suggests that posterior predictive distributions can serve as useful decision aids, though they should be used with caution, taking into account the type of distribution and the expertise of the user.
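As a sketch of the kind of decision aid studied here, the snippet below approximates a predictive distribution with a bootstrap ensemble; the study itself may have used a fully Bayesian model, so treat this as an assumed stand-in.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=300)

# Bootstrap ensemble: each member sees a resampled dataset.
ensemble = []
for i in range(50):
    Xb, yb = resample(X, y, random_state=i)
    ensemble.append(GradientBoostingRegressor(random_state=i).fit(Xb, yb))

# Show a distribution over predictions rather than a single point estimate.
x_new = np.array([[1.5]])
draws = np.array([m.predict(x_new)[0] for m in ensemble])
lo, hi = np.percentile(draws, [5, 95])
print(f"prediction {draws.mean():.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")
```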
arXiv Detail & Related papers (2020-11-12T02:23:53Z)
- Forethought and Hindsight in Credit Assignment [62.05690959741223]
We work to understand the gains and peculiarities of planning employed as forethought via forward models or as hindsight operating with backward models.
We investigate the best use of models in planning, primarily focusing on the selection of states in which predictions should be (re)-evaluated.
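A tiny illustrative contrast (not the paper's experiments) on a deterministic chain: forethought imagines a rollout with a forward model, while hindsight uses a backward model to sweep value from the goal to its predecessors in one pass.

```python
import numpy as np

n_states, gamma = 6, 0.9
reward = np.zeros(n_states); reward[-1] = 1.0        # reward on reaching the goal
forward = {s: s + 1 for s in range(n_states - 1)}    # forward model: s -> s'
backward = {s + 1: s for s in range(n_states - 1)}   # backward model: s' -> s

# Forethought: imagine a rollout with the forward model from state 0 and
# accumulate the discounted return it predicts.
s, g, ret = 0, 1.0, 0.0
while s in forward:
    s = forward[s]
    ret += g * reward[s]
    g *= gamma
print("imagined return from state 0:", round(ret, 4))  # 0.9**4 = 0.6561

# Hindsight: after reaching the goal, use the backward model to push value
# to all predecessors in a single backward sweep.
V = np.zeros(n_states)
s = n_states - 1
while s in backward:
    p = backward[s]
    V[p] = reward[s] + gamma * V[s]
    s = p
print("values after one backward sweep:", np.round(V, 4))
```

Both arrive at 0.9^4 ≈ 0.656 for the start state, but the backward sweep updates every predecessor at once, which is the appeal of hindsight credit assignment.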
arXiv Detail & Related papers (2020-10-26T16:00:47Z)
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations via robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
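The actual method is based on adversarial robustness analysis; as a loose, assumption-laden sketch of the extraction step described in the last sentence, the greedy search below picks the features whose mean-imputation most moves a prediction toward a target class.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
x = X[0].copy()            # a class-0 (setosa) sample
target_class = 1           # move the prediction toward versicolor
baseline = X.mean(axis=0)  # perturbation: replace features by the mean

chosen = []
for _ in range(X.shape[1]):
    scores = {}
    for f in set(range(X.shape[1])) - set(chosen):
        trial = x.copy()
        trial[chosen + [f]] = baseline[chosen + [f]]
        scores[f] = model.predict_proba(trial.reshape(1, -1))[0, target_class]
    chosen.append(max(scores, key=scores.get))
    trial = x.copy()
    trial[chosen] = baseline[chosen]
    if model.predict(trial.reshape(1, -1))[0] == target_class:
        break  # this feature set suffices to reach the target class

print("features moving the prediction to the target class:", chosen)
```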
arXiv Detail & Related papers (2020-05-31T05:52:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all generated summaries) and is not responsible for any consequences arising from its use.