Semantic XAI for contextualized demand forecasting explanations
- URL: http://arxiv.org/abs/2104.00452v1
- Date: Thu, 1 Apr 2021 13:08:53 GMT
- Title: Semantic XAI for contextualized demand forecasting explanations
- Authors: Jože M. Rožanec and Dunja Mladenić
- Abstract summary: The paper proposes a novel architecture for explainable AI based on semantic technologies and AI.
We tailor the architecture for the domain of demand forecasting and validate it on a real-world case study.
- Score: 0.9137554315375922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper proposes a novel architecture for explainable AI based on semantic technologies and AI. We tailor the architecture for the domain of demand forecasting and validate it on a real-world case study. The provided explanations combine concepts describing features relevant to a particular forecast, related media events, and metadata regarding external datasets of interest. The knowledge graph provides concepts that convey feature information at a higher abstraction level. By using them, explanations do not expose sensitive details regarding the demand forecasting models. The explanations also emphasize actionable dimensions where suitable. We link domain knowledge, forecasted values, and forecast explanations in a Knowledge Graph. The ontology and dataset we developed for this use case are publicly available for further research.
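As a rough sketch of the linking the abstract describes, the snippet below assembles a toy knowledge graph that connects a forecast, concept-level explanation elements, a media event, and an external dataset, using rdflib. The EX namespace and every class and property name (DemandForecast, hasExplanation, mentionsConcept, and so on) are invented for illustration; they are not the paper's published ontology.

```python
# Minimal sketch: link a forecast, its explanation, and domain context
# in a Knowledge Graph. All names in the EX namespace are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/demand-xai#")
g = Graph()
g.bind("ex", EX)

forecast = EX["forecast/sku42/2021-04-01"]
explanation = EX["explanation/sku42/2021-04-01"]

# The forecasted value for a product and date.
g.add((forecast, RDF.type, EX.DemandForecast))
g.add((forecast, EX.predictedValue, Literal(128.0, datatype=XSD.double)))
g.add((forecast, EX.hasExplanation, explanation))

# The explanation refers to concepts rather than raw model features,
# so it conveys feature information at a higher abstraction level
# without exposing sensitive model details.
g.add((explanation, RDF.type, EX.ForecastExplanation))
g.add((explanation, EX.mentionsConcept, EX.Seasonality))
g.add((explanation, EX.mentionsConcept, EX.PromotionActivity))  # actionable

# Contextual links: a related media event and an external dataset.
g.add((explanation, EX.relatedMediaEvent, EX["event/holiday-announcement"]))
g.add((explanation, EX.referencesDataset, EX["dataset/regional-weather"]))

print(g.serialize(format="turtle"))
```

Because the explanation points at concepts such as Seasonality rather than model internals, consumers of the graph see why demand moved without learning how the forecasting model works.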
Related papers
- LinkLogic: A New Method and Benchmark for Explainable Knowledge Graph Predictions [0.5999777817331317]
We present an in-depth exploration of a simple link prediction explanation method we call LinkLogic.
We construct the first-ever link prediction explanation benchmark, based on family structures present in the FB13 dataset.
arXiv Detail & Related papers (2024-06-02T20:22:22Z)
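The entry above concerns explaining link predictions; as a loose illustration of the path-based idea (not the LinkLogic method itself), the sketch below explains a predicted family-relation edge by listing alternative paths that connect the same entity pair in a toy graph. Entities and relations are made up, in the spirit of FB13's family domain.

```python
import networkx as nx

# Toy family knowledge graph with hypothetical entities and relations.
G = nx.DiGraph()
G.add_edge("anna", "ben", relation="parent_of")
G.add_edge("ben", "carl", relation="parent_of")
G.add_edge("anna", "carl", relation="grandparent_of")  # predicted link

# Crude path-based rationale: remove the predicted edge, then list the
# remaining paths between the same pair as supporting evidence.
G.remove_edge("anna", "carl")
for path in nx.all_simple_paths(G, "anna", "carl", cutoff=3):
    rels = [G.edges[u, v]["relation"] for u, v in zip(path, path[1:])]
    print(" -> ".join(path), "via", rels)  # anna -> ben -> carl
```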
- Counterfactual Explanations for Deep Learning-Based Traffic Forecasting [42.31238891397725]
This study aims to leverage an Explainable AI approach, counterfactual explanations, to enhance the explainability and usability of deep learning-based traffic forecasting models.
The study first implements a deep learning model to predict traffic speed based on historical traffic data and contextual variables.
Counterfactual explanations are then used to illuminate how alterations in these input variables affect predicted outcomes.
arXiv Detail & Related papers (2024-05-01T11:26:31Z)
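The counterfactual question in the entry above (how must the inputs change for the prediction to change?) can be sketched without any deep-learning machinery. Below, a naive random search looks for a small perturbation of contextual variables that raises a black-box regressor's predicted traffic speed to a target; the model and variable names are placeholders, not the paper's setup.

```python
import numpy as np

def predict_speed(x: np.ndarray) -> float:
    """Placeholder for a trained traffic-speed model (e.g., a deep net)."""
    return 60.0 - 2.0 * x[0] - 5.0 * x[1]  # toy: flow and rain slow traffic

def counterfactual(x0, target, trials=5000, scale=0.5, seed=0):
    """Random search for the smallest sampled input change that pushes
    the predicted speed to at least `target`."""
    rng = np.random.default_rng(seed)
    best, best_dist = None, np.inf
    for _ in range(trials):
        x = x0 + rng.normal(0.0, scale, size=x0.shape)
        if predict_speed(x) >= target:
            dist = np.linalg.norm(x - x0)
            if dist < best_dist:
                best, best_dist = x, dist
    return best

x0 = np.array([5.0, 1.0])  # [traffic flow, rainfall], toy units
cf = counterfactual(x0, target=48.0)
print("original prediction:", predict_speed(x0))  # 45.0
print("counterfactual inputs:", cf)
```

Real counterfactual methods optimize this search (for example, with gradients) and add plausibility constraints, but the contrastive logic is the same.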
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Robust Ante-hoc Graph Explainer using Bilevel Optimization [0.7999703756441758]
We propose RAGE, a novel and flexible ante-hoc explainer for graph neural networks.
RAGE can effectively identify molecular substructures that contain the full information needed for prediction.
Our experiments on various molecular classification tasks show that RAGE explanations are better than existing post-hoc and ante-hoc approaches.
arXiv Detail & Related papers (2023-05-25T05:50:38Z)
- Citation Trajectory Prediction via Publication Influence Representation Using Temporal Knowledge Graph [52.07771598974385]
Existing approaches mainly rely on mining temporal and graph data from academic articles.
Our framework is composed of three modules: difference-preserved graph embedding, fine-grained influence representation, and learning-based trajectory calculation.
Experiments are conducted on both the APS academic dataset and our contributed AIPatent dataset.
arXiv Detail & Related papers (2022-10-02T07:43:26Z)
- Enriching Artificial Intelligence Explanations with Knowledge Fragments [0.415623340386296]
This research builds explanations considering feature rankings for a particular forecast, enriching them with media news entries, datasets' metadata, and entries from the Google Knowledge Graph.
We compare two approaches (embeddings-based and semantic-based) on a real-world use case regarding demand forecasting.
arXiv Detail & Related papers (2022-04-12T07:19:30Z)
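As a toy stand-in for the embeddings-based variant the entry above compares, the sketch below links a forecast's top-ranked features (phrased as text) to candidate news headlines via TF-IDF cosine similarity with scikit-learn. All strings are hypothetical; a real system would use proper sentence embeddings and sources such as the Google Knowledge Graph.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Top-ranked features for a forecast, phrased as text (hypothetical).
feature_texts = ["national holiday next week", "promotion discount active"]

# Candidate media news entries for enrichment (hypothetical).
headlines = [
    "Retailers announce spring discount campaigns",
    "Public holiday moved to next Monday",
    "Local football team wins championship",
]

# Fit one vocabulary over both sets, then attach each feature to its
# most similar headline as a knowledge fragment for the explanation.
vec = TfidfVectorizer().fit(feature_texts + headlines)
sims = cosine_similarity(vec.transform(feature_texts), vec.transform(headlines))
for feat, row in zip(feature_texts, sims):
    print(feat, "->", headlines[int(row.argmax())])
```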
- XAI-KG: knowledge graph to support XAI and decision-making in manufacturing [0.5872014229110215]
We propose an ontology and knowledge graph to support collecting feedback regarding forecasts, forecast explanations, recommended decision-making options, and user actions.
This way, we provide means to improve forecasting models, explanations, and recommendations of decision-making options.
arXiv Detail & Related papers (2021-05-05T08:42:07Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
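The agreement computation described in the entry above can be illustrated in a few lines: treat human-annotated salient tokens as binary labels and a technique's saliency scores as a ranking, then score the ranking against the annotation (average precision here). The sentence, annotations, and scores below are made up.

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Hypothetical sentence; humans marked which tokens are salient (1).
tokens = ["the", "movie", "was", "absolutely", "wonderful", "tonight"]
human = np.array([0, 0, 0, 1, 1, 0])

# Saliency scores assigned by some explainability technique (made up).
saliency = np.array([0.05, 0.80, 0.10, 0.70, 0.90, 0.15])

# Average precision asks: do the highest-scored tokens coincide with
# the human-annotated ones? Here "movie" outranks "absolutely", so the
# agreement is imperfect (about 0.83).
print("agreement (AP):", average_precision_score(human, saliency))
```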
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.