Explainable Human-in-the-loop Dynamic Data-Driven Digital Twins
- URL: http://arxiv.org/abs/2207.09106v1
- Date: Tue, 19 Jul 2022 07:15:12 GMT
- Title: Explainable Human-in-the-loop Dynamic Data-Driven Digital Twins
- Authors: Nan Zhang, Rami Bahsoon, Nikos Tziritas, Georgios Theodoropoulos
- Abstract summary: Digital Twins (DT) are essentially Dynamic Data-driven models that serve as real-time symbiotic "virtual replicas" of real-world systems.
This paper presents an approach to harnessing explainability in human-in-the-loop DDDAS and DT systems, leveraging bidirectional symbiotic sensing feedback.
- Score: 6.657586324950896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Digital Twins (DT) are essentially Dynamic Data-driven models that serve as
real-time symbiotic "virtual replicas" of real-world systems. DTs can leverage
the bidirectional symbiotic sensing feedback loops fundamental to Dynamic
Data-Driven Applications Systems (DDDAS) for their continuous updates. Sensing loops can
consequently steer measurement, analysis and reconfiguration aimed at more
accurate modelling and analysis in DT. The reconfiguration decisions can be
autonomous or interactive, keeping the human in the loop. The trustworthiness of
these decisions can be hindered by inadequate explainability of their rationale
and of the utility gained by implementing a given decision, rather than its
alternatives, in the situation at hand. Additionally, different decision-making
algorithms and models vary in complexity and quality and can yield different
utility for the model. Inadequate explainability limits the extent to which
humans can evaluate the decisions, often leading to updates that are unfit for
the given situation or erroneous, compromising the overall accuracy of the model.
The novel contribution of this paper is an approach to harnessing
explainability in human-in-the-loop DDDAS and DT systems, leveraging
bidirectional symbiotic sensing feedback. The approach utilises interpretable
machine learning and goal modelling for explainability, and considers trade-off
analysis of the utility gained. We use examples from smart warehousing to
demonstrate the approach.
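The abstract states the idea without implementation detail. Below is a minimal, hypothetical sketch of what such an explanation step could look like, assuming a shallow decision tree as the interpretable model and a toy weighted utility function for the trade-off analysis; the feature names, reconfiguration alternatives, and weights are illustrative and not taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): an interpretable model
# explains a digital-twin reconfiguration decision in a hypothetical smart
# warehouse, and a simple utility trade-off table is shown to the human
# operator before the update is applied.
from dataclasses import dataclass
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical sensed state features streamed from the warehouse DT.
FEATURES = ["queue_length", "robot_utilisation", "sensing_error"]

# Toy log of past situations and the reconfiguration chosen for each
# (0 = keep current model, 1 = increase sensing rate, 2 = re-calibrate model).
X = [
    [3, 0.40, 0.02], [12, 0.85, 0.03], [15, 0.90, 0.12],
    [4, 0.35, 0.10], [14, 0.88, 0.02], [5, 0.50, 0.15],
]
y = [0, 1, 2, 2, 1, 2]

# A shallow decision tree acts as the interpretable decision model.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

@dataclass
class Alternative:
    name: str
    accuracy_gain: float   # expected model-accuracy improvement
    cost: float            # sensing / computation cost

def utility(alt: Alternative, w_acc: float = 1.0, w_cost: float = 0.5) -> float:
    """Toy utility: weighted accuracy gain minus weighted cost."""
    return w_acc * alt.accuracy_gain - w_cost * alt.cost

if __name__ == "__main__":
    situation = [13, 0.87, 0.04]  # current sensed state
    decision = tree.predict([situation])[0]

    # Rationale shown to the human in the loop: the tree's decision rules.
    print("Decision rules:\n", export_text(tree, feature_names=FEATURES))
    print("Recommended reconfiguration:", decision)

    # Trade-off analysis of utility gained across the alternatives.
    alternatives = [
        Alternative("keep current model", 0.00, 0.0),
        Alternative("increase sensing rate", 0.05, 0.04),
        Alternative("re-calibrate model", 0.08, 0.10),
    ]
    for alt in sorted(alternatives, key=utility, reverse=True):
        print(f"{alt.name:25s} utility={utility(alt):+.3f}")
```

The shallow tree keeps the rationale human-readable, while the utility ranking makes the trade-off between expected accuracy gain and reconfiguration cost explicit to the operator.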
Related papers
- Identifiable Representation and Model Learning for Latent Dynamic Systems [0.0]
We study the problem of identifiable representation and model learning for latent dynamic systems.
We prove that, for linear or affine nonlinear latent dynamic systems, it is possible to identify the representations up to scaling and determine the models up to some simple transformations.
arXiv Detail & Related papers (2024-10-23T13:55:42Z) - Large Language Models for Explainable Decisions in Dynamic Digital Twins [3.179208155005568]
Dynamic data-driven Digital Twins (DDTs) can enable informed decision-making and provide an optimisation platform for the underlying system.
This paper explores using large language models (LLMs) to provide an explainability platform for DDTs.
It generates natural language explanations of the system's decision-making by leveraging domain-specific knowledge bases.
arXiv Detail & Related papers (2024-05-23T10:32:38Z) - Causal Graph ODE: Continuous Treatment Effect Modeling in Multi-agent
Dynamical Systems [70.84976977950075]
Real-world multi-agent systems are often dynamic and continuous, where the agents co-evolve and undergo changes in their trajectories and interactions over time.
We propose a novel model that captures the continuous interaction among agents using a Graph Neural Network (GNN) as the ODE function.
The key innovation of our model is to learn time-dependent representations of treatments and incorporate them into the ODE function, enabling precise predictions of potential outcomes.
arXiv Detail & Related papers (2024-02-29T23:07:07Z) - MONAL: Model Autophagy Analysis for Modeling Human-AI Interactions [11.972017738888825]
We propose Model Autophagy Analysis (MONAL) to explain the self-consumption of large models.
MONAL employs two distinct autophagous loops to elucidate the suppression of human-generated information in the exchange between human and AI systems.
We evaluate the capacities of generated models as both creators and disseminators of information.
arXiv Detail & Related papers (2024-02-17T13:02:54Z) - Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z) - Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but cannot predict well.
We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy, compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep
Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Exploring the Trade-off between Plausibility, Change Intensity and
Adversarial Power in Counterfactual Explanations using Multi-objective
Optimization [73.89239820192894]
We argue that automated counterfactual generation should account for several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z) - Capturing Actionable Dynamics with Structured Latent Ordinary
Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z) - Using Shape Metrics to Describe 2D Data Points [0.0]
We propose to use shape metrics to describe 2D data to help make analyses more explainable and interpretable.
This is particularly important in applications in the medical community where the 'right to explainability' is crucial.
arXiv Detail & Related papers (2022-01-27T23:28:42Z) - Provably Robust Model-Centric Explanations for Critical Decision-Making [14.367217955827002]
We show that data-centric methods may yield brittle explanations of limited practical utility.
The model-centric framework, however, can offer actionable insights into risks of using AI models in practice.
arXiv Detail & Related papers (2021-10-26T18:05:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.