A Methodology and Software Architecture to Support
Explainability-by-Design
- URL: http://arxiv.org/abs/2206.06251v2
- Date: Thu, 25 May 2023 23:15:35 GMT
- Title: A Methodology and Software Architecture to Support
Explainability-by-Design
- Authors: Trung Dong Huynh, Niko Tsakalakis, Ayah Helal, Sophie
Stalla-Bourdillon, Luc Moreau
- Abstract summary: This paper describes Explainability-by-Design, a holistic methodology characterised by proactive measures to include explanation capability in the design of decision-making systems.
The methodology consists of three phases: (A) Explanation Requirement Analysis, (B) Explanation Technical Design, and (C) Explanation Validation.
It was shown that the approach is tractable in terms of development time, which can be as low as two hours per sentence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Algorithms play a crucial role in many technological systems that control or
affect various aspects of our lives. As a result, providing explanations for
their decisions to address the needs of users and organisations is increasingly
expected by laws, regulations, codes of conduct, and the public. However, as
laws and regulations do not prescribe how to meet such expectations,
organisations are often left to devise their own approaches to explainability,
inevitably increasing the cost of compliance and good governance. Hence, we
envision Explainability-by-Design, a holistic methodology characterised by
proactive measures to include explanation capability in the design of
decision-making systems. The methodology consists of three phases: (A)
Explanation Requirement Analysis, (B) Explanation Technical Design, and (C)
Explanation Validation. This paper describes phase (B), a technical workflow to
implement explanation capability from requirements elicited by domain experts
for a specific application context. Outputs of this phase are a set of
configurations, allowing a reusable explanation service to exploit logs
provided by the target application to create provenance traces of the
application's decisions. The provenance can then be queried to extract relevant
data points, which can be used in explanation plans to construct explanations
personalised to their consumers. Following the workflow, organisations can
design their decision-making systems to produce explanations that meet the
specified requirements. To facilitate the process, we present a software
architecture with reusable components to incorporate the resulting explanation
capability into an application. Finally, we applied the workflow to two
application scenarios and measured the associated development costs. It was
shown that the approach is tractable in terms of development time, which can be
as low as two hours per sentence.
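To make the workflow concrete, here is a minimal sketch of the pipeline the abstract describes: a reusable explanation service driven by configurations, extracting data points from a provenance trace of a decision and filling audience-specific explanation plans. All names and the toy loan scenario are illustrative stand-ins, not the paper's actual API.

```python
# Minimal sketch of the configuration-driven explanation pipeline described
# above. All names (DECISION_TRACE, EXPLANATION_PLANS, explain, ...) are
# illustrative stand-ins, not the paper's actual API.

# A provenance trace of one decision, distilled from application logs.
DECISION_TRACE = {
    "decision": "loan_rejected",
    "inputs": {"credit_score": 540, "income": 28000},
    "rule_fired": "score_below_threshold",
    "threshold": 600,
}

# Explanation plans: per-audience templates over data points extracted
# from the provenance trace (outputs of phase B in the methodology).
EXPLANATION_PLANS = {
    "customer": (
        "Your application was declined because your credit score "
        "({credit_score}) is below our minimum of {threshold}."
    ),
    "auditor": (
        "Decision '{decision}' triggered by rule '{rule_fired}': "
        "credit_score={credit_score} < threshold={threshold}."
    ),
}

def query_provenance(trace: dict) -> dict:
    """Extract the data points an explanation plan needs from a trace."""
    return {
        "decision": trace["decision"],
        "rule_fired": trace["rule_fired"],
        "threshold": trace["threshold"],
        **trace["inputs"],
    }

def explain(trace: dict, audience: str) -> str:
    """Instantiate the audience-specific plan with queried data points."""
    return EXPLANATION_PLANS[audience].format(**query_provenance(trace))

print(explain(DECISION_TRACE, "customer"))
print(explain(DECISION_TRACE, "auditor"))
```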
Related papers
- Demystifying Reinforcement Learning in Production Scheduling via Explainable AI [0.7515066610159392]
Deep Reinforcement Learning (DRL) is a frequently employed technique to solve scheduling problems.
Although DRL agents excel at delivering viable results in short computing times, their reasoning remains opaque.
We apply two explainable AI (xAI) frameworks to describe the reasoning behind scheduling decisions of a specialized DRL agent in a flow production.
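As an illustration of the pattern (the entry does not name its two xAI frameworks), here is a hedged sketch using SHAP, with a hand-written priority function standing in for the trained DRL policy:

```python
# Hedged sketch: attributing a scheduling decision to its input features
# with SHAP (one common xAI framework; not necessarily one of the two the
# entry used). The hand-written priority function stands in for a trained
# DRL policy network. Requires: pip install shap numpy
import numpy as np
import shap

def policy_score(X: np.ndarray) -> np.ndarray:
    """Stand-in policy: score a job from (processing_time, due_slack, queue_len)."""
    proc, slack, queue = X[:, 0], X[:, 1], X[:, 2]
    return -0.5 * proc - 1.5 * slack - 0.2 * queue  # higher = schedule sooner

background = np.random.default_rng(0).uniform(0, 10, size=(50, 3))
explainer = shap.KernelExplainer(policy_score, background)

job = np.array([[2.0, 0.5, 7.0]])  # short job, nearly due, long queue
shap_values = explainer.shap_values(job)
for name, value in zip(["processing_time", "due_slack", "queue_len"], shap_values[0]):
    print(f"{name}: {value:+.3f}")
```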
arXiv Detail & Related papers (2024-08-19T09:39:01Z) - Automated Process Planning Based on a Semantic Capability Model and SMT [50.76251195257306]
In research on manufacturing systems and autonomous robots, the term capability is used for a machine-interpretable specification of a system function.
We present an approach that combines these two topics: starting from a semantic capability model, an AI planning problem is automatically generated.
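A minimal sketch of the encoding idea, assuming Z3 as the SMT solver and a toy capability model reduced to preconditions and effects:

```python
# Hedged sketch: encoding a two-step planning problem over capability
# preconditions/effects as an SMT formula with Z3 (the entry's semantic
# capability model is far richer; this only illustrates the encoding idea).
# Requires: pip install z3-solver
from z3 import And, Bool, Implies, Not, Or, Solver, is_true, sat

steps = 2
# State fluents per time step: part is drilled / deburred.
drilled = [Bool(f"drilled_{t}") for t in range(steps + 1)]
deburred = [Bool(f"deburred_{t}") for t in range(steps + 1)]
# Action choice per step: drill or deburr.
do_drill = [Bool(f"do_drill_{t}") for t in range(steps)]
do_deburr = [Bool(f"do_deburr_{t}") for t in range(steps)]

s = Solver()
s.add(Not(drilled[0]), Not(deburred[0]))                   # initial state
for t in range(steps):
    s.add(Implies(do_deburr[t], drilled[t]))               # capability precondition
    s.add(Not(And(do_drill[t], do_deburr[t])))             # one action per step
    s.add(drilled[t + 1] == Or(drilled[t], do_drill[t]))   # effect + frame axiom
    s.add(deburred[t + 1] == Or(deburred[t], do_deburr[t]))
s.add(deburred[steps])                                     # goal: part deburred

if s.check() == sat:
    m = s.model()
    plan = ["drill" if is_true(m.evaluate(do_drill[t], model_completion=True))
            else "deburr" for t in range(steps)]
    print("plan:", plan)  # expected: ['drill', 'deburr']
```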
arXiv Detail & Related papers (2023-12-14T10:37:34Z) - Fountain -- an intelligent contextual assistant combining knowledge
representation and language models for manufacturing risk identification [7.599675376503671]
We developed Fountain as a contextual assistant integrated in the deviation management workflow.
We present the nuances of selecting and adapting pretrained language models for an engineering domain.
We demonstrate that the model adaptation is feasible using moderate computational infrastructure already available to most engineering teams in manufacturing organizations.
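A hedged sketch of such adaptation via continued masked-language-model pretraining with Hugging Face Transformers; the base model and the two example reports are illustrative, not the entry's actual setup:

```python
# Hedged sketch: adapting a general pretrained language model to an
# engineering domain by continued masked-language-model pretraining on
# in-house deviation reports. Model choice and data are illustrative.
# Requires: pip install transformers torch
import torch
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling)

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

domain_texts = [  # stand-in for a corpus of manufacturing deviation reports
    "Torque deviation detected on spindle 3 during final assembly.",
    "Coolant pressure drop caused surface finish nonconformance.",
]
batch = tok(domain_texts, padding=True, truncation=True, return_tensors="pt")
collator = DataCollatorForLanguageModeling(tok, mlm_probability=0.15)
masked = collator([{"input_ids": ids} for ids in batch["input_ids"]])

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
loss = model(input_ids=masked["input_ids"],
             attention_mask=batch["attention_mask"],
             labels=masked["labels"]).loss
loss.backward()
optimizer.step()
print(f"one adaptation step, MLM loss = {loss.item():.3f}")
```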
arXiv Detail & Related papers (2023-08-01T08:12:43Z) - Explainable Data-Driven Optimization: From Context to Decision and Back
Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
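For illustration, a toy counterfactual search for a contextual newsvendor decision; the paper's methodology is more general, and the demand model and thresholds here are invented:

```python
# Hedged sketch of a counterfactual explanation for a contextual
# newsvendor decision: find the smallest change to the context under
# which the prescribed order quantity would have been different.
import numpy as np

def order_quantity(context: np.ndarray) -> float:
    """Stand-in prescription: order predicted demand plus safety stock."""
    weekday, promo = context
    predicted_demand = 20.0 + 5.0 * weekday + 30.0 * promo
    return float(np.ceil(predicted_demand * 1.1))  # 10% safety stock

x = np.array([1.0, 0.0])            # Monday, no promotion -> factual decision
factual = order_quantity(x)

# Search the nearest context (L1 distance) whose decision differs.
best = None
for dw in np.linspace(-1, 1, 21):
    for dp in (0.0, 1.0):
        x_cf = np.array([x[0] + dw, dp])
        if order_quantity(x_cf) != factual:
            dist = abs(dw) + abs(dp - x[1])
            if best is None or dist < best[0]:
                best = (dist, x_cf, order_quantity(x_cf))

print(f"factual order: {factual}")
if best:
    print(f"counterfactual context {best[1]} -> order {best[2]} "
          f"(distance {best[0]:.2f})")
```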
arXiv Detail & Related papers (2023-01-24T15:25:16Z) - A taxonomy of explanations to support Explainability-by-Design [0.0]
We present a taxonomy of explanations that was developed as part of a holistic 'Explainability-by-Design' approach.
The taxonomy was built with a view to producing explanations for a wide range of requirements stemming from a variety of regulatory frameworks or policies.
It is used as a stand-alone classifier of explanations, conceived as detective controls, to support automated compliance strategies.
arXiv Detail & Related papers (2022-06-09T11:59:42Z) - Explainability in Process Outcome Prediction: Guidelines to Obtain
Interpretable and Faithful Models [77.34726150561087]
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
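Purely to illustrate the selection pattern, a sketch of guideline-driven model choice from event-log characteristics; these rules are not the actual X-MOP guidelines:

```python
# Illustrative stand-in for guideline-driven model selection from event-log
# characteristics. These rules are NOT the actual X-MOP guidelines; they
# only show the shape of such a selector.
def select_model(n_traces: int, n_features: int, needs_transparency: bool) -> str:
    if needs_transparency and n_features <= 20:
        return "decision tree (intrinsically interpretable)"
    if n_traces < 1_000:
        return "logistic regression (small log, stable and interpretable)"
    return "gradient boosting + post-hoc explanations (e.g., SHAP)"

print(select_model(n_traces=500, n_features=12, needs_transparency=True))
```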
arXiv Detail & Related papers (2022-03-30T05:59:50Z) - Inverse Online Learning: Understanding Non-Stationary and Reactionary
Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By modelling the decision-making processes underlying a set of observed trajectories as online learning, we cast the policy inference problem as the inverse of this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z) - A Research Agenda for Artificial Intelligence in the Field of Flexible
Production Systems [53.47496941841855]
Production companies face problems when it comes to quickly adapting their production control to fluctuating demands or changing requirements.
Control approaches that encapsulate production functions as services have been shown to be promising for increasing the flexibility of Cyber-Physical Production Systems.
However, a remaining challenge of such approaches is finding production plans based on the provided functionalities for a given set of requirements, especially when there is no direct (i.e., syntactic) match between the demanded and provided functions.
arXiv Detail & Related papers (2021-12-31T14:38:31Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
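A minimal sketch of the latent-space counterfactual pattern, substituting a plain PCA latent space for CEILS's causal latent representation:

```python
# Hedged sketch of the latent-space counterfactual pattern: encode an
# instance, perturb it in latent space until the classifier's decision
# flips, then decode. CEILS additionally respects a causal graph of the
# features; PCA here is only a stand-in latent space.
# Requires: pip install scikit-learn numpy
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
clf = LogisticRegression().fit(X, y)
pca = PCA(n_components=3).fit(X)

x = X[0:1]
target = 1 - clf.predict(x)[0]           # desired outcome
z = pca.transform(x)

rng = np.random.default_rng(0)
best = None
for _ in range(2000):                    # random search over latent perturbations
    z_cf = z + rng.normal(scale=0.5, size=z.shape)
    x_cf = pca.inverse_transform(z_cf)
    if clf.predict(x_cf)[0] == target:
        dist = np.linalg.norm(z_cf - z)
        if best is None or dist < best[0]:
            best = (dist, x_cf)

print("original prediction:", clf.predict(x)[0])
if best:
    print("counterfactual prediction:", clf.predict(best[1])[0])
    print("feature deltas:", np.round(best[1] - x, 2))
```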
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Inverse Reinforcement Learning of Autonomous Behaviors Encoded as
Weighted Finite Automata [18.972270182221262]
This paper presents a method for learning logical task specifications and cost functions from demonstrations.
We employ a spectral learning approach to extract a weighted finite automaton (WFA), approximating the unknown logic structure of the task.
We define a product between the WFA for high-level task guidance and a Labeled Markov decision process (L-MDP) for low-level control and optimize a cost function that matches the demonstrator's behavior.
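A hedged sketch of the spectral step (Hankel matrix plus truncated SVD); here the Hankel entries come from an exact ground-truth WFA, whereas the paper estimates them from demonstrations:

```python
# Hedged sketch of spectral WFA learning (Hankel matrix + SVD). The Hankel
# entries below are computed from an exact ground-truth WFA for clarity;
# the entry's method instead estimates them from demonstration data.
import numpy as np
from itertools import product

alphabet = ["a", "b"]
rng = np.random.default_rng(0)
# Ground-truth rank-2 WFA used only to generate function values f(word).
alpha0 = rng.normal(size=2)
Amats = {s: 0.4 * rng.normal(size=(2, 2)) for s in alphabet}
beta0 = rng.normal(size=2)

def f(word: str) -> float:
    v = alpha0.copy()
    for s in word:
        v = v @ Amats[s]
    return float(v @ beta0)

# Hankel blocks over all prefixes/suffixes up to length 2.
basis = [""] + ["".join(w) for n in (1, 2) for w in product(alphabet, repeat=n)]
H = np.array([[f(u + v) for v in basis] for u in basis])
Hs = {s: np.array([[f(u + s + v) for v in basis] for u in basis]) for s in alphabet}

# Rank-k truncated SVD gives the factorization H = P @ S.
k = 2
U, sv, Vt = np.linalg.svd(H)
P = U[:, :k] * sv[:k]          # prefix factor
S = Vt[:k, :]                  # suffix factor

A = {s: np.linalg.pinv(P) @ Hs[s] @ np.linalg.pinv(S) for s in alphabet}
a = H[0, :] @ np.linalg.pinv(S)        # row of the empty prefix
b = np.linalg.pinv(P) @ H[:, 0]        # column of the empty suffix

def f_hat(word: str) -> float:
    v = a.copy()
    for s in word:
        v = v @ A[s]
    return float(v @ b)

for w in ["", "a", "ab", "bba"]:       # learned values should match f
    print(w or "eps", round(f(w), 4), round(f_hat(w), 4))
```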
arXiv Detail & Related papers (2021-03-10T06:42:10Z) - Semantic based model of Conceptual Work Products for formal verification
of complex interactive systems [3.0458872052651973]
We describe an automatic logic reasoner to verify objective specifications for conceptual work products.
These conceptual work product specifications serve as a fundamental output requirement, which must be clearly stated, correct, and solvable.
Our Work Ontology, together with Semantic Web tools, translates class and state diagrams so that their solvability can be verified by automatic reasoning.
arXiv Detail & Related papers (2020-08-04T15:10:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.