A taxonomy of explanations to support Explainability-by-Design
- URL: http://arxiv.org/abs/2206.04438v1
- Date: Thu, 9 Jun 2022 11:59:42 GMT
- Title: A taxonomy of explanations to support Explainability-by-Design
- Authors: Niko Tsakalakis, Sophie Stalla-Bourdillon, Trung Dong Huynh, Luc
Moreau
- Abstract summary: We present a taxonomy of explanations that was developed as part of a holistic 'Explainability-by-Design' approach.
The taxonomy was built with a view to producing explanations for a wide range of requirements stemming from a variety of regulatory frameworks or policies.
It is used as a stand-alone classifier of explanations conceived as detective controls, in order to support automated compliance strategies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As automated decision-making solutions are increasingly applied to all
aspects of everyday life, capabilities to generate meaningful explanations for
a variety of stakeholders (i.e., decision-makers, recipients of decisions,
auditors, regulators...) become crucial. In this paper, we present a taxonomy
of explanations that was developed as part of a holistic
'Explainability-by-Design' approach for the purposes of the project PLEAD. The
taxonomy was built with a view to producing explanations for a wide range of
requirements stemming from a variety of regulatory frameworks, or from policies
set at the organizational level either to translate high-level compliance
requirements or to meet business needs. The taxonomy comprises nine dimensions.
It is used as a stand-alone classifier of explanations conceived as detective
controls, in order to support automated compliance strategies. A
machine-readable format of the taxonomy is provided in the form of a light
ontology, and the benefits of starting the Explainability-by-Design journey
with such a taxonomy are demonstrated through a series of examples.
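To make the "light ontology" idea concrete, here is a minimal sketch of how such a taxonomy could be serialized with rdflib in Python. The namespace and the dimension names are placeholders: the abstract states that the taxonomy has nine dimensions but does not enumerate them.

```python
# Minimal sketch (not the paper's actual ontology): encoding taxonomy
# dimensions as a light RDF ontology with rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/plead#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# A class for explanations and a class for taxonomy dimensions.
g.add((EX.Explanation, RDF.type, RDFS.Class))
g.add((EX.Dimension, RDF.type, RDFS.Class))

# Placeholder dimension names; the paper defines the actual nine.
for name in ("Audience", "Trigger", "Content"):
    dim = EX[name]
    g.add((dim, RDF.type, EX.Dimension))
    g.add((dim, RDFS.label, Literal(name)))

print(g.serialize(format="turtle"))
```

A classifier built on such an ontology could then tag each explanation with a value along every dimension, which is what enables the detective-control use described above.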
Related papers
- An Adaptive Framework for Generating Systematic Explanatory Answer in Online Q&A Platforms [62.878616839799776]
We propose SynthRAG, an innovative framework designed to enhance Question Answering (QA) performance.
SynthRAG improves on conventional models by employing adaptive outlines for dynamic content structuring.
An online deployment on the Zhihu platform revealed that SynthRAG's answers achieved notable user engagement.
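As a rough, self-contained illustration of the outline-then-synthesize idea, consider the sketch below; every name in it is a hypothetical stand-in, not SynthRAG's actual API.

```python
# Toy sketch of outline-driven answer generation; each function is a
# stand-in for an LLM or retrieval call in the real framework.

def plan_outline(question: str) -> list[str]:
    # In SynthRAG the outline would be produced adaptively; fixed here.
    return ["Background", "Key factors", "Conclusion"]

def retrieve(question: str, heading: str) -> list[str]:
    # Stand-in for per-section retrieval over a corpus.
    return [f"snippet on {heading.lower()} relevant to: {question}"]

def write_section(heading: str, docs: list[str]) -> str:
    # Stand-in for synthesis conditioned on the retrieved snippets.
    return heading + ": " + " ".join(docs)

def answer(question: str) -> str:
    return "\n".join(
        write_section(h, retrieve(question, h)) for h in plan_outline(question)
    )

print(answer("Why do automated decisions need explanations?"))
```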
arXiv Detail & Related papers (2024-10-23T09:14:57Z) - Explaining Non-monotonic Normative Reasoning using Argumentation Theory with Deontic Logic [7.162465547358201]
This paper explores how to provide designers with effective explanations for their legally relevant design decisions.
We extend the previous explanation system by specifying norms and the key legal or ethical principles for justifying actions in normative contexts.
Given the expressive power of first-order logic, we adopt a first-order deontic logic system with deontic operators and preferences.
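For readers unfamiliar with the notation, here is a hedged illustration in standard deontic syntax (not necessarily the paper's exact system), where O marks obligation and the predicate names are invented for the example:

```latex
% Two illustrative first-order obligations:
\[
  n_1 :\; O\,\forall x\,\bigl(\mathit{process}(x) \rightarrow \mathit{consent}(x)\bigr)
  \qquad
  n_2 :\; O\,\forall x\,\bigl(\mathit{urgent}(x) \rightarrow \mathit{process}(x)\bigr)
\]
% A preference between the norms lets the argumentation machinery resolve
% a conflict by retracting the weaker obligation, which is what makes the
% reasoning non-monotonic:
\[
  n_1 \succ n_2
\]
```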
arXiv Detail & Related papers (2024-09-18T08:03:29Z) - Diffexplainer: Towards Cross-modal Global Explanations with Diffusion Models [51.21351775178525]
DiffExplainer is a novel framework that, leveraging language-vision models, enables multimodal global explainability.
It employs diffusion models conditioned on optimized text prompts, synthesizing images that maximize class outputs.
The analysis of generated visual descriptions allows for automatic identification of biases and spurious features.
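A toy, self-contained sketch of the underlying optimization loop follows; the real framework conditions a diffusion model on the prompt, whereas here both the generator and the classifier are stand-in linear maps.

```python
# Toy illustration only: optimize a "prompt" embedding so that the
# generated output maximizes a chosen class logit, as in prompt-
# optimization-based explanation methods.
import torch

torch.manual_seed(0)
generator = torch.nn.Linear(16, 32)    # stand-in for the diffusion model
classifier = torch.nn.Linear(32, 10)   # stand-in for the model under analysis
target_class = 3

prompt = torch.randn(16, requires_grad=True)  # learnable prompt embedding
optimizer = torch.optim.Adam([prompt], lr=0.05)

for _ in range(200):
    image = generator(prompt)                # "synthesize" from the prompt
    score = classifier(image)[target_class]  # class logit to maximize
    optimizer.zero_grad()
    (-score).backward()
    optimizer.step()

print(f"final logit for class {target_class}: {score.item():.3f}")
```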
arXiv Detail & Related papers (2024-04-03T10:11:22Z) - Towards a Framework for Evaluating Explanations in Automated Fact Verification [12.904145308839997]
As deep neural models in NLP become more complex, the need to interpret them becomes greater.
A burgeoning interest has emerged in rationalizing explanations, which provide short and coherent justifications for predictions.
We advocate a formal framework of key concepts and properties of rationalizing explanations, to support their systematic evaluation.
arXiv Detail & Related papers (2024-03-29T17:50:28Z) - An Encoding of Abstract Dialectical Frameworks into Higher-Order Logic [57.24311218570012]
This approach allows for the computer-assisted analysis of abstract dialectical frameworks.
Exemplary applications include the formal analysis and verification of meta-theoretical properties.
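For reference, the standard definition of an abstract dialectical framework (due to Brewka and Woltran), which such an encoding has to capture in higher-order logic:

```latex
% An ADF is a triple
\[
  D = (S, L, C), \qquad C = \{\varphi_s\}_{s \in S},
\]
% where S is a set of statements, L \subseteq S \times S a set of links,
% and each acceptance condition \varphi_s is a propositional formula over
% the parents of s, i.e. over \{ t \mid (t, s) \in L \}.
```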
arXiv Detail & Related papers (2023-12-08T09:32:26Z) - A Taxonomy of Decentralized Identifier Methods for Practitioners [50.76687001060655]
A core part of the new identity management paradigm of Self-Sovereign Identity (SSI) is the W3C Decentralized Identifiers (DIDs) standard.
We propose a taxonomy of DID methods with the goal to empower practitioners to make informed decisions when selecting DID methods.
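Every DID method shares the same three-part syntax defined by the W3C specification, which is what makes a method-level taxonomy possible. A minimal sketch (the example identifier is the illustrative one from the spec):

```python
# Minimal parser for the W3C DID syntax: did:<method>:<method-specific-id>.
def parse_did(did: str) -> tuple[str, str]:
    scheme, method, method_specific_id = did.split(":", 2)
    if scheme != "did":
        raise ValueError(f"not a DID: {did!r}")
    return method, method_specific_id

print(parse_did("did:example:123456789abcdefghi"))
# ('example', '123456789abcdefghi')
```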
arXiv Detail & Related papers (2023-10-18T13:01:40Z) - Interpretability is not Explainability: New Quantitative XAI Approach
with a focus on Recommender Systems in Education [0.0]
We propose a novel taxonomy that provides a clear and unambiguous understanding of the key concepts and relationships in XAI.
Our approach is rooted in a systematic analysis of existing definitions and frameworks.
This comprehensive taxonomy aims to establish a shared vocabulary for future research.
arXiv Detail & Related papers (2023-09-18T11:59:02Z) - Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z) - Flexible categorization for auditing using formal concept analysis and
Dempster-Shafer theory [55.878249096379804]
We study different ways to categorize according to varying extents of interest in different financial accounts.
The framework developed in this paper provides a formal ground to obtain and study explainable categorizations.
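For context, the standard formal-concept-analysis machinery such categorizations build on (the paper's Dempster-Shafer extension adds mass functions on top of this):

```latex
% Given a formal context (G, M, I) of objects G, attributes M, and an
% incidence relation I \subseteq G \times M, the derivation operators are
\[
  A' = \{\, m \in M \mid \forall g \in A : (g, m) \in I \,\}, \qquad
  B' = \{\, g \in G \mid \forall m \in B : (g, m) \in I \,\}.
\]
% A formal concept (a category) is a pair (A, B) with A \subseteq G,
% B \subseteq M, A' = B and B' = A.
```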
arXiv Detail & Related papers (2022-10-31T13:49:16Z) - A Methodology and Software Architecture to Support
Explainability-by-Design [0.0]
This paper describes Explainability-by-Design, a holistic methodology characterised by proactive measures to include explanation capability in the design of decision-making systems.
The methodology consists of three phases: (A) Explanation Requirement Analysis, (B) Explanation Technical Design, and (C) Explanation Validation.
It was shown that the approach is tractable in terms of development time, which can be as low as two hours per sentence.
arXiv Detail & Related papers (2022-06-13T15:34:29Z)