Longitudinal Distance: Towards Accountable Instance Attribution
- URL: http://arxiv.org/abs/2108.10437v1
- Date: Mon, 23 Aug 2021 22:50:23 GMT
- Title: Longitudinal Distance: Towards Accountable Instance Attribution
- Authors: Rosina O. Weber, Prateek Goel, Shideh Amiri, and Gideon Simpson
- Abstract summary: Inspired by case-based reasoning principles, this paper introduces a pseudo-metric called Longitudinal distance and its use to attribute instances to a neural network agent's decision.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Previous research in interpretable machine learning (IML) and explainable
artificial intelligence (XAI) can be broadly categorized as either focusing on
seeking interpretability in the agent's model (i.e., IML) or focusing on the
context of the user in addition to the model (i.e., XAI). The former can be
categorized as feature or instance attribution. Example- or sample-based
methods such as those using or inspired by case-based reasoning (CBR) rely on
various approaches to select instances that do not necessarily attribute the
instances responsible for an agent's decision. Furthermore, existing approaches
have focused on interpretability and explainability but fall short when it
comes to accountability. Inspired by case-based reasoning principles, this
paper introduces a pseudo-metric we call Longitudinal distance and its use to
attribute instances to a neural network agent's decision, which can
potentially be used to build accountable CBR agents.
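The abstract does not define Longitudinal distance itself, so the following is only a minimal sketch, assuming a generic pseudo-metric over a network's hidden activations, of what attributing instances to an agent's decision can look like in a CBR style. All names here (activations, activation_distance, attribute_instances) are illustrative and not taken from the paper.

```python
# Illustrative sketch only: not the paper's Longitudinal distance.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained network": a single fixed random hidden layer with ReLU.
W = rng.normal(size=(4, 8))  # 4 input features -> 8 hidden units

def activations(x):
    """Hidden-layer activations for input x."""
    return np.maximum(W.T @ x, 0.0)

def activation_distance(x1, x2):
    """Euclidean distance between hidden activations.

    Non-negative, symmetric, and satisfies the triangle inequality, but two
    different inputs can sit at distance zero if their activations coincide,
    hence a pseudo-metric rather than a metric.
    """
    return float(np.linalg.norm(activations(x1) - activations(x2)))

def attribute_instances(query, train_set, k=3):
    """Indices of the k training instances closest to the query under the
    activation pseudo-metric, i.e. the instances cited as (a proxy for)
    those responsible for the agent's decision on the query."""
    dists = [activation_distance(query, x) for x in train_set]
    return list(np.argsort(dists)[:k])

train = rng.normal(size=(20, 4))  # toy training set
query = rng.normal(size=4)        # instance whose decision we want to explain
print("attributed training instances:", attribute_instances(query, train))
```

A pseudo-metric differs from a metric only in that two distinct inputs may have distance zero, which happens in this sketch whenever they induce identical hidden activations.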
Related papers
- LaRS: Latent Reasoning Skills for Chain-of-Thought Reasoning [61.7853049843921]
Chain-of-thought (CoT) prompting is a popular in-context learning approach for large language models (LLMs)
This paper introduces a new approach named Latent Reasoning Skills (LaRS) that employs unsupervised learning to create a latent space representation of rationales.
arXiv Detail & Related papers (2023-12-07T20:36:10Z)
- Context-aware feature attribution through argumentation [0.0]
We define a novel feature attribution framework called Context-Aware Feature Attribution Through Argumentation (CA-FATA)
Our framework harnesses the power of argumentation by treating each feature as an argument that can either support, attack or neutralize a prediction.
arXiv Detail & Related papers (2023-10-24T20:02:02Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Semantic Role Labeling Meets Definition Modeling: Using Natural Language to Describe Predicate-Argument Structures [104.32063681736349]
We present an approach to describe predicate-argument structures using natural language definitions instead of discrete labels.
Our experiments and analyses on PropBank-style and FrameNet-style, dependency-based and span-based SRL also demonstrate that a flexible model with an interpretable output does not necessarily come at the expense of performance.
arXiv Detail & Related papers (2022-12-02T11:19:16Z)
- Explainable Reinforcement Learning via Model Transforms [18.385505289067023]
We argue that even if the underlying Markov Decision Process is not fully known, it can nevertheless be exploited to automatically generate explanations.
We suggest using formal MDP abstractions and transforms, previously used in the literature for expediting the search for optimal policies, to automatically produce explanations.
arXiv Detail & Related papers (2022-09-24T13:18:06Z)
- Automated Machine Learning, Bounded Rationality, and Rational Metareasoning [62.997667081978825]
We will look at automated machine learning (AutoML) and related problems from the perspective of bounded rationality.
Taking actions under bounded resources requires an agent to reflect on how to use these resources in an optimal way.
arXiv Detail & Related papers (2021-09-10T09:10:20Z)
- Learning Causal Models of Autonomous Agents using Interventions [11.351235628684252]
We extend the analysis of an agent assessment module that lets an AI system execute high-level instruction sequences in simulators.
We show that such a primitive query-response capability is sufficient to efficiently derive a user-interpretable causal model of the system.
arXiv Detail & Related papers (2021-08-21T21:33:26Z)
- Validation and Inference of Agent Based Models [0.0]
Agent Based Modelling (ABM) is a computational framework for simulating the behaviours and interactions of autonomous agents.
Recent research in Approximate Bayesian Computation (ABC) has yielded increasingly efficient algorithms for calculating the approximate likelihood.
These are investigated and compared using a pedestrian model in the Hamilton CBD.
arXiv Detail & Related papers (2021-07-08T05:53:37Z)
- Multi-Agent Reinforcement Learning with Temporal Logic Specifications [65.79056365594654]
We study the problem of learning to satisfy temporal logic specifications with a group of agents in an unknown environment.
We develop the first multi-agent reinforcement learning technique for temporal logic specifications.
We provide correctness and convergence guarantees for our main algorithm.
arXiv Detail & Related papers (2021-02-01T01:13:03Z)
- Multi-Agent Systems based on Contextual Defeasible Logic considering Focus [0.0]
We extend previous work on distributed reasoning using Contextual Defeasible Logic (CDL)
This work presents a multi-agent model based on CDL that allows agents to reason with their local knowledge bases and mapping rules.
We present a use case scenario, some formalisations of the model proposed, and an initial implementation based on the BDI (Belief-Desire-Intention) agent model.
arXiv Detail & Related papers (2020-10-01T01:50:08Z)
- Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)