Fanoos: Multi-Resolution, Multi-Strength, Interactive Explanations for
Learned Systems
- URL: http://arxiv.org/abs/2006.12453v8
- Date: Tue, 15 Feb 2022 21:44:27 GMT
- Title: Fanoos: Multi-Resolution, Multi-Strength, Interactive Explanations for
Learned Systems
- Authors: David Bayani (1), Stefan Mitsch (1) ((1) Carnegie Mellon University)
- Abstract summary: Fanoos is a framework for combining formal verification techniques, search, and user interaction to explore explanations at the desired level of granularity and fidelity.
We demonstrate the ability of Fanoos to produce and adjust the abstractness of explanations in response to user requests on a learned controller for an inverted double pendulum and on a learned CPU usage model.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning is becoming increasingly important for controlling
the behavior of safety- and financially-critical components in sophisticated
environments,
where the inability to understand learned components in general, and neural
nets in particular, poses serious obstacles to their adoption. Explainability
and interpretability methods for learned systems have gained considerable
academic attention, but current approaches address only one aspect of
explanation, at a fixed level of abstraction, and offer limited, if any, formal
guarantees, which prevents those explanations from being digestible by the relevant
stakeholders (e.g., end users, certification authorities, engineers) with their
diverse backgrounds and situation-specific needs. We introduce Fanoos, a
framework for combining formal verification techniques, heuristic search, and
user interaction to explore explanations at the desired level of granularity
and fidelity. We demonstrate the ability of Fanoos to produce and adjust the
abstractness of explanations in response to user requests on a learned
controller for an inverted double pendulum and on a learned CPU usage model.
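The abstract describes adjusting the abstractness of explanations interactively. As a loose, hypothetical sketch (not the authors' implementation; Fanoos itself combines formal verification and heuristic search), one can picture explanations as sound interval descriptions of a learned map over input boxes, where "less abstract" bisects a box for a finer, higher-fidelity answer:

```python
# Hypothetical sketch (not Fanoos itself): explanations as interval
# descriptions of a learned map over axis-aligned input boxes.

def output_range(w, box):
    """Sound lower/upper bounds of the linear map w.x over a box."""
    lo = sum(min(wi * a, wi * b) for wi, (a, b) in zip(w, box))
    hi = sum(max(wi * a, wi * b) for wi, (a, b) in zip(w, box))
    return lo, hi

def split(box):
    """Bisect a box along its widest dimension ("less abstract")."""
    i = max(range(len(box)), key=lambda k: box[k][1] - box[k][0])
    a, b = box[i]
    mid = (a + b) / 2
    left, right = list(box), list(box)
    left[i], right[i] = (a, mid), (mid, b)
    return [tuple(left), tuple(right)]

def explain(w, boxes):
    """One (box, output-range) description per box."""
    return [(box, output_range(w, box)) for box in boxes]

w = [1.0, -2.0]                      # stand-in "learned" linear model
boxes = [((0.0, 1.0), (0.0, 1.0))]   # whole input domain, one coarse box
coarse = explain(w, boxes)           # one wide, low-fidelity answer
boxes = split(boxes[0])              # user request: "less abstract"
fine = explain(w, boxes)             # two tighter, more detailed answers
```

Merging boxes back would play the role of a "more abstract" request; the model, domain, and linear form here are all illustrative assumptions.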
Related papers
- Composite Learning Units: Generalized Learning Beyond Parameter Updates to Transform LLMs into Adaptive Reasoners
We introduce Composite Learning Units (CLUs) designed to transform reasoners into learners capable of continuous learning.
CLUs are built on an architecture that allows a reasoning model to maintain and evolve a dynamic knowledge repository.
We demonstrate CLUs' effectiveness through a cryptographic reasoning task, where they continuously evolve their understanding through feedback to uncover hidden transformation rules.
(arXiv, 2024-10-09)
- Knowledge-Infused Self Attention Transformers
Transformer-based language models have achieved impressive success in various natural language processing tasks.
This paper introduces a systematic method for infusing knowledge into different components of a transformer-based model.
(arXiv, 2023-06-23)
- Explainable Deep Reinforcement Learning: State of the Art and Challenges
Interpretability, explainability and transparency are key issues to introducing Artificial Intelligence methods in many critical domains.
This article reviews state-of-the-art methods for explainable deep reinforcement learning.
(arXiv, 2023-01-24)
- Interpreting Neural Policies with Disentangled Tree Representations
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
(arXiv, 2022-10-13)
- Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks
We investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks.
Results show that synergy increases as neural networks learn multiple diverse tasks.
Randomly turning off neurons during training through dropout increases network redundancy, corresponding to an increase in robustness.
(arXiv, 2022-10-06)
- Learning Knowledge Representation with Meta Knowledge Distillation for Single Image Super-Resolution
We propose a model-agnostic meta knowledge distillation method under the teacher-student architecture for the single image super-resolution task.
Experiments conducted on various single image super-resolution datasets demonstrate that the proposed method outperforms existing knowledge-representation-based distillation methods.
(arXiv, 2022-07-18)
- Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information
In real-world reinforcement learning applications, the learner's observation space is ubiquitously high-dimensional, containing both relevant and irrelevant information about the task at hand.
We introduce a new problem setting for reinforcement learning, the Exogenous Decision Process (ExoMDP), in which the state space admits an (unknown) factorization into a small controllable component and a large irrelevant component.
We provide a new algorithm, ExoRL, which learns a near-optimal policy with sample complexity that scales with the size of the endogenous component.
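The factorization idea above can be illustrated with a hypothetical toy example (this is an illustration of the problem setting only, not the paper's ExoRL algorithm): reward and relevant dynamics depend solely on a small endogenous part of the state, so a learner that projects away the exogenous part faces a far smaller effective state space.

```python
# Hypothetical toy ExoMDP-style factorization (illustration only):
# each observation concatenates a small controllable (endogenous)
# state with a larger block of irrelevant (exogenous) noise.
import random

ENDO, EXO = 2, 8   # endogenous vs. exogenous dimensions

def step(endo, action):
    """Only the endogenous part reacts to the action and yields reward."""
    endo = [min(1.0, max(0.0, s + 0.1 * action)) for s in endo]
    reward = sum(endo)
    return endo, reward

def observe(endo, rng):
    """Full observation: endogenous state plus exogenous distractors."""
    return endo + [rng.random() for _ in range(EXO)]

def project(obs):
    """Given the factorization (unknown to the learner in general),
    keep only the endogenous coordinates: the space to be covered
    shrinks from |S_endo| x |S_exo| to |S_endo|."""
    return obs[:ENDO]

rng = random.Random(0)
endo = [0.5, 0.5]
obs = observe(endo, rng)     # 10-dimensional, mostly noise
filtered = project(obs)      # the 2 coordinates that matter
endo, reward = step(endo, +1)
```

The dimensions, dynamics, and reward here are all assumed for illustration; in the ExoMDP setting the factorization itself is unknown and must be coped with by the algorithm.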
(arXiv, 2022-06-09)
- An Interactive Explanatory AI System for Industrial Quality Control
We aim to extend the defect detection task towards an interactive human-in-the-loop approach.
We propose an approach for an interactive support system for classifications in an industrial quality control setting.
(arXiv, 2022-03-17)
- Fast and Slow Learning of Recurrent Independent Mechanisms
We propose a training framework in which the pieces of knowledge an agent needs and its reward function are stationary and can be re-used across tasks.
An attention mechanism dynamically selects which modules can be adapted to the current task.
We find that meta-learning the modular aspects of the proposed system greatly helps in achieving faster adaptation in a reinforcement learning setup.
(arXiv, 2021-05-18)
- Counterfactual Explanations for Machine Learning: A Review
We review and categorize research on counterfactual explanations in machine learning.
Modern approaches to counterfactual explainability in machine learning draw connections to the established legal doctrine in many countries.
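A common recipe in this literature (sketched here generically, not as any specific surveyed method) is to search for a small perturbation of the input that flips the model's decision, then present that perturbation as "what would have to change":

```python
# Generic counterfactual-search sketch (a common recipe from this
# literature, not a specific surveyed method): nudge one feature at
# a time until the classifier's decision flips.

def classify(x):
    """Stand-in model: approve (1) when the score crosses a threshold."""
    return 1 if 0.6 * x[0] + 0.4 * x[1] >= 0.5 else 0

def counterfactual(x, step=0.05, max_iters=100):
    """Greedily adjust the most influential feature until the label
    flips, returning the perturbed input as the counterfactual."""
    target = 1 - classify(x)
    cf = list(x)
    for _ in range(max_iters):
        if classify(cf) == target:
            return cf
        cf[0] += step if target == 1 else -step  # x[0]: largest weight
    return None   # no counterfactual found within the budget

x = [0.2, 0.3]            # rejected input: score 0.24 < 0.5
cf = counterfactual(x)    # small change to x[0] that yields approval
```

Real methods additionally constrain plausibility and sparsity of the change; the linear model and single-feature greedy search are simplifying assumptions for this sketch.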
(arXiv, 2020-10-20)
- Explainable Recommender Systems via Resolving Learning Representations
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
(arXiv, 2020-08-21)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.