Explainability via Responsibility
- URL: http://arxiv.org/abs/2010.01676v1
- Date: Sun, 4 Oct 2020 20:41:03 GMT
- Title: Explainability via Responsibility
- Authors: Faraz Khadivpour and Matthew Guzdial
- Abstract summary: We present an approach to explainable artificial intelligence in which certain training instances are offered to human users as an explanation for an AI agent's actions during a co-creation process.
We evaluate this approach by approximating its ability to provide human users with explanations of the AI agent's actions and to help them cooperate with the agent more efficiently.
- Score: 0.9645196221785693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Procedural Content Generation via Machine Learning (PCGML) refers to a group
of methods for creating game content (e.g. platformer levels, game maps, etc.)
using machine learning models. PCGML approaches rely on black-box models, which
can be difficult for human designers without expert knowledge of machine
learning to understand and debug. This is even more challenging in
co-creative systems where human designers must interact with AI agents to
generate game content. In this paper we present an approach to explainable
artificial intelligence in which certain training instances are offered to
human users as an explanation for the AI agent's actions during a co-creation
process. We evaluate this approach by approximating its ability to provide
human users with explanations of the AI agent's actions and to help them
cooperate with the AI agent more efficiently.
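For intuition, here is a minimal sketch of the core idea: surface the training instance most "responsible" for an agent's action. The sketch assumes actions and training instances can be embedded in a shared feature space and approximates responsibility with a nearest-neighbor lookup; the function and variable names are illustrative, not the paper's actual implementation.
```python
import numpy as np

def explain_action(action_features: np.ndarray,
                   training_features: np.ndarray,
                   training_instances: list):
    """Return the training instance closest to the agent's action in
    feature space, as a candidate explanation for that action.

    action_features: feature vector for the agent's current action.
    training_features: (n, d) matrix of features for n training instances.
    training_instances: the n raw instances (e.g. level chunks) to show.
    """
    # Euclidean distance from the action to every training instance.
    dists = np.linalg.norm(training_features - action_features, axis=1)
    idx = int(np.argmin(dists))  # most similar instance serves as explanation
    return training_instances[idx], float(dists[idx])
```
In a co-creative tool, the retrieved instance would be shown to the designer alongside the agent's suggestion, roughly: "the agent proposed this content because it resembles this training example."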
Related papers
- Explaining Explaining [0.882727051273924]
Explanation is key to people having confidence in high-stakes AI systems.
Machine-learning-based systems cannot explain their decisions because they are usually black boxes.
We describe a hybrid approach to developing cognitive agents.
arXiv Detail & Related papers (2024-09-26T16:55:44Z)
- Combining Cognitive and Generative AI for Self-explanation in Interactive AI Agents [1.1259354267881174]
This study investigates the convergence of cognitive AI and generative AI for self-explanation in interactive AI agents such as VERA.
From a cognitive AI viewpoint, we endow VERA with a functional model of its own design, knowledge, and reasoning represented in the Task-Method-Knowledge (TMK) language.
From the perspective of generative AI, we use ChatGPT, LangChain, and Chain-of-Thought to answer user questions based on the VERA TMK model.
arXiv Detail & Related papers (2024-07-25T18:46:11Z)
- Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z)
- Towards Reconciling Usability and Usefulness of Explainable AI Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z)
- Explainability Via Causal Self-Talk [9.149689942389923]
Explaining the behavior of AI systems is an important problem that, in practice, is generally avoided.
We describe an effective way to satisfy all the desiderata: train the AI system to build a causal model of itself.
We implement this method in a simulated 3D environment, and show how it enables agents to generate faithful and semantically meaningful explanations.
arXiv Detail & Related papers (2022-11-17T23:17:01Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z)
- Who is this Explanation for? Human Intelligence and Knowledge Graphs for eXplainable AI [0.0]
We focus on the contributions that Human Intelligence can bring to eXplainable AI.
We call for a better interplay between Knowledge Representation and Reasoning, Social Sciences, Human Computation and Human-Machine Cooperation research.
arXiv Detail & Related papers (2020-05-27T10:47:15Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as anchoring to the model's judgment and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
This list was automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences arising from its use.