JEDAI Explains Decision-Making AI
- URL: http://arxiv.org/abs/2111.00585v1
- Date: Sun, 31 Oct 2021 20:18:45 GMT
- Title: JEDAI Explains Decision-Making AI
- Authors: Trevor Angle, Naman Shah, Pulkit Verma, Siddharth Srivastava
- Abstract summary: JEDAI helps users create high-level, intuitive plans while ensuring that they will be executable by the robot.
It also provides users customized explanations about errors and helps improve their understanding of AI planning.
- Score: 9.581605678437032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents JEDAI, an AI system designed for outreach and educational
efforts aimed at non-AI experts. JEDAI features a novel synthesis of research
ideas from integrated task and motion planning and explainable AI. JEDAI helps
users create high-level, intuitive plans while ensuring that they will be
executable by the robot. It also provides users customized explanations about
errors and helps improve their understanding of AI planning as well as the
limits and capabilities of the underlying robot system.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- The AI-DEC: A Card-based Design Method for User-centered AI Explanations [20.658833770179903]
We develop a design method, called AI-DEC, that defines four dimensions of AI explanations.
We evaluate this method through co-design sessions with workers in healthcare, finance, and management industries.
We discuss the implications of using the AI-DEC for the user-centered design of AI explanations in real-world systems.
arXiv Detail & Related papers (2024-05-26T22:18:38Z)
- LeanAI: A method for AEC practitioners to effectively plan AI implementations [1.213096549055645]
Despite the enthusiasm regarding the use of AI, 85% of current big data projects fail.
One of the main reasons for AI project failures in the AEC industry is the disconnect between those who plan or decide to use AI and those who implement it.
This work introduces the LeanAI method, which delineates what AI should solve, what it can solve, and what it will solve.
arXiv Detail & Related papers (2023-06-29T09:18:11Z)
- End-User Development for Artificial Intelligence: A Systematic Literature Review [2.347942013388615]
End-User Development (EUD) can allow people to create, customize, or adapt AI-based systems to their own needs.
This paper presents a literature review that aims to shed light on the current landscape of EUD for AI systems.
arXiv Detail & Related papers (2023-04-14T09:57:36Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Responsible-AI-by-Design: a Pattern Collection for Designing Responsible AI Systems [12.825892132103236]
Many ethical regulations, principles, and guidelines for responsible AI have been issued recently.
This paper identifies a missing element: system-level guidance on how to design the architecture of responsible AI systems.
We present a summary of design patterns that can be embedded into AI systems as product features to contribute to responsible-AI-by-design.
arXiv Detail & Related papers (2022-03-02T07:30:03Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and come closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.