A Knowledge Driven Approach to Adaptive Assistance Using Preference
Reasoning and Explanation
- URL: http://arxiv.org/abs/2012.02904v1
- Date: Sat, 5 Dec 2020 00:18:43 GMT
- Title: A Knowledge Driven Approach to Adaptive Assistance Using Preference
Reasoning and Explanation
- Authors: Jason R. Wilson, Leilani Gilpin, Irina Rabkina
- Abstract summary: We propose that the robot use Analogical Theory of Mind to infer what the user is trying to do.
If the user is unsure or confused, the robot provides the user with an explanation.
- Score: 3.8673630752805432
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a need for socially assistive robots (SARs) to provide transparency
in their behavior by explaining their reasoning. Additionally, the reasoning
and explanation should represent the user's preferences and goals. To work
towards satisfying this need for interpretable reasoning and representations,
we propose that the robot use Analogical Theory of Mind to infer what the user
is trying to do and use the Hint Engine to find appropriate assistance based on
that inference. If the user is unsure or confused, the robot
provides the user with an explanation, generated by the Explanation
Synthesizer. The explanation helps the user understand what the robot inferred
about the user's preferences and why the robot decided to provide the
assistance it gave. A knowledge-driven approach provides transparency to
reasoning about preferences, assistance, and explanations, thereby facilitating
the incorporation of user feedback and allowing the robot to learn and adapt to
the user.
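As a rough illustration of the pipeline the abstract describes, the sketch below wires together three components: a theory-of-mind module that infers the user's goal from observed behavior, a hint engine that selects assistance for that goal, and an explanation synthesizer invoked only when the user seems confused. All class and method names (AnalogicalToM, HintEngine.find_assistance, ExplanationSynthesizer.explain) and the dictionary-based knowledge are hypothetical placeholders for illustration, not the paper's implementation.

```python
from dataclasses import dataclass


@dataclass
class Assistance:
    action: str
    inferred_goal: str


class AnalogicalToM:
    """Stand-in for Analogical Theory of Mind: infer the user's goal from observed behavior."""

    def __init__(self, cases):
        # cases: mapping from observed behavior to the goal it is evidence for
        self.cases = cases

    def infer_goal(self, observed_behavior):
        # A real system would perform analogical retrieval over structured cases;
        # a dictionary lookup keeps this sketch self-contained.
        return self.cases.get(observed_behavior, "an unknown goal")


class HintEngine:
    """Select assistance appropriate to the inferred goal."""

    def __init__(self, hints):
        # hints: mapping from goal to the assistance to offer
        self.hints = hints

    def find_assistance(self, goal):
        return Assistance(action=self.hints.get(goal, "offer general help"),
                          inferred_goal=goal)


class ExplanationSynthesizer:
    """Explain what was inferred about the user and why this assistance was chosen."""

    def explain(self, assistance):
        return (f"I inferred that you are trying to {assistance.inferred_goal}, "
                f"so I decided to {assistance.action}.")


def assist(observed_behavior, user_is_confused, tom, hint_engine, synthesizer):
    goal = tom.infer_goal(observed_behavior)
    assistance = hint_engine.find_assistance(goal)
    # Only generate an explanation when the user appears unsure or confused.
    explanation = synthesizer.explain(assistance) if user_is_confused else None
    return assistance, explanation


# Example usage with toy knowledge (hypothetical domain).
tom = AnalogicalToM(cases={"opening the pill organizer": "take your morning medication"})
engine = HintEngine(hints={"take your morning medication": "remind you which pills are due now"})
assistance, explanation = assist("opening the pill organizer", user_is_confused=True,
                                 tom=tom, hint_engine=engine,
                                 synthesizer=ExplanationSynthesizer())
print(assistance.action)
print(explanation)
```

Because the goal and hint knowledge live in explicit, inspectable structures, user feedback could in principle be incorporated by editing those entries, which is the kind of transparency and adaptation the abstract argues a knowledge-driven approach enables.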
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Conceptualizing the Relationship between AI Explanations and User Agency [0.9051087836811617]
We analyze the relationship between agency and explanations through a user-centric lens through case studies and thought experiments.
We find that explanation serves as one of several possible first steps toward agency by allowing the user to convert forethought into outcome more effectively in future interactions.
arXiv Detail & Related papers (2023-12-05T23:56:05Z)
- What Matters to You? Towards Visual Representation Alignment for Robot Learning [81.30964736676103]
When operating in service of people, robots need to optimize rewards aligned with end-user preferences.
We propose Representation-Aligned Preference-based Learning (RAPL), a method for solving the visual representation alignment problem.
arXiv Detail & Related papers (2023-10-11T23:04:07Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Evaluating Human-like Explanations for Robot Actions in Reinforcement Learning Scenarios [1.671353192305391]
We make use of human-like explanations built from the probability of successfully completing the goal that an autonomous robot shows after performing an action.
These explanations are intended to be understood by people who have no or very little experience with artificial intelligence methods.
arXiv Detail & Related papers (2022-07-07T10:40:24Z)
- Understanding a Robot's Guiding Ethical Principles via Automatically Generated Explanations [4.393037165265444]
We build upon an existing ethical framework to allow users to make suggestions about plans and receive automatically generated contrastive explanations.
Results of a user study indicate that the generated explanations help humans to understand the ethical principles that underlie a robot's plan.
arXiv Detail & Related papers (2022-06-20T22:55:00Z)
- Explainable Predictive Process Monitoring: A User Evaluation [62.41400549499849]
Explainability is motivated by the lack of transparency of black-box Machine Learning approaches.
We carry out a user evaluation of explanation approaches for Predictive Process Monitoring.
arXiv Detail & Related papers (2022-02-15T22:24:21Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Towards Personalized Explanation of Robot Path Planning via User Feedback [1.7231251035416644]
We present a system for generating personalized explanations of robot path planning via user feedback.
The system is capable of detecting and resolving any preference conflict via user interaction.
arXiv Detail & Related papers (2020-11-01T15:10:43Z)
- Towards Transparent Robotic Planning via Contrastive Explanations [1.7231251035416644]
We formalize the notion of contrastive explanations for robotic planning policies based on Markov decision processes.
We present methods for the automated generation of contrastive explanations with three key factors: selectiveness, constrictiveness, and responsibility.
arXiv Detail & Related papers (2020-03-16T19:44:31Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)