Leveraging Rationales to Improve Human Task Performance
- URL: http://arxiv.org/abs/2002.04202v1
- Date: Tue, 11 Feb 2020 04:51:35 GMT
- Title: Leveraging Rationales to Improve Human Task Performance
- Authors: Devleena Das, Sonia Chernova
- Abstract summary: Given a computational system whose performance exceeds that of its human user, can explainable AI capabilities be leveraged to improve the performance of the human?
We introduce the Rationale-Generating Algorithm, an automated technique for generating rationales for utility-based computational methods.
Results show that our approach produces rationales that lead to statistically significant improvement in human task performance.
- Score: 15.785125079811902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) systems across many application areas are increasingly
demonstrating performance that is beyond that of humans. In response to the
proliferation of such models, the field of Explainable AI (XAI) has sought to
develop techniques that enhance the transparency and interpretability of
machine learning methods. In this work, we consider a question not previously
explored within the XAI and ML communities: Given a computational system whose
performance exceeds that of its human user, can explainable AI capabilities be
leveraged to improve the performance of the human? We study this question in
the context of the game of Chess, for which computational game engines that
surpass the performance of the average player are widely available. We
introduce the Rationale-Generating Algorithm, an automated technique for
generating rationales for utility-based computational methods, which we
evaluate with a multi-day user study against two baselines. The results show
that our approach produces rationales that lead to statistically significant
improvement in human task performance, demonstrating that rationales
automatically generated from an AI's internal task model can be used not only
to explain what the system is doing, but also to instruct the user and
ultimately improve their task performance.
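The abstract describes generating rationales from a utility-based model's internal scores. A minimal sketch of that idea in Python, assuming a hypothetical `generate_rationale` function, an illustrative `margin` threshold, and a toy utility table rather than the authors' chess-engine implementation:

```python
# Sketch of rationale generation for a utility-based decision, inspired by
# the paper's premise; names and thresholds are illustrative assumptions.

def generate_rationale(action_utilities, margin=0.5):
    """Explain the top-utility action by contrasting it with the runner-up.

    action_utilities: dict mapping action name -> utility score.
    Returns (best_action, rationale_string).
    """
    ranked = sorted(action_utilities.items(), key=lambda kv: kv[1], reverse=True)
    best, best_u = ranked[0]
    if len(ranked) == 1:
        return best, f"{best} is the only available action."
    second, second_u = ranked[1]
    gap = best_u - second_u
    if gap >= margin:
        reason = (f"{best} is clearly preferred: its utility ({best_u:.2f}) "
                  f"exceeds the next-best option {second} ({second_u:.2f}) "
                  f"by {gap:.2f}.")
    else:
        reason = (f"{best} is narrowly preferred over {second} "
                  f"({best_u:.2f} vs {second_u:.2f}); both are reasonable.")
    return best, reason

move, why = generate_rationale({"Nf3": 1.8, "e4": 1.1, "d4": 0.9})
```

The rationale contrasts the chosen action with the runner-up, which mirrors the paper's goal of using the AI's internal task model to instruct the user rather than only to explain a decision.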
Related papers
- A Human-Centered Approach for Improving Supervised Learning [0.44378250612683995]
This paper shows how we can strike a balance between performance, time, and resource constraints.
Another goal of this research is to make Ensembles more explainable and intelligible using the Human-Centered approach.
arXiv Detail & Related papers (2024-10-14T10:27:14Z)
- Enhancing Feature Selection and Interpretability in AI Regression Tasks Through Feature Attribution [38.53065398127086]
This study investigates the potential of feature attribution methods to filter out uninformative features in input data for regression problems.
We introduce a feature selection pipeline that combines Integrated Gradients with k-means clustering to select an optimal set of variables from the initial data space.
To validate the effectiveness of this approach, we apply it to a real-world industrial problem - blade vibration analysis in the development process of turbo machinery.
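The pipeline described above (Integrated Gradients followed by k-means clustering of attributions) can be sketched as follows. The toy linear model, the 1-D two-cluster k-means, and all names are illustrative assumptions, not the paper's implementation:

```python
# Hedged sketch: compute per-feature Integrated Gradients attributions for a
# simple differentiable model, then cluster mean |attribution| with a tiny
# 1-D k-means (k=2) and keep the high-attribution cluster.
import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=50):
    """Approximate IG: (x - baseline) times the average gradient on the path."""
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.stack([f_grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

def kmeans_1d(values, iters=20):
    """Tiny 1-D k-means with k=2; returns a boolean mask of the upper cluster."""
    lo, hi = values.min(), values.max()
    for _ in range(iters):
        assign = np.abs(values - hi) < np.abs(values - lo)
        lo = values[~assign].mean() if (~assign).any() else lo
        hi = values[assign].mean() if assign.any() else hi
    return assign

# Toy regression model y = w . x, so the gradient is simply the constant w.
w = np.array([3.0, 0.1, 2.5, 0.05])
grad = lambda x: w
X = np.random.default_rng(0).normal(size=(100, 4))
attr = np.stack([integrated_gradients(grad, x, np.zeros(4)) for x in X])
selected = kmeans_1d(np.abs(attr).mean(axis=0))  # mask of informative features
```

Features whose mean absolute attribution falls in the upper cluster are kept; the rest are treated as uninformative, which matches the filtering role the summary describes.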
arXiv Detail & Related papers (2024-09-25T09:50:51Z)
- Offline Imitation Learning Through Graph Search and Retrieval [57.57306578140857]
Imitation learning is a powerful machine learning approach for robots to acquire manipulation skills.
We propose GSR, a simple yet effective algorithm that learns from suboptimal demonstrations through Graph Search and Retrieval.
GSR can achieve a 10% to 30% higher success rate and over 30% higher proficiency compared to baselines.
arXiv Detail & Related papers (2024-07-22T06:12:21Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar to, but potentially even more practical than, those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning must be near-optimal, and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be applied to cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Strategies to exploit XAI to improve classification systems [0.0]
XAI aims to provide insights into the decision-making process of AI models, allowing users to understand their results beyond their decisions.
Most XAI literature focuses on how to explain an AI system, while less attention has been given to how XAI methods can be exploited to improve an AI system.
arXiv Detail & Related papers (2023-06-09T10:38:26Z)
- Towards Explainable Artificial Intelligence in Banking and Financial Services [0.0]
We study and analyze the recent work done in Explainable Artificial Intelligence (XAI) methods and tools.
We introduce a novel XAI process, which facilitates producing explainable models while maintaining a high level of learning performance.
We develop a digital dashboard to facilitate interacting with the algorithm results.
arXiv Detail & Related papers (2021-12-14T08:02:13Z)
- Skill Preferences: Learning to Extract and Execute Robotic Skills from Human Feedback [82.96694147237113]
We present Skill Preferences, an algorithm that learns a model over human preferences and uses it to extract human-aligned skills from offline data.
We show that SkiP enables a simulated kitchen robot to solve complex multi-step manipulation tasks.
arXiv Detail & Related papers (2021-08-11T18:04:08Z)
- A Comparative Approach to Explainable Artificial Intelligence Methods in Application to High-Dimensional Electronic Health Records: Examining the Usability of XAI [0.0]
XAI aims to produce a demonstrative factor of trust, which for human subjects is achieved through communicative means.
The ideology behind trusting a machine to tend towards the livelihood of a human poses an ethical conundrum.
XAI methods produce visualizations of each feature's contribution towards a given model's output at both a local and a global level.
arXiv Detail & Related papers (2021-03-08T18:15:52Z)
- A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale Machine Learning aims to learn patterns from big data with comparable performance efficiently.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)
- Learning to Complement Humans [67.38348247794949]
A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams.
arXiv Detail & Related papers (2020-05-01T20:00:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.