LIMEADE: From AI Explanations to Advice Taking
- URL: http://arxiv.org/abs/2003.04315v4
- Date: Wed, 12 Oct 2022 22:45:19 GMT
- Title: LIMEADE: From AI Explanations to Advice Taking
- Authors: Benjamin Charles Germain Lee, Doug Downey, Kyle Lo, Daniel S. Weld
- Abstract summary: We introduce LIMEADE, the first framework that translates both positive and negative advice into an update to an arbitrary, underlying opaque model.
We show our method improves accuracy compared to a rigorous baseline on the image classification domains.
For the text modality, we apply our framework to a neural recommender system for scientific papers on a public website.
- Score: 34.581205516506614
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research in human-centered AI has shown the benefits of systems that can
explain their predictions. Methods that allow an AI to take advice from humans
in response to explanations are similarly useful. While both capabilities are
well-developed for transparent learning models (e.g., linear models and
GA$^2$Ms), and recent techniques (e.g., LIME and SHAP) can generate
explanations for opaque models, little attention has been given to advice
methods for opaque models. This paper introduces LIMEADE, the first general
framework that translates both positive and negative advice (expressed using
high-level vocabulary such as that employed by post-hoc explanations) into an
update to an arbitrary, underlying opaque model. We demonstrate the generality
of our approach with case studies on seventy real-world models across two broad
domains: image classification and text recommendation. We show our method
improves accuracy compared to a rigorous baseline on the image classification
domains. For the text modality, we apply our framework to a neural recommender
system for scientific papers on a public website; our user study shows that our
framework leads to significantly higher perceived user control, trust, and
satisfaction.
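The core idea, translating advice phrased in an explanation vocabulary into an update to an opaque model, can be pictured with a small sketch. The snippet below is a minimal illustration under assumptions of our own: a bag-of-words text classifier, a pseudo-labeling update, and hypothetical helper names. It is not the paper's exact procedure.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier  # stand-in for an arbitrary opaque model

def advice_to_pseudo_examples(advice, vectorizer, n_copies=5):
    """Turn (term, +1/-1) advice over interpretable features into
    pseudo-labeled bag-of-words rows. The duplication-based weighting is
    an assumption for illustration, not LIMEADE's exact update rule."""
    X_rows, y_rows = [], []
    for term, sign in advice:
        x = vectorizer.transform([term]).toarray()[0]
        label = 1 if sign > 0 else 0
        for _ in range(n_copies):  # repeat rows to weight the advice
            X_rows.append(x)
            y_rows.append(label)
    return np.array(X_rows), np.array(y_rows)

# Toy corpus and an opaque classifier.
docs = ["great insightful paper", "boring derivative work", "novel method strong results"]
labels = [1, 0, 1]
vec = CountVectorizer()
X = vec.fit_transform(docs).toarray()
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, labels)

# Advice phrased in the explanation vocabulary: "novel" is positive, "boring" is negative.
advice = [("novel", +1), ("boring", -1)]
X_adv, y_adv = advice_to_pseudo_examples(advice, vec)

# Update the opaque model by retraining on the original data plus advice-derived examples
# (a simple stand-in for an incremental update).
model.fit(np.vstack([X, X_adv]), np.concatenate([labels, y_adv]))
```

The point of the sketch is only the shape of the interface the abstract describes: advice over high-level explanation features on one side, an update to an arbitrary opaque model on the other.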
Related papers
- Advancing Post Hoc Case Based Explanation with Feature Highlighting [0.8287206589886881]
We propose two general algorithms which can isolate multiple clear feature parts in a test image, and then connect them to the explanatory cases found in the training data.
Results demonstrate that the proposed approach appropriately calibrates a user's feelings of 'correctness' for ambiguous classifications in real-world data.
arXiv Detail & Related papers (2023-11-06T16:34:48Z)
- A Sentence Speaks a Thousand Images: Domain Generalization through Distilling CLIP with Language Guidance [41.793995960478355]
We propose a novel approach for domain generalization that leverages recent advances in large vision-language models.
The key technical contribution is a new type of regularization that requires the student's learned image representations to be close to the teacher's learned text representations.
We evaluate our proposed method, dubbed RISE, on various benchmark datasets and show that it outperforms several state-of-the-art domain generalization methods.
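As a rough picture of the regularization described above, the sketch below pulls a student's image features toward a frozen teacher's text embeddings with a cosine-distance penalty; the function name, the loss weight, and the random features are assumptions for illustration, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def distill_to_text_loss(student_img_feats, teacher_text_feats):
    """Cosine-distance regularizer between student image features and a frozen
    teacher's text features, both of shape (batch, dim). Illustrative only."""
    s = F.normalize(student_img_feats, dim=-1)
    t = F.normalize(teacher_text_feats, dim=-1).detach()  # teacher stays frozen
    return (1.0 - (s * t).sum(dim=-1)).mean()

# Toy usage: batch of 4 samples with 512-dim features.
student_feats = torch.randn(4, 512, requires_grad=True)
teacher_text = torch.randn(4, 512)   # e.g. text embeddings of class descriptions
task_loss = torch.tensor(0.7)        # placeholder for the usual classification loss
total = task_loss + 0.5 * distill_to_text_loss(student_feats, teacher_text)
total.backward()
```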
arXiv Detail & Related papers (2023-09-21T23:06:19Z)
- GPT4Image: Can Large Pre-trained Models Help Vision Models on Perception Tasks? [51.22096780511165]
We present a new learning paradigm in which the knowledge extracted from large pre-trained models is utilized to help models such as CNNs and ViTs learn enhanced representations.
We feed detailed descriptions into a pre-trained encoder to extract text embeddings with rich semantic information that encodes the content of images.
arXiv Detail & Related papers (2023-06-01T14:02:45Z)
- Unleashing Text-to-Image Diffusion Models for Visual Perception [84.41514649568094]
VPD (Visual Perception with a pre-trained diffusion model) is a new framework that exploits the semantic information of a pre-trained text-to-image diffusion model in visual perception tasks.
We show that models can be adapted faster to downstream visual perception tasks using the proposed VPD.
arXiv Detail & Related papers (2023-03-03T18:59:47Z)
- ExAgt: Expert-guided Augmentation for Representation Learning of Traffic Scenarios [8.879790406465558]
This paper presents ExAgt, a novel method to include expert knowledge for augmenting traffic scenarios.
The ExAgt method is applied in two state-of-the-art cross-view prediction methods.
Results show that the ExAgt method improves representation learning compared to using only standard augmentations.
arXiv Detail & Related papers (2022-07-18T13:55:48Z)
- DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting [91.56988987393483]
We present a new framework for dense prediction by implicitly and explicitly leveraging the pre-trained knowledge from CLIP.
Specifically, we convert the original image-text matching problem in CLIP to a pixel-text matching problem and use the pixel-text score maps to guide the learning of dense prediction models.
Our method is model-agnostic, which can be applied to arbitrary dense prediction systems and various pre-trained visual backbones.
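The pixel-text matching described above amounts to scoring every spatial location of a visual feature map against each class-prompt text embedding. The sketch below shows one way such score maps can be computed; the shapes, temperature, and names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def pixel_text_score_maps(pixel_feats, text_feats, temperature=0.07):
    """pixel_feats: (B, D, H, W) dense visual features.
    text_feats: (K, D), one embedding per class prompt.
    Returns cosine-similarity score maps of shape (B, K, H, W)."""
    p = F.normalize(pixel_feats, dim=1)
    t = F.normalize(text_feats, dim=1)
    return torch.einsum("bdhw,kd->bkhw", p, t) / temperature

# Toy usage: 2 images, 512-dim features on a 7x7 grid, 5 class prompts.
pixel_feats = torch.randn(2, 512, 7, 7)
text_feats = torch.randn(5, 512)
maps = pixel_text_score_maps(pixel_feats, text_feats)            # (2, 5, 7, 7)
seg_logits = F.interpolate(maps, size=(224, 224), mode="bilinear",
                           align_corners=False)                  # guide a dense head
```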
arXiv Detail & Related papers (2021-12-02T18:59:32Z)
- A Practical Tutorial on Explainable AI Techniques [5.671062637797752]
This tutorial is meant to be the go-to handbook for any audience with a computer science background.
It aims to provide intuitive insight into machine learning models, accompanied by straightforward, fast, and intuitive explanations out of the box.
arXiv Detail & Related papers (2021-11-13T17:47:31Z)
- A Survey on Neural Recommendation: From Collaborative Filtering to Content and Context Enriched Recommendation [70.69134448863483]
Research in recommendation has shifted to inventing new recommender models based on neural networks.
In recent years, we have witnessed significant progress in developing neural recommender models.
arXiv Detail & Related papers (2021-04-27T08:03:52Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.