Visual correspondence-based explanations improve AI robustness and
human-AI team accuracy
- URL: http://arxiv.org/abs/2208.00780v5
- Date: Thu, 31 Aug 2023 02:27:48 GMT
- Title: Visual correspondence-based explanations improve AI robustness and
human-AI team accuracy
- Authors: Giang Nguyen, Mohammad Reza Taesiri, Anh Nguyen
- Abstract summary: We propose two novel architectures of self-interpretable image classifiers that first explain, and then predict.
Our models consistently improve (by 1 to 4 points) on out-of-distribution (OOD) datasets.
For the first time, we show that it is possible to achieve complementary human-AI team accuracy (i.e., team accuracy higher than that of either the AI alone or the human alone) in ImageNet and CUB image classification tasks.
- Score: 7.969008943697552
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explaining artificial intelligence (AI) predictions is increasingly important
and even imperative in many high-stakes applications where humans are the
ultimate decision-makers. In this work, we propose two novel architectures of
self-interpretable image classifiers that first explain, and then predict (as
opposed to post-hoc explanations) by harnessing the visual correspondences
between a query image and exemplars. Our models consistently improve (by 1 to 4
points) on out-of-distribution (OOD) datasets while performing marginally worse
(by 1 to 2 points) on in-distribution tests than ResNet-50 and a $k$-nearest
neighbor classifier (kNN). Via a large-scale, human study on ImageNet and CUB,
our correspondence-based explanations are found to be more useful to users than
kNN explanations. Our explanations help users more accurately reject AI's wrong
decisions than all other tested methods. Interestingly, for the first time, we
show that it is possible to achieve complementary human-AI team accuracy (i.e.,
team accuracy higher than that of either the AI alone or the human alone) in ImageNet and CUB image
classification tasks.
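Below is a minimal, hedged sketch of the explain-then-predict idea: retrieve exemplars for a query image, score them by patch-level visual correspondence, and let the best-matching exemplars both justify and decide the prediction. The frozen ResNet-50 patch features, cosine matching, and majority vote are illustrative assumptions, not the paper's exact architectures.

```python
# A minimal sketch of "explain, then predict" via visual correspondence.
# Assumptions (not from the paper): frozen ResNet-50 patch features,
# cosine matching, and a majority vote over the best-matching exemplars.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
trunk = torch.nn.Sequential(*list(backbone.children())[:-2]).eval()  # keep the spatial map

@torch.no_grad()
def patch_features(img_batch):
    """L2-normalised per-patch embeddings, shape (B, H*W, C)."""
    fmap = trunk(img_batch)                      # (B, C, H, W)
    feats = fmap.flatten(2).transpose(1, 2)      # (B, H*W, C)
    return F.normalize(feats, dim=-1)

@torch.no_grad()
def correspondence_score(query_patches, exemplar_patches):
    """Sum of the best cosine match for each query patch in the exemplar."""
    sim = query_patches @ exemplar_patches.T     # (P_q, P_e) patch-to-patch cosines
    return sim.max(dim=1).values.sum().item()

@torch.no_grad()
def explain_then_predict(query_img, exemplar_imgs, exemplar_labels, k=20):
    """Rank exemplars by correspondence, show the top matches, vote on the label."""
    q = patch_features(query_img.unsqueeze(0))[0]
    scores = [correspondence_score(q, patch_features(x.unsqueeze(0))[0])
              for x in exemplar_imgs[:k]]
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    top = order[:5]                              # exemplars shown to the user as the explanation
    votes = [exemplar_labels[i] for i in top]
    return max(set(votes), key=votes.count), top
```

In this reading, the top-ranked exemplars and their matched patches are the explanation, and the prediction is derived from them rather than produced first and explained afterwards.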
Related papers
- Zero-Shot Detection of AI-Generated Images [54.01282123570917]
We propose a zero-shot entropy-based detector (ZED) to detect AI-generated images.
Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images.
ZED achieves an average improvement of more than 3% over the SoTA in terms of accuracy.
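A rough sketch of that zero-shot, surprisal-based recipe follows: score an image by its surprisal under a probabilistic model of real images and flag images whose surprisal is atypical. The real-image likelihood model and its log_prob_per_pixel interface are hypothetical placeholders, not the ZED authors' released detector.

```python
# Hedged sketch of an entropy/surprisal-based zero-shot detector.
# `real_image_model` and `log_prob_per_pixel` are hypothetical placeholders.
import numpy as np

class SurprisalDetector:
    def __init__(self, real_image_model, calibration_scores):
        self.model = real_image_model
        # Typical surprisal range observed on a held-out set of real images.
        self.lo, self.hi = np.percentile(calibration_scores, [1, 99])

    def surprisal(self, image):
        """Mean negative log-likelihood (bits per pixel) under the real-image model."""
        return float(-self.model.log_prob_per_pixel(image).mean())

    def is_ai_generated(self, image):
        """Zero-shot decision: surprisal outside the real-image range => likely synthetic."""
        s = self.surprisal(image)
        return s < self.lo or s > self.hi
```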
arXiv Detail & Related papers (2024-09-24T08:46:13Z)
- Enhanced Prototypical Part Network (EPPNet) For Explainable Image Classification Via Prototypes [16.528373143163275]
We introduce the Enhanced Prototypical Part Network (EPPNet) for image classification.
EPPNet achieves strong performance while discovering relevant prototypes that can be used to explain the classification results.
Our evaluations on the CUB-200-2011 dataset show that the EPPNet outperforms state-of-the-art xAI-based methods.
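EPPNet builds on prototypical-part classifiers; the sketch below shows a generic forward pass of that family (patch-to-prototype similarities feeding a linear classifier). The 2048-channel trunk, cosine similarity, and layer sizes are illustrative assumptions, not EPPNet's published enhancements.

```python
# Generic ProtoPNet-style forward pass, for illustration only.
# Assumes a conv trunk producing a 2048-channel feature map (e.g. ResNet-50).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtoPartClassifier(nn.Module):
    def __init__(self, trunk, num_prototypes=200, proto_dim=128, num_classes=200):
        super().__init__()
        self.trunk = trunk                                    # conv trunk -> (B, 2048, H, W)
        self.add_on = nn.Conv2d(2048, proto_dim, kernel_size=1)
        self.prototypes = nn.Parameter(torch.rand(num_prototypes, proto_dim))
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, x):
        fmap = self.add_on(self.trunk(x))                               # (B, D, H, W)
        patches = F.normalize(fmap.flatten(2).transpose(1, 2), dim=-1)  # (B, HW, D)
        protos = F.normalize(self.prototypes, dim=-1)                   # (P, D)
        sim = patches @ protos.T                              # (B, HW, P) patch-prototype cosines
        proto_scores = sim.max(dim=1).values                  # best patch for each prototype
        return self.classifier(proto_scores), proto_scores    # logits + per-prototype evidence
```

The per-prototype evidence is what makes such models self-interpretable: each logit can be traced back to prototype activations on specific image patches.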
arXiv Detail & Related papers (2024-08-08T17:26:56Z)
- A Sanity Check for AI-generated Image Detection [49.08585395873425]
We present a sanity check on whether the task of AI-generated image detection has been solved.
To quantify the generalization of existing methods, we evaluate 9 off-the-shelf AI-generated image detectors on Chameleon dataset.
We propose AIDE (AI-generated Image DEtector with Hybrid Features), which leverages multiple experts to simultaneously extract visual artifacts and noise patterns.
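A hedged sketch of that hybrid-feature idea: one expert branch looks at a high-frequency noise residual, another at semantic CNN features, and a small head fuses both. Both branches and the fusion head here are illustrative assumptions, not AIDE's released components.

```python
# Illustrative two-expert detector: noise residual + semantic features, fused.
# Not AIDE's architecture; all layer choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

class HybridDetector(nn.Module):
    def __init__(self):
        super().__init__()
        semantic = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
        semantic.fc = nn.Identity()                  # semantic expert -> 2048-d features
        self.semantic = semantic
        self.noise = nn.Sequential(                  # noise expert on the high-pass residual
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fuse = nn.Linear(2048 + 32, 2)          # real vs. AI-generated

    def forward(self, x):
        residual = x - F.avg_pool2d(x, 3, stride=1, padding=1)   # crude high-pass filter
        feats = torch.cat([self.semantic(x), self.noise(residual)], dim=1)
        return self.fuse(feats)
```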
arXiv Detail & Related papers (2024-06-27T17:59:49Z)
- Understanding and Evaluating Human Preferences for AI Generated Images with Instruction Tuning [58.41087653543607]
We first establish a novel Image Quality Assessment (IQA) database for AIGIs, termed AIGCIQA2023+.
This paper presents a MINT-IQA model to evaluate and explain human preferences for AIGIs from Multi-perspectives with INstruction Tuning.
arXiv Detail & Related papers (2024-05-12T17:45:11Z)
- Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
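For context, the antonym-prompt recipe that CLIP-based blind IQA methods commonly start from can be sketched as follows. This illustrates the baseline being improved upon, not the paper's multi-modal prompt-learning method, and the prompt strings are assumptions.

```python
# Antonym-prompt quality scoring with CLIP: probability mass on the "good" prompt.
# Prompt strings and checkpoint are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_quality_score(image: Image.Image) -> float:
    """Return a 0-1 quality score: probability mass on the 'good photo' prompt."""
    inputs = processor(text=["a good photo.", "a bad photo."],
                       images=image, return_tensors="pt", padding=True)
    logits = model(**inputs).logits_per_image       # (1, 2) image-text similarity logits
    return logits.softmax(dim=-1)[0, 0].item()
```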
arXiv Detail & Related papers (2024-04-23T11:45:32Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Advancing Post Hoc Case Based Explanation with Feature Highlighting [0.8287206589886881]
We propose two general algorithms which can isolate multiple clear feature parts in a test image, and then connect them to the explanatory cases found in the training data.
Results demonstrate that the proposed approach appropriately calibrates a user's feelings of 'correctness' for ambiguous classifications in real-world data.
arXiv Detail & Related papers (2023-11-06T16:34:48Z)
- PCNN: Probable-Class Nearest-Neighbor Explanations Improve Fine-Grained Image Classification Accuracy for AIs and Humans [7.655550161309149]
Nearest neighbors (NN) are traditionally used to compute final decisions.
In this paper, we show a novel utility of nearest neighbors: To improve predictions of a frozen, pretrained image classifier C.
Our method consistently improves fine-grained image classification accuracy on CUB-200, Cars-196, and Dogs-120.
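A hedged sketch of that idea: keep the pretrained classifier C frozen, retrieve exemplar embeddings from each of its top-K probable classes, and re-weight those class scores by how well the query matches the exemplars. The embedding space, blending rule, and alpha weight are illustrative assumptions rather than PCNN's published design.

```python
# Re-rank a frozen classifier's top classes with nearest-neighbour evidence.
# The blending rule and alpha weight are illustrative assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def rerank_with_neighbors(logits, query_feat, class_exemplar_feats, top_k=5, alpha=0.5):
    """
    logits: (num_classes,) scores from the frozen classifier C.
    query_feat: (D,) embedding of the query image.
    class_exemplar_feats: dict class_id -> (N_c, D) exemplar embeddings.
    """
    probs = logits.softmax(dim=-1)
    top_classes = probs.topk(top_k).indices.tolist()
    reranked = probs.clone()
    q = F.normalize(query_feat, dim=0)
    for c in top_classes:
        exemplars = F.normalize(class_exemplar_feats[c], dim=-1)   # (N_c, D)
        nn_sim = (exemplars @ q).max().item()                      # best exemplar match in class c
        reranked[c] = (1 - alpha) * probs[c] + alpha * nn_sim      # blend classifier and NN evidence
    return int(reranked.argmax()), reranked
```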
arXiv Detail & Related papers (2023-08-25T19:40:56Z)
- Foiling Explanations in Deep Neural Networks [0.0]
This paper uncovers a troubling property of explanation methods for image-based DNNs.
We demonstrate how explanations may be arbitrarily manipulated through the use of evolution strategies.
Our novel algorithm is successfully able to manipulate an image in a manner imperceptible to the human eye.
arXiv Detail & Related papers (2022-11-27T15:29:39Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- On Explainability in AI-Solutions: A Cross-Domain Survey [4.394025678691688]
In automatically deriving a system model, AI algorithms learn relations in data that are not detectable by humans.
The more complex a model, the more difficult it is for a human to understand the reasoning behind its decisions.
This work provides an extensive survey of the literature on this topic, which, in large part, consists of other surveys.
arXiv Detail & Related papers (2022-10-11T06:21:47Z)