Impact of Feedback Type on Explanatory Interactive Learning
- URL: http://arxiv.org/abs/2209.12476v1
- Date: Mon, 26 Sep 2022 07:33:54 GMT
- Title: Impact of Feedback Type on Explanatory Interactive Learning
- Authors: Misgina Tsighe Hagos, Kathleen M. Curran, Brian Mac Namee
- Abstract summary: Explanatory Interactive Learning (XIL) collects user feedback on visual model explanations to implement a Human-in-the-Loop (HITL) based interactive learning scenario.
We compare the effectiveness of two different user feedback types in image classification tasks.
We show that identifying and annotating spurious image features that a model finds salient yields better classification and explanation accuracy than user feedback that tells a model to focus on valid image features.
- Score: 4.039245878626345
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Explanatory Interactive Learning (XIL) collects user feedback on visual model
explanations to implement a Human-in-the-Loop (HITL) based interactive learning
scenario. Different user feedback types will have different impacts on user
experience and the cost associated with collecting feedback since different
feedback types involve different levels of image annotation. Although XIL has
been used to improve classification performance in multiple domains, the impact
of different user feedback types on model performance and explanation accuracy
is not well studied. To guide future XIL work we compare the effectiveness of
two different user feedback types in image classification tasks: (1)
instructing an algorithm to ignore certain spurious image features, and (2)
instructing an algorithm to focus on certain valid image features. We use
explanations from a Gradient-weighted Class Activation Mapping (GradCAM) based
XIL model to support both feedback types. We show that identifying and
annotating spurious image features that a model finds salient yields better
classification and explanation accuracy than user feedback that tells a model
to focus on valid image features.
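The two feedback types can be made concrete as explanation losses added to the usual classification objective. Below is a minimal PyTorch sketch in the spirit of right-for-the-right-reasons penalties applied to GradCAM saliency maps; the function names, the binary feedback mask, and the weight `lam` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gradcam(features, logits, target_class):
    """Gradient-weighted Class Activation Mapping (GradCAM).

    features: (B, C, H, W) activations of the last conv block,
              retained in the graph (requires_grad=True).
    logits:   (B, num_classes) model outputs.
    Returns a (B, H, W) saliency map normalized to [0, 1].
    """
    score = logits[torch.arange(logits.size(0)), target_class].sum()
    grads = torch.autograd.grad(score, features, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # channel importance
    cam = F.relu((weights * features).sum(dim=1))    # weighted activation map
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)

def xil_loss(logits, labels, cam, mask, feedback="ignore", lam=1.0):
    """Cross-entropy plus an explanation penalty driven by user feedback.

    mask is a (B, H, W) binary annotation, assumed resized to the CAM
    resolution (an illustrative convention, not from the paper).
    feedback="ignore": mask marks spurious regions; saliency there is penalized.
    feedback="focus":  mask marks valid regions; saliency elsewhere is penalized.
    """
    ce = F.cross_entropy(logits, labels)
    if feedback == "ignore":
        expl = (cam * mask).mean()
    else:
        expl = (cam * (1.0 - mask)).mean()
    return ce + lam * expl
```

Under this reading, the paper's finding is that the "ignore" variant, which only needs annotations of the spurious regions the model actually attends to, trains both a more accurate classifier and more faithful explanations than the "focus" variant.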
Related papers
- Improving Human-Object Interaction Detection via Virtual Image Learning [68.56682347374422]
Human-Object Interaction (HOI) detection aims to understand the interactions between humans and objects.
In this paper, we propose to alleviate the impact of such an unbalanced distribution via Virtual Image Learning (VIL).
A novel label-to-image approach, Multiple Steps Image Creation (MUSIC), is proposed to create a high-quality dataset that has a consistent distribution with real images.
arXiv Detail & Related papers (2023-08-04T10:28:48Z)
- Evaluating how interactive visualizations can assist in finding samples where and how computer vision models make mistakes [1.76602679361245]
We present two interactive visualizations in the context of Sprite, a system for creating Computer Vision (CV) models.
We study how these visualizations help Sprite's users identify (evaluate) and select (plan) images where a model is struggling and can lead to improved performance.
arXiv Detail & Related papers (2023-05-19T14:43:00Z)
- Learning Transferable Pedestrian Representation from Multimodal Information Supervision [174.5150760804929]
VAL-PAT is a novel framework that learns transferable representations to enhance various pedestrian analysis tasks with multimodal information.
We first perform pre-training on LUPerson-TA dataset, where each image contains text and attribute annotations.
We then transfer the learned representations to various downstream tasks, including person reID, person attribute recognition and text-based person search.
arXiv Detail & Related papers (2023-04-12T01:20:58Z)
- The Influences of Color and Shape Features in Visual Contrastive Learning [0.0]
The influence of individual image features (e.g., color and shape) on model performance remains ambiguous; this paper investigates it in the setting of visual contrastive learning.
Experimental results show that compared with supervised representations, contrastive representations tend to cluster with objects of similar color.
arXiv Detail & Related papers (2023-01-29T15:10:14Z)
- Unsupervised Feature Clustering Improves Contrastive Representation Learning for Medical Image Segmentation [18.75543045234889]
Self-supervised instance discrimination is an effective contrastive pretext task to learn feature representations and address limited medical image annotations.
We propose a new self-supervised contrastive learning method that uses unsupervised feature clustering to better select positive and negative image samples.
Our method outperforms state-of-the-art self-supervised contrastive techniques on these tasks.
arXiv Detail & Related papers (2022-11-15T22:54:29Z)
- Visual Perturbation-aware Collaborative Learning for Overcoming the Language Prior Problem [60.0878532426877]
We propose a novel collaborative learning scheme from the viewpoint of visual perturbation calibration.
Specifically, we devise a visual controller to construct two sorts of curated images with different perturbation extents.
The experimental results on two diagnostic VQA-CP benchmark datasets evidently demonstrate its effectiveness.
arXiv Detail & Related papers (2022-07-24T23:50:52Z)
- CAD: Co-Adapting Discriminative Features for Improved Few-Shot Classification [11.894289991529496]
Few-shot classification is a challenging problem that aims to learn a model that can adapt to unseen classes given a few labeled samples.
Recent approaches pre-train a feature extractor, and then fine-tune for episodic meta-learning.
We propose a strategy to cross-attend and re-weight discriminative features for few-shot classification.
arXiv Detail & Related papers (2022-03-25T06:14:51Z)
- Graph-based Person Signature for Person Re-Identifications [17.181807593574764]
We propose a new method to effectively aggregate detailed person descriptions (attributes labels) and visual features (body parts and global features) into a graph.
The graph is integrated into a multi-branch multi-task framework for person re-identification.
Our approach achieves results competitive with the state of the art and outperforms other attribute-based or mask-guided methods.
arXiv Detail & Related papers (2021-04-14T10:54:36Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Learning to Compose Hypercolumns for Visual Correspondence [57.93635236871264]
We introduce a novel approach to visual correspondence that dynamically composes effective features by leveraging relevant layers conditioned on the images to match.
The proposed method, dubbed Dynamic Hyperpixel Flow, learns to compose hypercolumn features on the fly by selecting a small number of relevant layers from a deep convolutional neural network.
arXiv Detail & Related papers (2020-07-21T04:03:22Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.