An Interactive Visualization Tool for Understanding Active Learning
- URL: http://arxiv.org/abs/2111.04936v1
- Date: Tue, 9 Nov 2021 03:33:26 GMT
- Title: An Interactive Visualization Tool for Understanding Active Learning
- Authors: Zihan Wang, Jialin Lu, Oliver Snow, Martin Ester
- Abstract summary: We present an interactive visualization tool to elucidate the training process of active learning.
The tool enables one to select a sample of interesting data points, view how their prediction values change at different querying stages, and thus better understand when and how active learning works.
- Score: 12.345164513513671
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Despite recent progress in artificial intelligence and machine learning, many
state-of-the-art methods suffer from a lack of explainability and transparency.
The ability to interpret the predictions made by machine learning models and
accurately evaluate these models is crucially important. In this paper, we
present an interactive visualization tool to elucidate the training process of
active learning. This tool enables one to select a sample of interesting data
points, view how their prediction values change at different querying stages,
and thus better understand when and how active learning works. Additionally,
users can utilize this tool to compare different active learning strategies
simultaneously and inspect why some strategies outperform others in certain
contexts. With some preliminary experiments, we demonstrate that our
visualization panel has great potential to be used in various active learning
experiments and to help users evaluate their models appropriately.
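As a concrete illustration of the records such a panel consumes, the sketch below (our own, not the authors' implementation; the dataset, model, and both querying strategies are placeholder assumptions) runs uncertainty sampling and random sampling side by side and logs each tracked point's predicted probability at every querying stage.

```python
# Minimal sketch (not the authors' implementation): run two active learning
# strategies side by side and, at every querying stage, log the predicted
# probability of each tracked point -- the trajectories such a panel plots.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
tracked = list(range(500, 600))          # "interesting" points to watch
# Seed the labeled set with five points per class so the first fit succeeds.
init = np.concatenate([np.where(y[:500] == c)[0][:5] for c in (0, 1)])

def uncertainty(model, X, idx):
    # Query the pool point whose most-likely class has the lowest probability.
    conf = model.predict_proba(X[idx]).max(axis=1)
    return idx[int(np.argmin(conf))]

def random_pick(model, X, idx):
    return idx[int(rng.integers(len(idx)))]

history = {}                             # (strategy, stage) -> class-1 probs
for name, pick in [("uncertainty", uncertainty), ("random", random_pick)]:
    labeled = set(init.tolist())
    pool = set(range(500)) - labeled
    for stage in range(20):              # 20 querying stages
        model = LogisticRegression(max_iter=1000)
        model.fit(X[sorted(labeled)], y[sorted(labeled)])
        history[(name, stage)] = model.predict_proba(X[tracked])[:, 1]
        q = pick(model, X, np.array(sorted(pool)))   # strategy picks a query
        labeled.add(int(q)); pool.discard(int(q))    # oracle labels it
```

Plotting history[("uncertainty", stage)] against history[("random", stage)] for a tracked point across stages yields exactly the per-point trajectories described above; stages where the two curves diverge are natural places to inspect why one strategy outperforms the other.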
Related papers
- VAAD: Visual Attention Analysis Dashboard applied to e-Learning [12.849976246445646]
The tool is named VAAD, an acronym for Visual Attention Analysis Dashboard.
VAAD holds the potential to offer valuable insights into online learning behaviors from both descriptive and predictive perspectives.
arXiv Detail & Related papers (2024-05-30T14:27:40Z)
- Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective [68.20531518525273]
We take a closer look at existing self-supervised speech representation methods from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
arXiv Detail & Related papers (2024-01-16T21:13:22Z)
- What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z)
- Towards Interpretability in Audio and Visual Affective Machine Learning: A Review [0.0]
We perform a structured literature review to examine the use of interpretability in the context of affective machine learning.
Our findings show an emergence of the use of interpretability methods in the last five years.
Their use is currently limited in the range of methods applied, the depth of evaluations, and the consideration of use cases.
arXiv Detail & Related papers (2023-06-15T08:16:01Z)
- Task Formulation Matters When Learning Continually: A Case Study in Visual Question Answering [58.82325933356066]
Continual learning aims to train a model incrementally on a sequence of tasks without forgetting previous knowledge.
We present a detailed study of how different settings affect performance for Visual Question Answering.
arXiv Detail & Related papers (2022-09-30T19:12:58Z)
- Self-Supervised Learning of Multi-Object Keypoints for Robotic Manipulation [8.939008609565368]
In this paper, we demonstrate the efficacy of learning image keypoints via the Dense Correspondence pretext task for downstream policy learning.
We evaluate our approach on diverse robot manipulation tasks, compare it to other visual representation learning approaches, and demonstrate its flexibility and effectiveness for sample-efficient policy learning.
arXiv Detail & Related papers (2022-05-17T13:15:07Z)
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- Visual Adversarial Imitation Learning using Variational Models [60.69745540036375]
Reward function specification remains a major impediment to learning behaviors through deep reinforcement learning.
Visual demonstrations of desired behaviors often present an easier and more natural way to teach agents.
We develop a variational model-based adversarial imitation learning algorithm.
arXiv Detail & Related papers (2021-07-16T00:15:18Z)
- Distill on the Go: Online knowledge distillation in self-supervised learning [1.1470070927586016]
Recent works have shown that wider and deeper models benefit more from self-supervised learning than smaller models.
We propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation.
Our results show significant performance gains in the presence of noisy and limited labels.
arXiv Detail & Related papers (2021-04-20T09:59:23Z)
- A Survey on Contrastive Self-supervised Learning [0.0]
Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets.
Contrastive learning has recently become a dominant component in self-supervised learning methods for computer vision, natural language processing (NLP), and other domains.
This paper provides an extensive review of self-supervised methods that follow the contrastive approach.
arXiv Detail & Related papers (2020-10-31T21:05:04Z)
- Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
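The final entry's reduction is easy to picture with a toy sketch (ours, not the paper's code): each task contributes one (dataset, target model) pair, which an ordinary regressor treats as a single (feature, label) training example. The Gaussian tasks, class-mean featurization, and ridge regressor below are illustrative assumptions.

```python
# Toy sketch (ours, not the paper's code): meta-learning reduced to supervised
# learning by treating each (task dataset, target model) pair as one
# (feature, label) example for an ordinary ridge regressor.
import numpy as np

rng = np.random.default_rng(1)

def make_task():
    # Binary task: Gaussian classes at -mu and +mu; the optimal linear
    # separator's weights (the "target model") point along 2 * mu.
    mu = rng.normal(size=4)
    X0 = rng.normal(-mu, 1.0, size=(5, 4))   # 5-shot class 0
    X1 = rng.normal(+mu, 1.0, size=(5, 4))   # 5-shot class 1
    return X0, X1, 2 * mu

# Each task's "feature" is a featurization of its small dataset (class means);
# its "label" is the target model's weight vector.
feats, labels = [], []
for _ in range(200):
    X0, X1, w = make_task()
    feats.append(np.concatenate([X0.mean(0), X1.mean(0)]))
    labels.append(w)
F, W = np.array(feats), np.array(labels)

# Plain ridge regression now *is* the meta-learner: it maps a new task's
# dataset features to the weights of that task's model.
A = np.linalg.solve(F.T @ F + 1e-3 * np.eye(F.shape[1]), F.T @ W)
X0, X1, w_true = make_task()
w_pred = np.concatenate([X0.mean(0), X1.mean(0)]) @ A
print(np.corrcoef(w_pred, w_true)[0, 1])  # typically close to 1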
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.