Visually Analyzing and Steering Zero Shot Learning
- URL: http://arxiv.org/abs/2009.05254v1
- Date: Fri, 11 Sep 2020 06:58:13 GMT
- Title: Visually Analyzing and Steering Zero Shot Learning
- Authors: Saroj Sahoo and Matthew Berger
- Abstract summary: We propose a visual analytics system to help a user analyze and steer zero-shot learning models.
Through usage scenarios, we highlight how our system can help a user improve performance in zero-shot learning.
- Score: 2.802183323381949
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a visual analytics system to help a user analyze and steer
zero-shot learning models. Zero-shot learning has emerged as a viable scenario
for categorizing data for which no labeled examples exist, and thus a
promising approach to minimizing human annotation effort. However, it is
challenging to understand where zero-shot learning fails, the cause of such
failures, and how a user can modify the model to prevent such failures. Our
visualization system is designed to help users diagnose and understand
mispredictions in such models, so that they may gain insight on the behavior of
a model when applied to data associated with categories not seen during
training. Through usage scenarios, we highlight how our system can help a user
improve performance in zero-shot learning.
Related papers
- An Information Theoretic Approach to Machine Unlearning [45.600917449314444]
A key challenge in unlearning is forgetting the requested data in a timely manner, while preserving model performance.
In this work, we address the zero-shot unlearning scenario, whereby an unlearning algorithm must be able to remove data given only a trained model and the data to be forgotten.
We derive a simple but principled zero-shot unlearning method based on the geometry of the model.
arXiv Detail & Related papers (2024-02-02T13:33:30Z) - Identifying and Mitigating Model Failures through Few-shot CLIP-aided
Diffusion Generation [65.268245109828]
We propose an end-to-end framework to generate text descriptions of failure modes associated with spurious correlations.
These descriptions can be used to generate synthetic data using generative models, such as diffusion models.
Our experiments have shown remarkable improvements in accuracy ($\sim$21%) on hard sub-populations.
arXiv Detail & Related papers (2023-12-09T04:43:49Z) - Flew Over Learning Trap: Learn Unlearnable Samples by Progressive Staged
Training [28.17601195439716]
Unlearning techniques generate unlearnable samples by adding imperceptible perturbations to data prior to public release.
We make the in-depth analysis and observe that models can learn both image features and perturbation features of unlearnable samples at an early stage.
We propose Progressive Staged Training to effectively prevent models from overfitting in learning perturbation features.
arXiv Detail & Related papers (2023-06-03T09:36:16Z) - Addressing Bias in Visualization Recommenders by Identifying Trends in
Training Data: Improving VizML Through a Statistical Analysis of the Plotly
Community Feed [55.41644538483948]
Machine learning is a promising approach to visualization recommendation due to its high scalability and representational power.
Our research project aims to address training bias in machine learning visualization recommendation systems by identifying trends in the training data through statistical analysis.
arXiv Detail & Related papers (2022-03-09T18:36:46Z) - Disrupting Model Training with Adversarial Shortcuts [12.31803688544684]
We present a proof-of-concept approach for the image classification setting.
We propose methods based on the notion of adversarial shortcuts, which encourage models to rely on non-robust signals rather than semantic features.
arXiv Detail & Related papers (2021-06-12T01:04:41Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - Adversarial Examples for Unsupervised Machine Learning Models [71.81480647638529]
Adversarial examples causing evasive predictions are widely used to evaluate and improve the robustness of machine learning models.
We propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.
arXiv Detail & Related papers (2021-03-02T17:47:58Z) - Sufficiently Accurate Model Learning for Planning [119.80502738709937]
This paper introduces the constrained Sufficiently Accurate model learning approach.
It provides examples of such problems, and presents a theorem on how close some approximate solutions can be.
The approximate solution quality will depend on the function parameterization, loss and constraint function smoothness, and the number of samples in model learning.
arXiv Detail & Related papers (2021-02-11T16:27:31Z) - Unsupervised Difficulty Estimation with Action Scores [7.6146285961466]
We present a simple method for calculating a difficulty score based on the accumulation of losses for each sample during training.
Our proposed method does not require any modification of the model nor any external supervision, as it can be implemented as a callback.
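The accumulated-loss idea can be sketched as a small training callback. This is a minimal illustration of the concept only, not the paper's implementation; the class name, method names, and normalization step are our own assumptions.

```python
import numpy as np

class DifficultyScoreCallback:
    """Accumulates per-sample training loss across epochs; samples that
    keep incurring high loss receive a high difficulty score.
    (Sketch only; names and normalization are assumptions.)"""

    def __init__(self, n_samples):
        self.scores = np.zeros(n_samples)

    def on_batch_end(self, sample_indices, per_sample_losses):
        # Called after each training batch with each sample's loss.
        self.scores[sample_indices] += per_sample_losses

    def difficulty(self):
        # Normalize accumulated losses to [0, 1] as a difficulty score.
        total = self.scores.max()
        return self.scores / total if total > 0 else self.scores

# Toy usage: two "epochs" over four samples in two batches.
cb = DifficultyScoreCallback(n_samples=4)
for _ in range(2):
    cb.on_batch_end(np.array([0, 1]), np.array([0.2, 0.8]))
    cb.on_batch_end(np.array([2, 3]), np.array([0.1, 1.6]))
print(cb.difficulty())  # sample 3 accumulates the most loss
```

Because the callback only reads losses the training loop already computes, it adds no extra forward passes and no external supervision, matching the hands-off setting the summary describes.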
arXiv Detail & Related papers (2020-11-23T15:18:44Z) - How Training Data Impacts Performance in Learning-based Control [67.7875109298865]
This paper derives an analytical relationship between the density of the training data and the control performance.
We formulate a quality measure for the data set, which we refer to as the $\rho$-gap.
We show how the $\rho$-gap can be applied to a feedback linearizing control law.
arXiv Detail & Related papers (2020-05-25T12:13:49Z) - Pattern Learning for Detecting Defect Reports and Improvement Requests
in App Reviews [4.460358746823561]
In this study, we follow novel approaches that target this absence of actionable insights by classifying reviews as defect reports and requests for improvement.
We employ a supervised system that is capable of learning lexico-semantic patterns through genetic programming.
We show that the automatically learned patterns outperform the manually created ones.
arXiv Detail & Related papers (2020-04-19T08:13:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.