Deep Active Learning by Model Interpretability
- URL: http://arxiv.org/abs/2007.12100v4
- Date: Sun, 6 Sep 2020 06:28:52 GMT
- Title: Deep Active Learning by Model Interpretability
- Authors: Qiang Liu and Zhaocheng Liu and Xiaofang Zhu and Yeliang Xiu
- Abstract summary: In this paper, we introduce the linearly separable regions of samples to the problem of active learning.
We propose a novel Deep Active learning approach by Model Interpretability (DAMI).
To keep the maximal representativeness of the entire unlabeled data, DAMI tries to select and label samples on different linearly separable regions.
- Score: 7.3461534048332275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent successes of Deep Neural Networks (DNNs) in a variety of research
tasks rely heavily on large amounts of labeled samples, which can incur
considerable annotation cost in real-world applications. Fortunately,
active learning is a promising methodology for training high-performing models with
minimal annotation cost. In the deep learning context, the critical question of
active learning is how to precisely identify the informativeness of samples for a
DNN. In this paper, inspired by the piece-wise linear interpretability of DNNs, we
introduce the linearly separable regions of samples to the problem of active
learning, and propose a novel Deep Active learning approach by Model
Interpretability (DAMI). To keep the maximal representativeness of the entire
unlabeled data, DAMI tries to select and label samples on different linearly
separable regions introduced by the piece-wise linear interpretability in DNN.
We focus on the Multi-Layer Perceptron (MLP) for modeling tabular data.
Specifically, we use the local piece-wise interpretation in MLP as the
representation of each sample, and directly run K-Center clustering to select
and label samples. Notably, the whole process of DAMI does not require any
hyper-parameters to be tuned manually. To verify the effectiveness of our approach,
extensive experiments have been conducted on several tabular datasets. The
experimental results demonstrate that DAMI consistently outperforms several
compared state-of-the-art approaches.
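The selection pipeline described in the abstract, per-sample local piece-wise interpretations used as representations followed by K-Center selection over those representations, can be sketched roughly as follows. This is a minimal illustration assuming a ReLU MLP on tabular data; the particular form of the interpretation (the input gradient of the predicted-class logit), the greedy K-Center variant, and all function and variable names are assumptions for illustration, not the authors' reference implementation.

```python
import numpy as np
import torch
import torch.nn as nn

def local_interpretation(model: nn.Module, x: torch.Tensor) -> np.ndarray:
    """Per-sample gradient of the predicted-class logit w.r.t. the input.

    For a ReLU MLP this gradient equals the local linear weights of the
    piece-wise linear region containing x, so samples in the same linearly
    separable region share (approximately) the same representation.
    """
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    logits.max(dim=1).values.sum().backward()  # sum of each sample's top logit
    return x.grad.detach().cpu().numpy()

def k_center_greedy(reps: np.ndarray, budget: int, seed: int = 0) -> list:
    """Greedy K-Center: repeatedly pick the point farthest from the current
    centers, so the queried samples cover distinct regions of the pool."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(reps.shape[0]))]
    dists = np.linalg.norm(reps - reps[centers[0]], axis=1)
    for _ in range(budget - 1):
        idx = int(dists.argmax())
        centers.append(idx)
        dists = np.minimum(dists, np.linalg.norm(reps - reps[idx], axis=1))
    return centers

# Hypothetical usage with an unlabeled pool of tabular features:
#   reps = local_interpretation(mlp, torch.tensor(pool_features, dtype=torch.float32))
#   query_indices = k_center_greedy(reps, budget=100)
```

Because the farthest-point objective needs nothing beyond the labeling budget, this style of selection is consistent with the abstract's claim that no hyper-parameters have to be tuned manually.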
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- Supervised Gradual Machine Learning for Aspect Category Detection [0.9857683394266679]
Aspect Category Detection (ACD) aims to identify implicit and explicit aspects in a given review sentence.
We propose a novel approach to tackle the ACD task by combining Deep Neural Networks (DNNs) with Gradual Machine Learning (GML) in a supervised setting.
arXiv Detail & Related papers (2024-04-08T07:21:46Z)
- Querying Easily Flip-flopped Samples for Deep Active Learning [63.62397322172216]
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective selection strategy is to base it on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the least disagree metric (LDM), defined as the smallest probability of disagreement of the predicted label.
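As a generic illustration of the uncertainty-based selection strategy mentioned above (not the LDM itself), the sketch below scores each unlabeled sample by predictive entropy and queries the most uncertain ones; the names and the scoring rule are assumptions for illustration.

```python
import numpy as np

def entropy_query(probs: np.ndarray, budget: int) -> np.ndarray:
    """probs: (n_samples, n_classes) predicted class probabilities."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)  # predictive entropy
    return np.argsort(-entropy)[:budget]  # indices of the most uncertain samples
```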
arXiv Detail & Related papers (2024-01-18T08:12:23Z)
- Optimal Sample Selection Through Uncertainty Estimation and Its Application in Deep Learning [22.410220040736235]
We present a theoretically optimal solution for addressing both coreset selection and active learning.
Our proposed method, COPS, is designed to minimize the expected loss of a model trained on subsampled data.
arXiv Detail & Related papers (2023-09-05T14:06:33Z)
- Temporal Output Discrepancy for Loss Estimation-based Active Learning [65.93767110342502]
We present a novel deep active learning approach that queries the oracle for data annotation when the unlabeled sample is believed to incorporate high loss.
Our approach achieves superior performance to state-of-the-art active learning methods on image classification and semantic segmentation tasks.
arXiv Detail & Related papers (2022-12-20T19:29:37Z)
- Active Learning for Deep Visual Tracking [51.5063680734122]
Convolutional neural networks (CNNs) have been successfully applied to the single target tracking task in recent years.
In this paper, we propose an active learning method for deep visual tracking, which selects and annotates unlabeled samples to train the deep CNN model.
Under the guidance of active learning, the tracker based on the trained deep CNN model can achieve competitive tracking performance while reducing the labeling cost.
arXiv Detail & Related papers (2021-10-17T11:47:56Z)
- Deep Active Learning for Text Classification with Diverse Interpretations [20.202134075256094]
We propose a novel Active Learning with DivErse iNterpretations (ALDEN) approach.
With local interpretations in Deep Neural Networks (DNNs), ALDEN identifies linearly separable regions of samples.
To tackle the text classification problem, we choose the word with the most diverse interpretations to represent the whole sentence.
arXiv Detail & Related papers (2021-08-15T10:42:07Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- On Deep Unsupervised Active Learning [41.579343330613675]
Unsupervised active learning aims to select representative samples in an unsupervised setting for human annotating.
In this paper, we present a novel Deep neural network framework for Unsupervised Active Learning.
arXiv Detail & Related papers (2020-07-28T02:52:21Z)
- Causality-aware counterfactual confounding adjustment for feature representations learned by deep models [14.554818659491644]
Causal modeling has been recognized as a potential solution to many challenging problems in machine learning (ML).
We describe how a recently proposed counterfactual approach can still be used to deconfound the feature representations learned by deep neural network (DNN) models.
arXiv Detail & Related papers (2020-04-20T17:37:36Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, a truncated max-product Belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.