Practical Active Learning with Model Selection for Small Data
- URL: http://arxiv.org/abs/2112.11572v1
- Date: Tue, 21 Dec 2021 23:11:27 GMT
- Title: Practical Active Learning with Model Selection for Small Data
- Authors: Maryam Pardakhti, Nila Mandal, Anson W. K. Ma and Qian Yang
- Abstract summary: We develop a simple and fast method for practical active learning with model selection.
Our method is based on an underlying pool-based active learner for binary classification using support vector classification with a radial basis function kernel.
- Score: 13.128648437690224
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning is of great interest for many practical applications,
especially in industry and the physical sciences, where there is a strong need
to minimize the number of costly experiments necessary to train predictive
models. However, there remain significant challenges for the adoption of active
learning methods in many practical applications. One important challenge is
that many methods assume a fixed model, where model hyperparameters are chosen
a priori. In practice, it is rarely true that a good model will be known in
advance. Existing methods for active learning with model selection typically
depend on a medium-sized labeling budget. In this work, we focus on the case of
having a very small labeling budget, on the order of a few dozen data points,
and develop a simple and fast method for practical active learning with model
selection. Our method is based on an underlying pool-based active learner for
binary classification using support vector classification with a radial basis
function kernel. First we show empirically that our method is able to find
hyperparameters that lead to the best performance compared to an oracle model
on less separable, difficult to classify datasets, and reasonable performance
on datasets that are more separable and easier to classify. Then, we
demonstrate that it is possible to refine our model selection method using a
weighted approach to trade-off between achieving optimal performance on
datasets that are easy to classify, versus datasets that are difficult to
classify, which can be tuned based on prior domain knowledge about the dataset.
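To make the setup concrete, here is a minimal sketch of pool-based active learning for binary classification with an RBF-kernel SVC, re-running hyperparameter selection over a small (C, gamma) grid at every query round. The margin-based query rule, the grid, and the synthetic data are illustrative assumptions, and the paper's weighted refinement for trading off easy versus difficult datasets is not shown.

```python
# A minimal sketch of pool-based active learning with per-round model
# selection for an RBF-kernel SVC. Illustrative assumptions throughout:
# the (C, gamma) grid, the margin-based query rule, and the synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Seed with two labeled points per class; the rest is the unlabeled pool.
labeled = list(np.where(y == 0)[0][:2]) + list(np.where(y == 1)[0][:2])
pool = [i for i in range(len(X)) if i not in labeled]

param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1, 1]}

for _ in range(20):  # "a few dozen" labels, as in the abstract
    # Model selection on the labels gathered so far.
    cv = max(2, min(3, int(np.bincount(y[labeled]).min())))
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=cv)
    search.fit(X[labeled], y[labeled])
    model = search.best_estimator_

    # Uncertainty sampling: query the pool point nearest the boundary.
    margins = np.abs(model.decision_function(X[pool]))
    labeled.append(pool.pop(int(np.argmin(margins))))  # oracle labels it

print("final hyperparameters:", search.best_params_)
```

Cross-validating on only a handful of labels is noisy, which is precisely the small-budget regime the abstract describes.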
Related papers
- An information-matching approach to optimal experimental design and active learning [0.9362620873652918]
We introduce an information-matching criterion based on the Fisher Information Matrix to select the most informative training data from a candidate pool.
We demonstrate the effectiveness of this approach across various modeling problems in diverse scientific fields, including power systems and underwater acoustics.
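A hedged illustration of Fisher-information-driven selection: the snippet below greedily picks candidates to maximize the log-determinant of the accumulated Fisher Information Matrix for a linear-Gaussian model (classic D-optimal design). The paper's information-matching criterion is more specific; everything here is a generic stand-in.

```python
# Greedy D-optimal design: each candidate row x contributes x x^T to the
# Fisher Information Matrix, and we pick the point that most increases
# log det(FIM). A stand-in, not the paper's information-matching criterion.
import numpy as np

rng = np.random.default_rng(0)
candidates = rng.normal(size=(200, 5))  # candidate pool of feature vectors

def greedy_d_optimal(X, k, ridge=1e-6):
    fim = ridge * np.eye(X.shape[1])  # small ridge keeps the det finite
    chosen = []
    for _ in range(k):
        scores = [np.linalg.slogdet(fim + np.outer(x, x))[1]
                  if i not in chosen else -np.inf
                  for i, x in enumerate(X)]
        best = int(np.argmax(scores))
        chosen.append(best)
        fim += np.outer(X[best], X[best])
    return chosen, fim

chosen, _ = greedy_d_optimal(candidates, k=10)
print("selected indices:", chosen)
```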
arXiv Detail & Related papers (2024-11-05T02:16:23Z)
- LESS: Selecting Influential Data for Targeted Instruction Tuning [64.78894228923619]
We propose LESS, an efficient algorithm to estimate data influences and perform Low-rank gradiEnt Similarity Search for instruction data selection.
We show that training on a LESS-selected 5% of the data can often outperform training on the full dataset across diverse downstream tasks.
Our method goes beyond surface form cues to identify data that exemplifies the reasoning skills needed for the intended downstream application.
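A minimal sketch of the low-rank gradient similarity idea: per-example gradients are assumed precomputed (random here), projected to a small dimension, and candidates are ranked by cosine similarity to target-task gradients. The projection size and scoring rule are assumptions for illustration, not LESS's exact procedure.

```python
# Low-rank gradient similarity search, in the spirit of LESS: random-project
# per-example gradients, then rank training candidates by cosine similarity
# to the target task's gradients. Gradient extraction from a real LM is
# omitted; random arrays stand in.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_target, grad_dim, proj_dim = 1000, 20, 4096, 64

train_grads = rng.normal(size=(n_train, grad_dim))    # per-example gradients
target_grads = rng.normal(size=(n_target, grad_dim))  # target-task gradients

# Random projection approximately preserves inner products (JL lemma).
proj = rng.normal(size=(grad_dim, proj_dim)) / np.sqrt(proj_dim)
tr, tg = train_grads @ proj, target_grads @ proj

def normalize(a):
    return a / np.linalg.norm(a, axis=1, keepdims=True)

# Score each candidate by its best cosine similarity to any target example,
# then keep the top 5% for instruction tuning.
scores = (normalize(tr) @ normalize(tg).T).max(axis=1)
top = np.argsort(scores)[::-1][: n_train // 20]
print("selected", len(top), "examples; top score:", scores[top[0]].round(3))
```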
arXiv Detail & Related papers (2024-02-06T19:18:04Z)
- Active Learning with Combinatorial Coverage [0.0]
Active learning is a practical field of machine learning that automates the process of selecting which data to label.
Current methods are effective in reducing the burden of data labeling but are heavily model-reliant.
As a result, the data they sample often fails to transfer to new models, and the sampling itself can be biased.
We propose active learning methods utilizing coverage to overcome these issues.
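The toy below illustrates coverage-driven selection: features are discretized, 2-way feature-value combinations serve as coverage targets, and points are picked greedily to cover the most not-yet-covered combinations. The binning and interaction strength are illustrative choices, not the paper's definitions.

```python
# A toy greedy set-cover over 2-way feature-value combinations. Once every
# combination is covered, remaining picks fall back to index order.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
binned = (X > np.median(X, axis=0)).astype(int)  # 2 bins per feature

def combos(row):
    # All 2-way (feature, value) interactions present in one sample.
    return {((i, row[i]), (j, row[j]))
            for i, j in combinations(range(len(row)), 2)}

covered, selected = set(), []
for _ in range(10):  # label budget
    gains = [len(combos(binned[i]) - covered) if i not in selected else -1
             for i in range(len(binned))]
    best = int(np.argmax(gains))
    selected.append(best)
    covered |= combos(binned[best])

print("selected:", selected, "| combinations covered:", len(covered))
```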
arXiv Detail & Related papers (2023-02-28T13:43:23Z)
- MILO: Model-Agnostic Subset Selection Framework for Efficient Model Training and Tuning [68.12870241637636]
We propose MILO, a model-agnostic subset selection framework that decouples the subset selection from model training.
Our empirical results indicate that MILO can train models $3\times$ - $10\times$ faster and tune hyperparameters $20\times$ - $75\times$ faster than full-dataset training or tuning, without compromising performance.
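A sketch of decoupling selection from training: a coreset is chosen by greedy facility-location maximization over an RBF similarity matrix, and then an arbitrary model is trained on it. MILO's actual samplers and easy-to-hard curriculum are richer than this; only the model-agnostic decoupling is shown.

```python
# Model-agnostic coreset selection via greedy facility location, then
# training any downstream model on the selected subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
sim = rbf_kernel(X)  # pairwise similarities, independent of any model

def facility_location(sim, k):
    n = sim.shape[0]
    selected, best_cover = [], np.full(n, -np.inf)
    for _ in range(k):
        # f(S ∪ {j}) = sum_i max(sim[j, i], current best coverage of i)
        gains = np.maximum(sim, best_cover[None, :]).sum(axis=1)
        gains[selected] = -np.inf
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[j])
    return selected

subset = facility_location(sim, k=40)
model = LogisticRegression(max_iter=1000).fit(X[subset], y[subset])
print("subset-trained accuracy on full data:", model.score(X, y).round(3))
```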
arXiv Detail & Related papers (2023-01-30T20:59:30Z)
- Frugal Reinforcement-based Active Learning [12.18340575383456]
We propose a novel active learning approach for label-efficient training.
The proposed method is iterative and aims at minimizing a constrained objective function that mixes diversity, representativity and uncertainty criteria.
We also introduce a novel weighting mechanism based on reinforcement learning, which adaptively balances these criteria at each training iteration.
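A toy version of the adaptive weighting idea: three per-point criterion scores are combined with weights that are updated multiplicatively from a reward signal after each query. The scores and reward are simulated, and the multiplicative-weights rule is only a stand-in for the paper's reinforcement-learning formulation.

```python
# Adaptive balancing of uncertainty/diversity/representativity criteria via
# a multiplicative-weights update driven by a (simulated) reward.
import numpy as np

rng = np.random.default_rng(0)
n_pool = 200
# Stand-in per-point criterion scores in [0, 1]; in practice these come from
# the current model (uncertainty) and the data distribution (the other two).
scores = {name: rng.uniform(size=n_pool)
          for name in ("uncertainty", "diversity", "representativity")}

weights = np.ones(3)
labeled = []
for step in range(15):
    w = weights / weights.sum()
    combined = sum(wi * scores[name] for wi, name in zip(w, scores))
    combined[labeled] = -np.inf
    query = int(np.argmax(combined))
    labeled.append(query)

    # Reward: validation-accuracy gain after retraining (simulated here).
    reward = rng.uniform(-0.01, 0.05)
    # Credit each criterion in proportion to its vote for the chosen point.
    for k, name in enumerate(scores):
        weights[k] *= np.exp(reward * scores[name][query])

print("final criterion weights:", (weights / weights.sum()).round(3))
```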
arXiv Detail & Related papers (2022-12-09T14:17:45Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
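A compact sketch of column-wise iterative imputation with automatic model selection: each incomplete column is regressed on the others, and the learner for that column is picked by cross-validation among a couple of candidates. The candidate set and sweep count are assumptions; this mirrors the idea, not HyperImpute's implementation.

```python
# Iterative imputation where each column's model is chosen automatically
# by cross-validation from a small candidate pool.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
mask = rng.uniform(size=X.shape) < 0.15  # 15% missing completely at random
X_miss = np.where(mask, np.nan, X)

filled = np.where(mask, np.nanmean(X_miss, axis=0), X_miss)  # mean init
candidates = [Ridge(), RandomForestRegressor(n_estimators=50, random_state=0)]

for _ in range(3):  # a few imputation sweeps
    for col in range(X.shape[1]):
        rows = ~mask[:, col]  # rows where this column is observed
        if rows.all():
            continue
        others = np.delete(filled, col, axis=1)
        # Automatic model selection for this column.
        best = max(candidates, key=lambda m: cross_val_score(
            m, others[rows], X_miss[rows, col], cv=3).mean())
        best.fit(others[rows], X_miss[rows, col])
        filled[~rows, col] = best.predict(others[~rows])

print("imputation RMSE:", np.sqrt(((filled - X)[mask] ** 2).mean()).round(3))
```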
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
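To make the constrained formulation concrete, the toy below trains a logistic model under per-sample constraints loss_i(theta) <= eps via primal-dual (Lagrangian) updates. The data, step sizes, and constraint level are illustrative, and the paper's acquisition rule built on the dual variables is not reproduced.

```python
# Primal-dual updates for L(theta, lam) = mean loss + sum_i lam_i (loss_i - eps),
# i.e., logistic regression with a loss bound enforced on every labeled sample.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

def losses(theta):
    p = 1 / (1 + np.exp(-(X @ theta)))
    return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

theta, lam, eps = np.zeros(3), np.zeros(len(X)), 0.3
for _ in range(500):
    # Primal step: gradient of the Lagrangian w.r.t. theta.
    p = 1 / (1 + np.exp(-(X @ theta)))
    grad_per_sample = (p - y)[:, None] * X  # d loss_i / d theta
    grad = (grad_per_sample.mean(axis=0)
            + (lam[:, None] * grad_per_sample).sum(axis=0))
    theta -= 0.1 * grad
    # Dual ascent on loss_i <= eps, projected onto lam >= 0.
    lam = np.maximum(0.0, lam + 0.01 * (losses(theta) - eps))

print("constraints violated:", int((losses(theta) > eps).sum()),
      "| max dual variable:", lam.max().round(3))
```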
arXiv Detail & Related papers (2022-02-08T19:18:49Z)
- Towards General and Efficient Active Learning [20.888364610175987]
Active learning aims to select the most informative samples to exploit limited annotation budgets.
We propose a novel general and efficient active learning (GEAL) method in this paper.
Our method can conduct data selection processes on different datasets with a single-pass inference of the same model.
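A minimal sketch of the single-pass flavor: features from one inference pass of a fixed model (random arrays here) are clustered, and the sample nearest each centroid is queried. GEAL's knowledge clusters and distillation are not modeled; only the one-pass, retraining-free selection is shown.

```python
# Single-pass selection: cluster fixed-model features once, then query the
# sample closest to each centroid. No model retraining in the loop.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))  # one-pass features, fixed model

k = 25  # annotation budget
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
# Distance of every sample to its own cluster's centroid.
dists = np.linalg.norm(features - km.cluster_centers_[km.labels_], axis=1)
selected = [int(np.where(km.labels_ == c)[0][
    np.argmin(dists[km.labels_ == c])]) for c in range(k)]
print("queried indices:", selected[:5], "...")
```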
arXiv Detail & Related papers (2021-12-15T08:35:28Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
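A toy rendering of the active-testing estimate: a surrogate's predictive probabilities stand in for the BNN posterior, and the model-under-test's accuracy on unlabeled data is estimated as the surrogate's expected agreement with its predictions. The bootstrap-forest surrogate is an assumption, not the paper's BNN.

```python
# Estimate a model-under-test's accuracy on unlabeled data from a
# surrogate's predictive probabilities (a forest stands in for a BNN).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=8, random_state=0)
X_lab, y_lab, X_unlab, y_unlab = X[:100], y[:100], X[100:], y[100:]

mut = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)  # model under test
surrogate = RandomForestClassifier(n_estimators=200, random_state=0)
surrogate.fit(X_lab, y_lab)  # approximate posterior over labels

# Expected accuracy: probability the surrogate assigns to the MUT's prediction.
p = surrogate.predict_proba(X_unlab)
pred = mut.predict(X_unlab)
est_acc = p[np.arange(len(pred)), pred].mean()
print("estimated accuracy:", est_acc.round(3),
      "| true accuracy:", (pred == y_unlab).mean().round(3))
```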
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Diverse Complexity Measures for Dataset Curation in Self-driving [80.55417232642124]
We propose a new data selection method that exploits a diverse set of criteria that quantify the interestingness of traffic scenes.
Our experiments show that the proposed curation pipeline is able to select datasets that lead to better generalization and higher performance.
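A generic sketch of curation by combined complexity criteria: several per-scene scores (placeholders here) are z-normalized, summed, and the highest-scoring scenes are kept. The criterion names and score distributions are hypothetical; the paper's measures are domain-specific.

```python
# Rank scenes by a combination of normalized "interestingness" criteria and
# keep the top ones. All criteria below are placeholder stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_scenes = 500
criteria = {  # hypothetical per-scene complexity scores
    "num_actors": rng.poisson(8, n_scenes).astype(float),
    "trajectory_entropy": rng.gamma(2.0, 1.0, n_scenes),
    "map_complexity": rng.uniform(size=n_scenes),
}

def zscore(a):
    return (a - a.mean()) / a.std()

combined = sum(zscore(v) for v in criteria.values())
curated = np.argsort(combined)[::-1][:50]  # keep the 50 most complex scenes
print("curated scene ids:", curated[:8])
```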
arXiv Detail & Related papers (2021-01-16T23:45:02Z)
- Monotonic Cardinality Estimation of Similarity Selection: A Deep Learning Approach [22.958342743597044]
We investigate the possibilities of utilizing deep learning for cardinality estimation of similarity selection.
We propose a novel and generic method that can be applied to any data type and distance function.
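The monotonicity requirement is easy to state in code: for a fixed query object, the estimated cardinality must be non-decreasing in the distance threshold. The snippet below repairs a noisy estimator with isotonic regression as an illustration; the paper instead builds the constraint into the network architecture itself.

```python
# Enforcing monotone-in-threshold cardinality estimates. A noisy estimator
# stands in for a learned model; isotonic regression restores monotonicity.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
thresholds = np.linspace(0.1, 2.0, 20)
true_card = np.cumsum(rng.integers(0, 40, size=20))  # ground-truth counts
raw_est = true_card + rng.normal(0, 15, size=20)     # non-monotone estimates

mono_est = IsotonicRegression(increasing=True).fit_transform(thresholds, raw_est)
assert np.all(np.diff(mono_est) >= 0)  # estimates now respect monotonicity
print("max abs error:", np.abs(mono_est - true_card).max().round(1))
```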
arXiv Detail & Related papers (2020-02-15T20:22:51Z)