A Framework and Benchmark for Deep Batch Active Learning for Regression
- URL: http://arxiv.org/abs/2203.09410v4
- Date: Tue, 1 Aug 2023 13:05:32 GMT
- Title: A Framework and Benchmark for Deep Batch Active Learning for Regression
- Authors: David Holzmüller, Viktor Zaverkin, Johannes Kästner, Ingo Steinwart
- Abstract summary: We study active learning methods that adaptively select batches of unlabeled data for labeling.
We present a framework for constructing such methods out of (network-dependent) base kernels, kernel transformations, and selection methods.
Our proposed method outperforms the state-of-the-art on our benchmark, scales to large data sets, and works out-of-the-box without adjusting the network architecture or training code.
- Score: 2.093287944284448
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The acquisition of labels for supervised learning can be expensive. To
improve the sample efficiency of neural network regression, we study active
learning methods that adaptively select batches of unlabeled data for labeling.
We present a framework for constructing such methods out of (network-dependent)
base kernels, kernel transformations, and selection methods. Our framework
encompasses many existing Bayesian methods based on Gaussian process
approximations of neural networks as well as non-Bayesian methods.
Additionally, we propose to replace the commonly used last-layer features with
sketched finite-width neural tangent kernels and to combine them with a novel
clustering method. To evaluate different methods, we introduce an open-source
benchmark consisting of 15 large tabular regression data sets. Our proposed
method outperforms the state-of-the-art on our benchmark, scales to large data
sets, and works out-of-the-box without adjusting the network architecture or
training code. We provide open-source code that includes efficient
implementations of all kernels, kernel transformations, and selection methods,
and can be used for reproducing our results.
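To make the framework's structure concrete, here is a minimal sketch of the base-kernel-plus-selection pipeline. It uses last-layer features as the (network-dependent) base kernel and a greedy farthest-point rule as the selection method; the feature matrices and the selection rule are illustrative stand-ins, not the paper's proposed clustering-based selection or its released code.

```python
import numpy as np

def last_layer_kernel(phi_a, phi_b):
    """Base kernel from (stand-in) last-layer features: k(x, x') = phi(x)^T phi(x')."""
    return phi_a @ phi_b.T

def select_batch_farthest_point(phi_pool, phi_train, batch_size):
    """Greedy farthest-point selection in feature space: repeatedly pick the pool
    point farthest from all labeled/selected points. One simple distance-based
    selection rule, not the clustering-based rule proposed in the paper."""
    def sq_dist(a, b):
        # squared feature-space distances ||phi(a_i) - phi(b_j)||^2 for all pairs
        return (a ** 2).sum(1)[:, None] + (b ** 2).sum(1)[None, :] - 2.0 * (a @ b.T)

    min_d = sq_dist(phi_pool, phi_train).min(axis=1)  # distance to nearest labeled point
    chosen = []
    for _ in range(batch_size):
        i = int(np.argmax(min_d))
        chosen.append(i)
        min_d = np.minimum(min_d, sq_dist(phi_pool, phi_pool[i:i + 1])[:, 0])
    return chosen

# Toy usage: random matrices stand in for features extracted from a trained network.
rng = np.random.default_rng(0)
phi_train = rng.normal(size=(50, 16))    # features of already-labeled points
phi_pool = rng.normal(size=(1000, 16))   # features of the unlabeled pool
K = last_layer_kernel(phi_pool, phi_train)   # base kernel entries (pool x train)
batch = select_batch_farthest_point(phi_pool, phi_train, batch_size=10)
print(K.shape, batch)
```

In the actual framework, the features would come from a trained network (last-layer features or sketched finite-width NTK features) and could be passed through kernel transformations before the selection step.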
Related papers
- Bandit-Driven Batch Selection for Robust Learning under Label Noise [20.202806541218944]
We introduce a novel approach for batch selection in Stochastic Gradient Descent (SGD) training, leveraging bandit algorithms.
Our methodology focuses on optimizing the learning process in the presence of label noise, a prevalent issue in real-world datasets.
arXiv Detail & Related papers (2023-10-31T19:19:01Z)
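A rough, generic illustration of the bandit-driven idea in the entry above: a UCB1 bandit choosing among a fixed set of candidate batches, with the reward (e.g., a validation-loss decrease) supplied by the training loop. The arm definition and reward are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

class UCB1BatchSelector:
    """Toy UCB1 bandit over a fixed set of candidate mini-batches."""
    def __init__(self, n_arms):
        self.counts = np.zeros(n_arms)
        self.values = np.zeros(n_arms)
        self.t = 0

    def select(self):
        self.t += 1
        if (self.counts == 0).any():          # play every arm once first
            return int(np.argmin(self.counts))
        ucb = self.values + np.sqrt(2.0 * np.log(self.t) / self.counts)
        return int(np.argmax(ucb))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Toy usage: 8 candidate batches, rewards drawn at random in place of real feedback.
selector, rng = UCB1BatchSelector(n_arms=8), np.random.default_rng(0)
for _ in range(100):
    arm = selector.select()
    selector.update(arm, reward=rng.normal(loc=arm * 0.1))
print(selector.counts)
```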
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
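A toy sketch of block-local training in the spirit of the cascaded-forward idea above: each block has its own prediction head and loss, and gradients never flow between blocks. The architecture and losses here are illustrative, not the authors' design.

```python
import torch
import torch.nn as nn

# Each block has its own head and optimizer; there is no end-to-end backpropagation.
blocks = nn.ModuleList([nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(3)])
heads = nn.ModuleList([nn.Linear(32, 10) for _ in range(3)])
opts = [torch.optim.Adam(list(b.parameters()) + list(h.parameters()), lr=1e-3)
        for b, h in zip(blocks, heads)]
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 32)            # toy inputs
y = torch.randint(0, 10, (64,))    # toy labels

h_in = x
for block, head, opt in zip(blocks, heads, opts):
    h_out = block(h_in)
    loss = loss_fn(head(h_out), y)  # every block predicts the labels directly
    opt.zero_grad()
    loss.backward()
    opt.step()
    h_in = h_out.detach()           # stop gradients: the next block trains independently
```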
- Gradient-Matching Coresets for Rehearsal-Based Continual Learning [6.243028964381449]
The goal of continual learning (CL) is to efficiently update a machine learning model with new data without forgetting previously-learned knowledge.
Most widely-used CL methods rely on a rehearsal memory of data points to be reused while training on new data.
We devise a coreset selection method for rehearsal-based continual learning.
arXiv Detail & Related papers (2022-03-28T07:37:17Z)
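A simplified illustration of the gradient-matching idea above: greedily pick examples whose mean gradient approximates the full-data mean gradient. The per-example gradients are random stand-ins and the greedy rule is a simplification, not the paper's exact selection algorithm.

```python
import numpy as np

def greedy_gradient_matching(grads, k):
    """Toy greedy coreset: pick k examples whose (unweighted) mean gradient best
    approximates the full-data mean gradient."""
    target = grads.mean(axis=0)
    chosen, running_sum = [], np.zeros_like(target)
    for step in range(1, k + 1):
        # candidate coreset means if each remaining example were added next
        cand_means = (running_sum[None, :] + grads) / step
        errs = np.linalg.norm(cand_means - target[None, :], axis=1)
        errs[chosen] = np.inf            # never pick the same example twice
        i = int(np.argmin(errs))
        chosen.append(i)
        running_sum += grads[i]
    return chosen

# Toy usage: random vectors stand in for per-example gradients of the current task.
grads = np.random.default_rng(0).normal(size=(500, 64))
print(greedy_gradient_matching(grads, k=20))
```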
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
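A very rough sketch of the kind of objective in the entry above: a differentiable Laplace-style estimate of the log marginal likelihood that can be backpropagated into an augmentation parameter. The diagonal squared-gradient curvature proxy used here is a crude stand-in for the paper's Kronecker-factored approximation, purely for illustration.

```python
import torch

def diag_laplace_log_evidence(loss, params, prior_prec=1.0):
    """Crude, differentiable Laplace-style evidence estimate:
    log p(D) ~ -loss - 0.5 * log det(H + prior_prec * I),
    with H replaced by a diagonal squared-gradient proxy."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    log_det = sum(torch.log(g.pow(2) + prior_prec).sum() for g in grads)
    return -loss - 0.5 * log_det

# Toy usage: a linear model and a scalar "augmentation strength" treated as a
# differentiable hyperparameter that the evidence can be optimized over.
torch.manual_seed(0)
w = torch.randn(5, 1, requires_grad=True)
aug_scale = torch.tensor(0.1, requires_grad=True)
x, y = torch.randn(32, 5), torch.randn(32, 1)
x_aug = x + aug_scale * torch.randn_like(x)       # "augmented" inputs
loss = ((x_aug @ w - y) ** 2).mean()
evidence = diag_laplace_log_evidence(loss, [w])
evidence.backward()                               # gradients w.r.t. w and aug_scale
print(aug_scale.grad)
```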
- End-to-End Learning of Deep Kernel Acquisition Functions for Bayesian Optimization [39.56814839510978]
We propose a meta-learning method for Bayesian optimization with neural network-based kernels.
Our model is trained with a reinforcement learning framework across multiple tasks.
In experiments using three text document datasets, we demonstrate that the proposed method achieves better BO performance than the existing methods.
arXiv Detail & Related papers (2021-11-01T00:42:31Z)
- Gated recurrent units and temporal convolutional network for multilabel classification [122.84638446560663]
This work proposes a new ensemble method for managing multilabel classification.
The core of the proposed approach combines a set of gated recurrent units and temporal convolutional neural networks trained with variants of the Adam optimizer.
arXiv Detail & Related papers (2021-10-09T00:00:16Z)
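A minimal sketch of the kind of model combination in the entry above: one GRU branch and one temporal-convolution branch feeding a shared multilabel head. Layer sizes and the fusion scheme are assumptions, not the paper's ensemble.

```python
import torch
import torch.nn as nn

class GRUTCNClassifier(nn.Module):
    """Toy multilabel classifier with a GRU branch and a temporal-convolution branch."""
    def __init__(self, n_features, n_labels, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.tcn = nn.Sequential(
            nn.Conv1d(n_features, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(2 * hidden, n_labels)

    def forward(self, x):                      # x: (batch, time, features)
        _, h = self.gru(x)                     # h: (num_layers, batch, hidden)
        t = self.tcn(x.transpose(1, 2)).squeeze(-1)
        return self.head(torch.cat([h[-1], t], dim=1))  # multilabel logits

model = GRUTCNClassifier(n_features=16, n_labels=5)
logits = model(torch.randn(8, 30, 16))
print(logits.shape)   # torch.Size([8, 5]); train with nn.BCEWithLogitsLoss
```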
- MetaKernel: Learning Variational Random Features with Limited Labels [120.90737681252594]
Few-shot learning deals with the fundamental and challenging problem of learning from a few annotated samples, while being able to generalize well on new tasks.
We propose meta-learning kernels with random Fourier features for few-shot learning, which we call MetaKernel.
arXiv Detail & Related papers (2021-05-08T21:24:09Z)
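For reference, a generic random Fourier feature approximation of an RBF kernel, the building block behind the random features in the entry above; the meta-learned, variational feature distribution of MetaKernel is not reproduced here.

```python
import numpy as np

def random_fourier_features(X, n_features=256, lengthscale=1.0, seed=0):
    """Random Fourier features approximating an RBF kernel: k(x, x') ~ phi(x)^T phi(x')."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / lengthscale, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Toy usage: the approximate kernel matrix is an inner product of the features.
X = np.random.default_rng(1).normal(size=(100, 5))
phi = random_fourier_features(X)
K_approx = phi @ phi.T
print(K_approx.shape)
```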
- Meta-learning representations for clustering with infinite Gaussian mixture models [39.56814839510978]
We propose a meta-learning method that trains neural networks to obtain representations that improve clustering performance.
The proposed method can cluster unseen unlabeled data using knowledge meta-learned with labeled data that are different from the unlabeled data.
arXiv Detail & Related papers (2021-03-01T02:05:31Z)
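A small sketch of the clustering stage in the entry above: a Dirichlet-process Gaussian mixture (scikit-learn's BayesianGaussianMixture) applied to stand-in embeddings, which infers an effective number of clusters up to an upper bound. The meta-learned representation itself is replaced by random vectors here.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

Z = np.random.default_rng(0).normal(size=(300, 16))   # stand-in for learned representations
dpgmm = BayesianGaussianMixture(
    n_components=20,                                   # upper bound on the number of clusters
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="diag",
    random_state=0,
).fit(Z)
labels = dpgmm.predict(Z)
print(np.unique(labels))                               # effective clusters actually used
```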
- The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels Methods [0.0]
We show the importance of a data-dependent feature extraction step that is key to obtaining good performance in convolutional kernel methods.
We scale this method to the challenging ImageNet dataset, showing that such a simple approach can exceed all existing non-learned representation methods.
arXiv Detail & Related papers (2021-01-19T09:30:58Z)
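A minimal example of one common data-dependent feature extraction step of the kind referred to above: sample random image patches and ZCA-whiten them. The patch size, whitening, and data are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def extract_whitened_patches(images, patch_size=6, n_patches=5000, eps=1e-3, seed=0):
    """Sample random patches from an image batch and ZCA-whiten them."""
    rng = np.random.default_rng(seed)
    n, h, w, _ = images.shape
    idx = rng.integers(0, n, n_patches)
    ys = rng.integers(0, h - patch_size + 1, n_patches)
    xs = rng.integers(0, w - patch_size + 1, n_patches)
    patches = np.stack([images[i, y:y + patch_size, x:x + patch_size]
                        for i, y, x in zip(idx, ys, xs)]).reshape(n_patches, -1)
    patches = patches - patches.mean(axis=0)
    cov = patches.T @ patches / n_patches
    eigvals, eigvecs = np.linalg.eigh(cov)
    zca = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return patches @ zca

# Toy usage with a random "image" batch in (N, H, W, C) layout.
imgs = np.random.default_rng(1).random(size=(32, 28, 28, 3))
print(extract_whitened_patches(imgs).shape)
```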
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Ensemble Wrapper Subsampling for Deep Modulation Classification [70.91089216571035]
Subsampling of received wireless signals is important for relaxing hardware requirements as well as the computational cost of signal processing algorithms.
We propose a subsampling technique to facilitate the use of deep learning for automatic modulation classification in wireless communication systems.
arXiv Detail & Related papers (2020-05-10T06:11:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.