Active Learning with Multiple Kernels
- URL: http://arxiv.org/abs/2005.03188v1
- Date: Thu, 7 May 2020 00:48:13 GMT
- Title: Active Learning with Multiple Kernels
- Authors: Songnam Hong and Jeongmin Chae
- Abstract summary: We introduce a new research problem, termed (stream-based) active multiple kernel learning (AMKL).
AMKL allows a learner to obtain labels for selected data from an oracle according to a selection criterion.
We propose AMKL with an adaptive kernel selection (AMKL-AKS) in which irrelevant kernels can be excluded from a kernel dictionary 'on the fly'.
- Score: 10.203602318836444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online multiple kernel learning (OMKL) has provided attractive
performance in nonlinear function learning tasks. By leveraging a random
feature approximation, recent work has alleviated the major drawback of OMKL,
known as the curse of dimensionality. In this paper, we introduce a new
research problem, termed (stream-based) active multiple kernel learning (AMKL),
in which a learner is allowed to request labels for selected data from an
oracle according to a selection criterion. This is necessary in many
real-world applications as
acquiring true labels is costly or time-consuming. We prove that AMKL achieves
an optimal sublinear regret, implying that the proposed selection criterion
indeed avoids unnecessary label requests. Furthermore, we propose AMKL with an
adaptive kernel selection (AMKL-AKS) in which irrelevant kernels can be
excluded from a kernel dictionary 'on the fly'. This approach can improve the
efficiency of active learning as well as the accuracy of a function
approximation. Numerical tests on various real datasets demonstrate that
AMKL-AKS achieves performance similar to or better than the best-known OMKL
method, using a smaller number of labeled samples.
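A minimal sketch may help fix ideas: each kernel in the dictionary is approximated by random Fourier features, the per-kernel learners are combined by multiplicative weights, and a label is requested only when a selection criterion fires. The disagreement threshold, warmup length, and update rules below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 50  # random features per kernel

def make_rf_map(d, sigma):
    """Random Fourier feature map z(x) with z(x)@z(y) ~ exp(-|x-y|^2 / (2 sigma^2))."""
    W = rng.normal(scale=1.0 / sigma, size=(D, d))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return lambda x: np.sqrt(2.0 / D) * np.cos(W @ x + b)

def amkl_sketch(X, y, bandwidths=(0.5, 1.0, 2.0), eta=0.5, tau=0.1, warmup=10):
    """Illustrative AMKL loop: per-kernel RF learners combined by
    exponentially weighted averaging; a label is requested only during a
    short warmup or when the kernel-wise predictions disagree by more
    than tau (a stand-in for the paper's selection criterion)."""
    maps = [make_rf_map(X.shape[1], s) for s in bandwidths]
    thetas = [np.zeros(D) for _ in bandwidths]           # per-kernel RF weights
    w = np.full(len(bandwidths), 1.0 / len(bandwidths))  # combination weights
    labels_used, preds = 0, []
    for x_t, y_t in zip(X, y):
        z = [m(x_t) for m in maps]
        f = np.array([th @ zi for th, zi in zip(thetas, z)])
        preds.append(float(w @ f))
        if labels_used < warmup or f.max() - f.min() > tau:
            labels_used += 1                     # query the oracle
            w *= np.exp(-eta * (f - y_t) ** 2)   # multiplicative weight update
            w /= w.sum()
            for i in range(len(thetas)):         # per-kernel SGD step
                thetas[i] -= eta * 2.0 * (f[i] - y_t) * z[i]
    return np.array(preds), labels_used

# Toy stream: nonlinear target observed through the selective-labeling loop.
X_demo = np.random.default_rng(1).normal(size=(200, 3))
preds, n_labels = amkl_sketch(X_demo, np.sin(X_demo[:, 0]))
```

Because a query is issued only under disagreement (after the warmup), `n_labels` can be much smaller than the number of rounds; the paper's sublinear-regret result formalizes when such skipping is harmless.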
Related papers
- DKL-KAN: Scalable Deep Kernel Learning using Kolmogorov-Arnold Networks [0.0]
We introduce scalable deep kernel learning using KANs (DKL-KAN) as an effective alternative to DKL using MLPs (DKL-MLP).
We analyze two variants of DKL-KAN for a fair comparison with DKL-MLP.
The efficacy of DKL-KAN is evaluated in terms of computational training time and test prediction accuracy across a wide range of applications.
arXiv Detail & Related papers (2024-07-30T20:30:44Z)
- Embedded Multi-label Feature Selection via Orthogonal Regression [45.55795914923279]
State-of-the-art embedded multi-label feature selection algorithms based on least squares regression cannot preserve sufficient discriminative information in multi-label data.
A novel embedded multi-label feature selection method, GRROOR, is proposed to address this limitation.
Extensive experimental results on ten multi-label data sets demonstrate the effectiveness of GRROOR.
arXiv Detail & Related papers (2024-03-01T06:18:40Z)
- Personalized Online Federated Learning with Multiple Kernels [26.823435733330705]
Federated learning enables a group of learners (called clients) to train an MKL model on the data distributed among clients.
The present paper develops an algorithmic framework to enable clients to communicate with the server.
We prove that using the proposed online federated MKL algorithm, each client enjoys sub-linear regret with respect to the RF approximation of its best kernel in hindsight.
arXiv Detail & Related papers (2023-11-09T02:51:37Z)
- MyriadAL: Active Few Shot Learning for Histopathology [10.652626309100889]
We introduce an active few-shot learning framework, Myriad Active Learning (MAL).
MAL includes a contrastive-learning encoder, pseudo-label generation, and novel query sample selection in the loop.
Experiments on two public histopathology datasets show that MAL has superior test accuracy, macro F1-score, and label efficiency compared to prior works.
arXiv Detail & Related papers (2023-10-24T20:08:15Z)
- Prompt-driven efficient Open-set Semi-supervised Learning [52.30303262499391]
Open-set semi-supervised learning (OSSL) has attracted growing interest, which investigates a more practical scenario where out-of-distribution (OOD) samples are only contained in unlabeled data.
We propose a prompt-driven efficient OSSL framework, called OpenPrompt, which can propagate class information from labeled to unlabeled data with only a small number of trainable parameters.
arXiv Detail & Related papers (2022-09-28T16:25:08Z)
- ALLSH: Active Learning Guided by Local Sensitivity and Hardness [98.61023158378407]
We propose to retrieve unlabeled samples with a local sensitivity and hardness-aware acquisition function.
Our method achieves consistent gains over the commonly used active learning strategies in various classification tasks.
arXiv Detail & Related papers (2022-05-10T15:39:11Z)
- A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
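The constrained formulation can be illustrated with a toy primal-dual sketch: minimize a regularizer subject to a per-labeled-sample loss bound, with one Lagrange multiplier per constraint. The objective, linear model class, and step sizes below are placeholder assumptions for illustration, not the paper's method.

```python
import numpy as np

def primal_dual_sketch(X, y, eps=0.01, lr=0.005, steps=8000):
    """Toy Lagrangian-duality loop for: min ||theta||^2 / 2 subject to
    (x_i @ theta - y_i)^2 <= eps for every labeled sample i.
    Gradient descent on theta, projected gradient ascent on lambda."""
    theta = np.zeros(X.shape[1])
    lam = np.zeros(len(X))           # one multiplier per constraint
    for _ in range(steps):
        res = X @ theta - y          # per-sample residuals
        # d/dtheta of ||theta||^2/2 + sum_i lam_i * ((res_i)^2 - eps)
        grad = theta + 2.0 * X.T @ (lam * res)
        theta -= lr * grad                                   # primal descent
        lam = np.maximum(0.0, lam + lr * (res ** 2 - eps))   # dual ascent
    return theta, lam

# Toy data: 20 labeled samples from a noiseless linear model.
rng = np.random.default_rng(2)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -1.0, 0.5])
theta, lam = primal_dual_sketch(X, y)
```

Multipliers grow only for samples whose constraint is violated, so the dual variables indicate which labeled points are hardest to fit.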
arXiv Detail & Related papers (2022-02-08T19:18:49Z)
- Compactness Score: A Fast Filter Method for Unsupervised Feature Selection [66.84571085643928]
We propose a fast unsupervised feature selection method, named as, Compactness Score (CSUFS) to select desired features.
The proposed algorithm is shown to be more accurate and efficient than existing algorithms.
arXiv Detail & Related papers (2022-01-31T13:01:37Z)
- MetaKernel: Learning Variational Random Features with Limited Labels [120.90737681252594]
Few-shot learning deals with the fundamental and challenging problem of learning from a few annotated samples, while being able to generalize well on new tasks.
We propose to meta-learn kernels with random Fourier features for few-shot learning, which we call MetaKernel.
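The random Fourier feature construction this line of work builds on (Rahimi and Recht's approximation of a shift-invariant kernel) can be sketched as follows; the bandwidth and feature count are illustrative choices:

```python
import numpy as np

def gaussian_rff(X, n_features=5000, sigma=1.0, seed=0):
    """Random Fourier features z(x) whose inner products approximate the
    Gaussian kernel k(x, y) = exp(-|x - y|^2 / (2 * sigma^2))."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(n_features, X.shape[1]))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)                # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W.T + b)

# The approximation error shrinks as O(1 / sqrt(n_features)):
X = np.random.default_rng(1).normal(size=(2, 5))
Z = gaussian_rff(X)
approx = float(Z[0] @ Z[1])
exact = float(np.exp(-np.sum((X[0] - X[1]) ** 2) / 2.0))
```

Roughly speaking, MetaKernel learns the distribution of such spectral samples across tasks instead of drawing `W` from a fixed Gaussian.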
arXiv Detail & Related papers (2021-05-08T21:24:09Z)
- Graph-Aided Online Multi-Kernel Learning [12.805267089186533]
This paper studies data-driven selection of kernels from the dictionary that provide satisfactory function approximations.
Based on the similarities among kernels, the novel framework constructs and refines a graph to assist choosing a subset of kernels.
Our proposed algorithms enjoy a tighter sub-linear regret bound than state-of-the-art graph-based online MKL alternatives.
arXiv Detail & Related papers (2021-02-09T07:43:29Z)
- ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning [52.831894583501395]
Conventional continual learning assumes that incoming data are fully labeled, which may not hold in real applications.
We propose deep Online Replay with Discriminator Consistency (ORDisCo) to interdependently learn a classifier with a conditional generative adversarial network (GAN).
We show ORDisCo achieves significant performance improvement on various semi-supervised learning benchmark datasets for SSCL.
arXiv Detail & Related papers (2021-01-02T09:04:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.