Weighted Ensembles for Active Learning with Adaptivity
- URL: http://arxiv.org/abs/2206.05009v1
- Date: Fri, 10 Jun 2022 11:48:49 GMT
- Title: Weighted Ensembles for Active Learning with Adaptivity
- Authors: Konstantinos D. Polyzos, Qin Lu, Georgios B. Giannakis
- Abstract summary: This paper presents an ensemble of GP models with weights adapted to the labeled data collected incrementally.
Building on this novel EGP model, a suite of acquisition functions emerges based on the uncertainty and disagreement rules.
An adaptively weighted ensemble of EGP-based acquisition functions is also introduced to further robustify performance.
- Score: 60.84896785303314
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Labeled data can be expensive to acquire in several application domains,
including medical imaging, robotics, and computer vision. To efficiently train
machine learning models under such high labeling costs, active learning (AL)
judiciously selects the most informative data instances to label on the fly.
This active sampling process can benefit from a statistical function model,
that is typically captured by a Gaussian process (GP). While most GP-based AL
approaches rely on a single kernel function, the present contribution advocates
an ensemble of GP models with weights adapted to the labeled data collected
incrementally. Building on this novel EGP model, a suite of acquisition
functions emerges based on the uncertainty and disagreement rules. An
adaptively weighted ensemble of EGP-based acquisition functions is also
introduced to further robustify performance. Extensive tests on synthetic and
real datasets showcase the merits of the proposed EGP-based approaches with
respect to the single GP-based AL alternatives.
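The EGP idea in the abstract can be sketched in a few lines: an ensemble of GPs with different kernels, weights adapted to the incrementally labeled data, and an acquisition rule computed from the weighted ensemble. This is a minimal numpy sketch, not the paper's exact algorithm: the RBF-lengthscale dictionary, the softmax-of-marginal-likelihood weighting, and the max-variance (uncertainty-rule) acquisition are illustrative assumptions.

```python
import numpy as np

def rbf(X1, X2, ls):
    # squared-exponential kernel with unit amplitude and lengthscale ls
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

class GPExpert:
    """One GP in the ensemble, with its own kernel lengthscale."""
    def __init__(self, ls, noise=1e-2):
        self.ls, self.noise = ls, noise

    def fit(self, X, y):
        K = rbf(X, X, self.ls) + self.noise * np.eye(len(X))
        self.X = X
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, y))
        # log marginal likelihood of the labeled data: drives the ensemble weight
        self.lml = (-0.5 * y @ self.alpha
                    - np.log(np.diag(self.L)).sum()
                    - 0.5 * len(y) * np.log(2 * np.pi))
        return self

    def predict(self, Xs):
        Ks = rbf(Xs, self.X, self.ls)
        mu = Ks @ self.alpha
        v = np.linalg.solve(self.L, Ks.T)
        var = 1.0 - (v ** 2).sum(axis=0) + self.noise
        return mu, var

def egp_acquire(experts, X_pool):
    """Pick the pool point the weighted ensemble is most uncertain about."""
    lml = np.array([e.lml for e in experts])
    w = np.exp(lml - lml.max())
    w /= w.sum()                      # weights adapted to the labeled data
    preds = [e.predict(X_pool) for e in experts]
    mu = sum(wi * m for wi, (m, _) in zip(w, preds))
    # mixture variance = within-expert uncertainty + between-expert disagreement
    var = sum(wi * (v + m ** 2) for wi, (m, v) in zip(w, preds)) - mu ** 2
    return int(np.argmax(var))

# Usage: fit the ensemble on the labeled set, then query the next label.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (8, 1)); y = np.sin(3 * X[:, 0])
experts = [GPExpert(ls).fit(X, y) for ls in (0.1, 0.5, 1.0)]
X_pool = rng.uniform(-1, 1, (100, 1))
idx = egp_acquire(experts, X_pool)   # index of the next point to label
```

Note that the mixture-variance acquisition already folds in both rules the abstract mentions: the per-expert variance term is the uncertainty rule, and the spread of per-expert means is the disagreement rule.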
Related papers
- GP+: A Python Library for Kernel-based learning via Gaussian Processes [0.0]
We introduce GP+, an open-source library for kernel-based learning via Gaussian processes (GPs)
GP+ is built on PyTorch and provides a user-friendly and object-oriented tool for probabilistic learning and inference.
arXiv Detail & Related papers (2023-12-12T19:39:40Z)
- Beyond Intuition, a Framework for Applying GPs to Real-World Data [21.504659500727985]
We propose a framework to identify the suitability of GPs to a given problem and how to set up a robust and well-specified GP model.
We apply the framework to a case study of glacier elevation change yielding more accurate results at test time.
arXiv Detail & Related papers (2023-07-06T16:08:47Z)
- Interactive Segmentation as Gaussian Process Classification [58.44673380545409]
Click-based interactive segmentation (IS) aims to extract the target objects under user interaction.
Most of the current deep learning (DL)-based methods mainly follow the general pipelines of semantic segmentation.
We propose to formulate the IS task as a Gaussian process (GP)-based pixel-wise binary classification model on each image.
arXiv Detail & Related papers (2023-02-28T14:01:01Z)
- Active Learning of Piecewise Gaussian Process Surrogates [2.5399204134718096]
We develop a method for active learning of piecewise, Jump GP surrogates.
Jump GPs are continuous within, but discontinuous across, regions of a design space.
We develop an estimator for bias and variance of Jump GP models.
arXiv Detail & Related papers (2023-01-20T20:25:50Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
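The random feature-based kernel approximation invoked above for scalable function sampling is, generically, the random Fourier features construction for a shift-invariant kernel. The sketch below shows the standard RBF-kernel version with illustrative hyperparameters; it is not the paper's implementation.

```python
import numpy as np

def random_fourier_features(X, n_feat=2000, ls=1.0, seed=0):
    """Map X so that phi(x) @ phi(x') approximates the RBF kernel
    exp(-||x - x'||^2 / (2 * ls^2))."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / ls, (X.shape[1], n_feat))  # spectral samples
    b = rng.uniform(0.0, 2 * np.pi, n_feat)              # random phases
    return np.sqrt(2.0 / n_feat) * np.cos(X @ W + b)

# With phi(x) in hand, each GP expert reduces to Bayesian linear regression
# over n_feat weights, so drawing posterior function samples (as Thompson
# sampling requires) has a fixed cost instead of growing with the data size.
```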
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging the random feature-based approximation to perform online prediction and model update with scalability, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
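The data-adaptive weighting at the EGP meta-learner can be illustrated with a generic online Bayesian model-averaging step: each expert's weight is multiplied by the Gaussian predictive likelihood it assigned to the newly arrived label, then renormalized. This is an illustrative stand-in, not necessarily the paper's exact update rule.

```python
import numpy as np

def update_log_weights(log_w, mu, var, y_new):
    """One online step: re-weight experts by the predictive likelihood
    each assigned to the new label y_new (log space for stability)."""
    log_lik = -0.5 * (np.log(2 * np.pi * var) + (y_new - mu) ** 2 / var)
    log_w = log_w + log_lik
    return log_w - np.logaddexp.reduce(log_w)  # renormalize

# Example: two experts predict mu = [0, 5] with unit variance; observing
# y = 0 shifts almost all weight onto the first expert.
log_w = update_log_weights(np.log([0.5, 0.5]),
                           np.array([0.0, 5.0]),
                           np.array([1.0, 1.0]), 0.0)
```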
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We create new state-of-the-art results on both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- Modulating Scalable Gaussian Processes for Expressive Statistical Learning [25.356503463916816]
A Gaussian process (GP) learns the statistical relationship between inputs and outputs, offering not only the predictive mean but also the associated variability.
This article studies new scalable GP paradigms, including the non-stationary heteroscedastic GP, the mixture of GPs, and the latent GP, which introduce additional latent variables to modulate the outputs or inputs in order to learn richer, non-Gaussian statistical representations.
arXiv Detail & Related papers (2020-08-29T06:41:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.