An Information-Theoretic Framework for Unifying Active Learning Problems
- URL: http://arxiv.org/abs/2012.10695v1
- Date: Sat, 19 Dec 2020 14:22:48 GMT
- Title: An Information-Theoretic Framework for Unifying Active Learning Problems
- Authors: Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet
- Abstract summary: This paper presents an information-theoretic framework for unifying active learning problems.
We first introduce a novel active learning criterion that subsumes an existing LSE algorithm.
By exploiting the relationship between LSE and BO, we design a competitive information-theoretic acquisition function for BO.
- Score: 44.758281991246825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents an information-theoretic framework for unifying active
learning problems: level set estimation (LSE), Bayesian optimization (BO), and
their generalized variant. We first introduce a novel active learning criterion
that subsumes an existing LSE algorithm and achieves state-of-the-art
performance in LSE problems with a continuous input domain. Then, by exploiting
the relationship between LSE and BO, we design a competitive
information-theoretic acquisition function for BO that has interesting
connections to upper confidence bound and max-value entropy search (MES). The
latter connection reveals a drawback of MES that has important implications not
only for MES but also for other MES-based acquisition functions. Finally, our
unifying information-theoretic framework can be applied to solve a generalized
problem of LSE and BO involving multiple level sets in a data-efficient manner.
We empirically evaluate the performance of our proposed algorithms using
synthetic benchmark functions, a real-world dataset, and in hyperparameter
tuning of machine learning models.
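The abstract contrasts a BO acquisition function (connected to upper confidence bound and MES) with an LSE criterion. A minimal sketch of that contrast, using the classic GP-UCB and straddle heuristics rather than the paper's actual criteria; the posterior values `mu`/`sigma` are toy numbers and the function names are illustrative, not the authors' implementation:

```python
import numpy as np

def ucb(mu, sigma, beta=2.0):
    # GP-UCB acquisition for BO: favor points with a high posterior mean
    # or high posterior uncertainty.
    return mu + beta * sigma

def straddle(mu, sigma, level, beta=1.96):
    # Classic straddle heuristic for level set estimation: favor points
    # whose confidence interval straddles the target level.
    return beta * sigma - np.abs(mu - level)

# Toy GP posterior over 5 candidate inputs (illustrative numbers only).
mu = np.array([0.1, 0.8, 0.5, 0.2, 0.9])
sigma = np.array([0.2, 0.3, 0.35, 0.25, 0.3])

# BO queries where the objective could be largest; LSE queries where the
# function is most ambiguous relative to the threshold level.
x_bo = int(np.argmax(ucb(mu, sigma)))
x_lse = int(np.argmax(straddle(mu, sigma, level=0.5)))
```

Note how the two criteria pick different points from the same posterior: BO favors the high-mean region, while LSE favors inputs whose value is uncertain relative to the level, which is the kind of relationship the paper's unified framework formalizes.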
Related papers
- LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning [56.273799410256075]
The framework combines Monte Carlo Tree Search (MCTS) with iterative Self-Refine to optimize the reasoning path.
The framework has been tested on general and advanced benchmarks, showing superior performance in terms of search efficiency and problem-solving capability.
arXiv Detail & Related papers (2024-10-03T18:12:29Z)
- Learning Objective-Specific Active Learning Strategies with Attentive Neural Processes [72.75421975804132]
Learning Active Learning (LAL) proposes to learn the active learning strategy itself, allowing it to adapt to the given setting.
We propose a novel LAL method for classification that exploits symmetry and independence properties of the active learning problem.
Our approach is based on learning from a myopic oracle, which gives our model the ability to adapt to non-standard objectives.
arXiv Detail & Related papers (2023-09-11T14:16:37Z)
- Batch Active Learning from the Perspective of Sparse Approximation [12.51958241746014]
Active learning enables efficient model training by leveraging interactions between machine learning agents and human annotators.
We study and propose a novel framework that formulates batch active learning from the perspective of sparse approximation.
Our active learning method aims to find an informative subset from the unlabeled data pool such that the corresponding training loss function approximates its full data pool counterpart.
arXiv Detail & Related papers (2022-11-01T03:20:28Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Look-Ahead Acquisition Functions for Bernoulli Level Set Estimation [9.764638397706717]
We derive analytic expressions for look-ahead posteriors of sublevel set membership.
We show how these lead to analytic expressions for a class of look-ahead LSE acquisition functions.
arXiv Detail & Related papers (2022-03-18T05:25:35Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- High Dimensional Level Set Estimation with Bayesian Neural Network [58.684954492439424]
This paper proposes novel methods to solve high-dimensional Level Set Estimation problems using Bayesian Neural Networks.
For each problem, we derive the corresponding information-theoretic acquisition function to sample the data points.
Numerical experiments on both synthetic and real-world datasets show that our proposed method can achieve better results compared to existing state-of-the-art approaches.
arXiv Detail & Related papers (2020-12-17T23:21:53Z)
- Rank-Based Multi-task Learning for Fair Regression [9.95899391250129]
We develop a novel learning approach for fair multi-task regression models trained on a biased dataset.
We use a popular non-parametric oracle-based method.
arXiv Detail & Related papers (2020-09-23T22:32:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.