Feedback Coding for Active Learning
- URL: http://arxiv.org/abs/2103.00654v1
- Date: Sun, 28 Feb 2021 23:00:34 GMT
- Title: Feedback Coding for Active Learning
- Authors: Gregory Canal, Matthieu Bloch, Christopher Rozell
- Abstract summary: We develop an optimal transport-based feedback coding scheme for the task of active example selection.
We evaluate APM on a variety of datasets and demonstrate learning performance comparable to existing active learning methods.
- Score: 15.239252118069762
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The iterative selection of examples for labeling in active machine learning
is conceptually similar to feedback channel coding in information theory: in
both tasks, the objective is to seek a minimal sequence of actions to encode
information in the presence of noise. While this high-level overlap has been
previously noted, there remain open questions on how to best formulate active
learning as a communications system to leverage existing analysis and
algorithms in feedback coding. In this work, we formally identify and leverage
the structural commonalities between the two problems, including the
characterization of encoder and noisy channel components, to design a new
algorithm. Specifically, we develop an optimal transport-based feedback coding
scheme called Approximate Posterior Matching (APM) for the task of active
example selection and explore its application to Bayesian logistic regression,
a popular model in active learning. We evaluate APM on a variety of datasets
and demonstrate learning performance comparable to existing active learning
methods, at a reduced computational cost. These results demonstrate the
potential of directly deploying concepts from feedback channel coding to design
efficient active learning strategies.
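To make the communications analogy concrete, the minimal sketch below (illustrative names and model, not the paper's optimal transport-based APM scheme) applies a bisection-style heuristic that posterior matching suggests for binary Bayesian logistic regression: after fitting a MAP estimate, query the unlabeled point whose predicted label probability is closest to 1/2, i.e., the point whose answer is least predictable under the current model.

```python
# Minimal illustrative sketch (not the paper's APM algorithm): for a
# MAP-fit logistic model, the least predictable query is the pool point
# with predictive probability nearest 1/2, loosely mirroring the
# posterior-matching principle of sending what the decoder knows least.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_map_weights(X, y, prior_prec=1.0, steps=200, lr=0.1):
    """MAP estimate for logistic regression with a Gaussian prior."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) + prior_prec * w
        w -= lr * grad
    return w

def select_query(w, X_pool):
    """Index of the pool point whose p(y=1 | x) is closest to 0.5."""
    probs = sigmoid(X_pool @ w)
    return int(np.argmin(np.abs(probs - 0.5)))

# Toy usage: fit on two labeled points, query the most ambiguous pool point.
X = np.array([[1.0, 0.2], [-1.0, -0.3]])
y = np.array([1.0, 0.0])
X_pool = np.array([[0.9, 0.1], [0.05, -0.02], [-0.8, -0.4]])
w = fit_map_weights(X, y)
print("query index:", select_query(w, X_pool))
```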
Related papers
- MALADY: Multiclass Active Learning with Auction Dynamics on Graphs [0.9831489366502301]
We introduce the Multiclass Active Learning with Auction Dynamics on Graphs (MALADY) framework for efficient active learning.
We generalize the auction dynamics algorithm on similarity graphs for semi-supervised learning in [24] to incorporate a more general optimization functional.
We also introduce a novel active learning acquisition function that uses the dual variable of the auction algorithm to measure classifier uncertainty and prioritize queries near the decision boundaries between classes.
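The paper derives uncertainty from the auction algorithm's dual variables; as a generic stand-in with names of our choosing, the sketch below ranks candidates by classification margin, which likewise prioritizes points near decision boundaries.

```python
# Hedged sketch, not MALADY's exact acquisition: a margin-style score
# ranks points by the gap between their top-two class scores; small
# gaps indicate proximity to a decision boundary, so those points are
# queried first.
import numpy as np

def margin_acquisition(class_scores, k):
    """class_scores: (n_points, n_classes) array; returns k query indices."""
    top2 = np.sort(class_scores, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]          # top-1 minus top-2 score
    return np.argsort(margin)[:k]             # smallest margins first

scores = np.array([[0.9, 0.05, 0.05],
                   [0.4, 0.35, 0.25],
                   [0.5, 0.48, 0.02]])
print(margin_acquisition(scores, k=2))        # most ambiguous points first
```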
arXiv Detail & Related papers (2024-09-14T16:20:26Z)
- The Predictive Forward-Forward Algorithm [79.07468367923619]
We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems.
We design a novel, dynamic recurrent neural system that learns a directed generative circuit jointly with a representation circuit.
PFF efficiently learns to propagate learning signals and updates synapses with forward passes only.
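PFF builds on the forward-forward principle of replacing backpropagation with purely local updates computed from forward passes. The single-layer sketch below (assumed names and a simplified loss, not the PFF generative/representation circuit) raises a layer's "goodness" on positive samples and lowers it on negative ones.

```python
# Hedged single-layer sketch of the forward-forward idea PFF builds on
# (not PFF itself): goodness = sum of squared activations; the layer is
# trained locally so goodness exceeds a threshold on positive data and
# falls below it on negative data, with no backward pass across layers.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))       # one layer: 8 inputs -> 4 units
theta, lr = 2.0, 0.01                        # goodness threshold, step size

def layer_update(x, positive):
    global W
    h = np.maximum(0.0, x @ W)               # local ReLU forward pass
    goodness = np.sum(h * h)
    p = 1.0 / (1.0 + np.exp(theta - goodness))   # P(sample is positive)
    # Ascend log p for positive samples, log(1 - p) for negative ones;
    # d(goodness)/dW = outer(x, 2h) on the active units.
    coeff = (1.0 - p) if positive else -p
    W += lr * coeff * np.outer(x, 2.0 * h)
    return goodness

x_pos, x_neg = rng.normal(size=8), rng.normal(size=8)
print(layer_update(x_pos, True), layer_update(x_neg, False))
```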
arXiv Detail & Related papers (2023-01-04T05:34:48Z)
- Hierarchically Structured Task-Agnostic Continual Learning [0.0]
We take a task-agnostic view of continual learning and develop a hierarchical information-theoretic optimality principle.
We propose a neural network layer, called the Mixture-of-Variational-Experts layer, that alleviates forgetting by creating a set of information processing paths.
Our approach can operate in a task-agnostic way, i.e., it does not require the task-specific knowledge that many existing continual learning algorithms depend on.
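As a rough illustration of the routing mechanism (a generic sparse mixture-of-experts gate, not the paper's variational formulation), each input below is processed by only its top-scoring expert paths, so different tasks can occupy different paths and overwrite each other less.

```python
# Generic sparse mixture-of-experts sketch (names and sizes assumed):
# a gating network scores the experts, and only the top-k expert paths
# process the input, weighted by their (renormalized) gate probabilities.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_experts = 6, 3, 4
experts = rng.normal(scale=0.1, size=(n_experts, n_in, n_out))
gate_W = rng.normal(scale=0.1, size=(n_in, n_experts))

def moe_layer(x, top_k=2):
    logits = x @ gate_W
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # softmax gate over experts
    keep = np.argsort(probs)[-top_k:]          # sparse path selection
    out = np.zeros(n_out)
    for e in keep:                             # weighted expert outputs
        out += probs[e] * (x @ experts[e])
    return out

print(moe_layer(rng.normal(size=n_in)))
```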
arXiv Detail & Related papers (2022-11-14T19:53:15Z)
- Batch Active Learning from the Perspective of Sparse Approximation [12.51958241746014]
Active learning enables efficient model training by leveraging interactions between machine learning agents and human annotators.
We propose a novel framework that formulates batch active learning from the perspective of sparse approximation.
Our active learning method aims to find an informative subset from the unlabeled data pool such that the corresponding training loss function approximates its full data pool counterpart.
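One hedged way to instantiate this view is matching pursuit over per-example gradients: greedily pick pool points whose summed gradient tracks the full pool's. The sketch below is that simplification with assumed names, not the paper's actual solver.

```python
# Hedged sketch of the sparse-approximation view: treat per-example
# loss gradients as a dictionary and greedily select a subset whose
# summed gradient best matches the full pool's, in the spirit of
# orthogonal matching pursuit.
import numpy as np

def greedy_batch(grads, k):
    """grads: (n_examples, dim) per-example gradients; returns k indices."""
    target = grads.sum(axis=0)                  # full-pool gradient
    residual, chosen = target.copy(), []
    for _ in range(k):
        scores = grads @ residual               # correlation with residual
        scores[chosen] = -np.inf                # never re-pick an index
        j = int(np.argmax(scores))
        chosen.append(j)
        residual = residual - grads[j]          # simple residual update
    return chosen

rng = np.random.default_rng(0)
print(greedy_batch(rng.normal(size=(50, 10)), k=5))
```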
arXiv Detail & Related papers (2022-11-01T03:20:28Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms in the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are better suited to active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Interactive Learning from Activity Description [11.068923430996575]
We present a novel interactive learning protocol that enables training request-fulfilling agents by verbally describing their activities.
Our protocol gives rise to a new family of interactive learning algorithms that offer complementary advantages over traditional algorithms such as imitation learning (IL) and reinforcement learning (RL).
We develop an algorithm that practically implements this protocol and employ it to train agents in two challenging request-fulfilling problems using purely language-description feedback.
arXiv Detail & Related papers (2021-02-13T22:51:11Z)
- Semi-supervised Batch Active Learning via Bilevel Optimization [89.37476066973336]
We formulate our approach as a data summarization problem via bilevel optimization.
We show that our method is highly effective in keyword detection tasks in the regime where only a few labeled samples are available.
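In generic form (our notation, not copied from the paper), the bilevel summarization problem reads: the inner problem trains on the chosen subset, and the outer problem scores that subset on the full data.

```latex
% Generic bilevel data-summarization objective (notation assumed):
% pick a small subset S of the data D such that a model trained on S
% (inner problem) still scores well on the full data (outer problem).
\min_{S \subseteq D,\ |S| \le k} \mathcal{L}_{\mathrm{outer}}\!\left(\theta^{*}(S);\, D\right)
\quad \text{s.t.} \quad
\theta^{*}(S) = \arg\min_{\theta} \mathcal{L}_{\mathrm{inner}}(\theta;\, S)
```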
arXiv Detail & Related papers (2020-10-19T16:53:24Z)
- Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers [84.57980167400513]
Most work on feed-forward networks that combine top-down and bottom-up feedback is limited to classification problems.
Neural Function Modules (NFM) aim to introduce this structural capability into deep learning more broadly.
The key contribution of our work is to combine attention, sparsity, top-down and bottom-up feedback, in a flexible algorithm.
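As a generic illustration of combining these ingredients (not NFM's actual architecture), the sketch below lets a top-down query attend over bottom-up features and keeps only a sparse subset of them as "arguments".

```python
# Generic illustration, not NFM itself: a top-down query scores
# bottom-up features with attention, a sparse top-k set is retained,
# and the kept features are combined by renormalized softmax weights.
import numpy as np

def topdown_sparse_attention(low_feats, topdown_query, top_k=2):
    """low_feats: (n, d) bottom-up features; topdown_query: (d,)."""
    scores = low_feats @ topdown_query              # attention logits
    keep = np.argsort(scores)[-top_k:]              # sparse argument set
    w = np.exp(scores[keep] - scores[keep].max())
    w /= w.sum()                                    # softmax over kept args
    return w @ low_feats[keep]                      # modulated summary

rng = np.random.default_rng(0)
print(topdown_sparse_attention(rng.normal(size=(5, 4)),
                               rng.normal(size=4)))
```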
arXiv Detail & Related papers (2020-10-15T20:43:17Z)
- Information Theoretic Meta Learning with Gaussian Processes [74.54485310507336]
We formulate meta learning using information theoretic concepts; namely, mutual information and the information bottleneck.
By making use of variational approximations to the mutual information, we derive a general and tractable framework for meta learning.
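For reference, the information bottleneck the summary refers to has the standard form below, where Z is a learned representation of the input X and Y is the prediction target; the meta-learning specifics live in the paper's variational treatment.

```latex
% Standard information-bottleneck objective (generic form): keep Z
% predictive of Y while compressing away the rest of X; beta trades
% off prediction against compression.
\max_{p(z \mid x)} \; I(Z; Y) - \beta\, I(Z; X), \qquad \beta > 0
```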
arXiv Detail & Related papers (2020-09-07T16:47:30Z)
- Bayesian active learning for production, a systematic study and a reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We conduct a systematic study of how the most common issues in real-world datasets affect the deep active learning process.
We derive two techniques that can speed up the active learning loop: partial uncertainty sampling and larger query sizes.
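Partial uncertainty sampling admits a simple reading: score a random fraction of the unlabeled pool each round instead of all of it. The sketch below implements that reading with assumed function and parameter names; the paper's library may differ.

```python
# Hedged sketch of partial uncertainty sampling (names are ours):
# score only a random fraction of the pool, then take the most
# uncertain examples (by predictive entropy) from that subset,
# which shrinks the per-round scoring cost.
import numpy as np

def partial_uncertainty_sampling(probs, query_size, fraction=0.1, seed=0):
    """probs: (n_pool, n_classes) predicted probabilities."""
    rng = np.random.default_rng(seed)
    n = probs.shape[0]
    subset = rng.choice(n, size=max(query_size, int(fraction * n)),
                        replace=False)
    entropy = -np.sum(probs[subset] * np.log(probs[subset] + 1e-12), axis=1)
    return subset[np.argsort(entropy)[-query_size:]]

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(5), size=1000)       # toy 5-class pool
print(partial_uncertainty_sampling(p, query_size=8))
```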
arXiv Detail & Related papers (2020-06-17T14:51:11Z)
- Model-based Multi-Agent Reinforcement Learning with Cooperative Prioritized Sweeping [4.5497948012757865]
We present a new model-based reinforcement learning algorithm, Cooperative Prioritized Sweeping.
The algorithm allows for sample-efficient learning on large problems by exploiting a factorization to approximate the value function.
Our method outperforms the state-of-the-art sparse cooperative Q-learning algorithm, both on the well-known SysAdmin benchmark and on randomized environments.
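For intuition about the prioritized-sweeping backbone, the sketch below shows the classic single-agent version, where model-based backups are ordered by a priority queue keyed on Bellman-error magnitude; the paper's cooperative, factored multi-agent variant is substantially more involved and is not reproduced here.

```python
# Classic single-agent prioritized sweeping (not the paper's cooperative
# variant): keep a priority queue of states keyed by Bellman error and
# always back up the state whose value is most out of date.
import heapq
import numpy as np

def prioritized_sweeping(P, R, gamma=0.9, eps=1e-4, max_updates=10_000):
    """P: (S, A, S) transition model, R: (S, A) rewards; returns V."""
    S, A, _ = P.shape
    V = np.zeros(S)

    def backup(s):
        return max(R[s, a] + gamma * P[s, a] @ V for a in range(A))

    pq = [(-abs(backup(s) - V[s]), s) for s in range(S)]
    heapq.heapify(pq)
    for _ in range(max_updates):
        if not pq:
            break
        neg_prio, s = heapq.heappop(pq)
        if -neg_prio < eps:
            break
        V[s] = backup(s)
        # Re-queue states whose error grew (a real implementation would
        # restrict this pass to predecessors of s via the learned model).
        for s2 in range(S):
            err = abs(backup(s2) - V[s2])
            if err > eps:
                heapq.heappush(pq, (-err, s2))
    return V

rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(4), size=(4, 2))    # random 4-state, 2-action MDP
R = rng.normal(size=(4, 2))
print(prioritized_sweeping(P, R))
```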
arXiv Detail & Related papers (2020-01-15T19:13:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.