Improved Algorithms for Neural Active Learning
- URL: http://arxiv.org/abs/2210.00423v1
- Date: Sun, 2 Oct 2022 05:03:38 GMT
- Title: Improved Algorithms for Neural Active Learning
- Authors: Yikun Ban, Yuheng Zhang, Hanghang Tong, Arindam Banerjee, Jingrui He
- Abstract summary: We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are better suited to active learning than the one used in state-of-the-art (SOTA) related work.
- Score: 74.89097665112621
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting. In particular, we introduce two regret metrics, defined by minimizing the population loss, that are better suited to active learning than the one used in state-of-the-art (SOTA) related work. The proposed algorithm leverages the powerful representation of NNs for both exploitation and exploration, has a query decision-maker tailored to $k$-class classification problems with a performance guarantee, utilizes the full feedback, and updates parameters in a more practical and efficient manner. These careful designs lead to a better regret upper bound, improving on prior work by a multiplicative factor of $O(\log T)$ and removing the curse of both the input dimensionality and the complexity of the function to be learned. Furthermore, we show that under the hard-margin setting in classification, the algorithm achieves the same performance as the Bayes-optimal classifier in the long run. Finally, extensive experiments comparing the proposed algorithm against SOTA baselines demonstrate its improved empirical performance.
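For intuition, a population-loss regret of the kind described can be read as the cumulative gap to the Bayes-optimal classifier $f^*$; this is a hedged, generic form, not necessarily either of the paper's two exact definitions:

$$ R_T \;=\; \sum_{t=1}^{T} \Big( \mathbb{E}_{(x_t, y_t)}\big[\ell\big(f_t(x_t), y_t\big)\big] \;-\; \mathbb{E}_{(x_t, y_t)}\big[\ell\big(f^*(x_t), y_t\big)\big] \Big). $$

The following minimal sketch illustrates the overall design the abstract describes: an exploitation network scores the $k$ classes, an exploration network adjusts those scores, and a label is queried only when the top two combined scores are close. The class `MLP`, the function `run_stream`, the margin threshold `gamma`, and the update rule are illustrative assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small feed-forward network; stands in for either of the two NNs."""
    def __init__(self, d_in, d_out, width=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, width), nn.ReLU(), nn.Linear(width, d_out)
        )

    def forward(self, x):
        return self.net(x)

def run_stream(stream, d, k, gamma=0.05, lr=1e-3):
    f1 = MLP(d, k)  # exploitation: raw class scores
    f2 = MLP(d, k)  # exploration: learned correction to f1's scores
    opt = torch.optim.SGD(list(f1.parameters()) + list(f2.parameters()), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for x, oracle in stream:          # oracle() returns the true label on request
        scores = f1(x) + f2(x)        # combined exploitation + exploration score
        top2 = torch.topk(scores, 2).values
        if (top2[0] - top2[1]).item() < gamma:  # ambiguous point: query it
            y = oracle()              # full feedback: the actual class label
            opt.zero_grad()
            loss = loss_fn(scores.unsqueeze(0), torch.tensor([y]))
            loss.backward()           # one gradient step on both networks
            opt.step()
        # otherwise predict scores.argmax() and skip the query
```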
Related papers
- Applying Incremental Learning in Binary-Addition-Tree Algorithm for Dynamic Binary-State Network Reliability [0.08158530638728499]
The Binary-Addition-Tree algorithm (BAT) is a powerful implicit enumeration method for solving network reliability and optimization problems.
By introducing incremental learning, we enable the BAT to adapt and improve its performance iteratively as it encounters new data or network changes.
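As a rough illustration of the enumeration core that the incremental variant builds on, the sketch below visits every binary state vector by binary counting; this is a hedged reading of the basic BAT idea, and the names `bat_enumerate` and `state` are illustrative, not taken from the paper.

```python
def bat_enumerate(m):
    """Yield every m-component binary state vector via repeated binary addition."""
    state = [0] * m
    yield tuple(state)
    while True:
        i = m - 1
        while i >= 0 and state[i] == 1:  # propagate the carry leftward
            state[i] = 0
            i -= 1
        if i < 0:                        # overflow: every vector has been visited
            return
        state[i] = 1
        yield tuple(state)

# e.g. list(bat_enumerate(2)) == [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The incremental-learning contribution then amounts to reusing and adapting such an enumeration as the network or its data changes, rather than restarting it from scratch.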
arXiv Detail & Related papers (2024-09-24T04:13:03Z)
- Neural Active Learning Beyond Bandits [69.99592173038903]
We study both stream-based and pool-based active learning with neural network approximations.
We propose two algorithms based on the newly designed exploitation and exploration neural networks for stream-based and pool-based active learning.
arXiv Detail & Related papers (2024-04-18T21:52:14Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
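A hedged sketch of the multi-step inverse kinematics objective that methods of this kind typically optimize (the notation is assumed, not taken from the paper): an encoder $\phi$ and a decoder head $P_\theta$ are trained so that the first action is predictable from the current and a $k$-step-future observation,

$$ \max_{\phi,\, \theta} \; \sum_{k=1}^{K} \mathbb{E}\big[ \log P_\theta\big(a_t \mid \phi(x_t),\, \phi(x_{t+k})\big) \big]. $$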
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Towards Diverse Evaluation of Class Incremental Learning: A Representation Learning Perspective [67.45111837188685]
Class incremental learning (CIL) algorithms aim to continually learn new object classes from incrementally arriving data.
We experimentally analyze neural network models trained by CIL algorithms using various evaluation protocols in representation learning.
arXiv Detail & Related papers (2022-06-16T11:44:11Z)
- Large-scale Optimization of Partial AUC in a Range of False Positive Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on a recent practical envelope-smoothing technique.
Our proposed algorithm can also be used to minimize the sum of ranked-range losses, which likewise lack efficient solvers.
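For reference, the partial AUC over a false-positive-rate range $[\alpha, \beta]$ is commonly defined as below; this is a textbook normalization, not necessarily the paper's exact formulation:

$$ \mathrm{pAUC}(\alpha, \beta) \;=\; \frac{1}{\beta - \alpha} \int_{\alpha}^{\beta} \mathrm{TPR}\big(\mathrm{FPR}^{-1}(u)\big)\, du, $$

where $\mathrm{TPR}$ and $\mathrm{FPR}$ are the true- and false-positive rates of the classifier as the decision threshold varies.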
arXiv Detail & Related papers (2022-03-03T03:46:18Z)
- Analytically Tractable Inference in Deep Neural Networks [0.0]
The Tractable Approximate Gaussian Inference (TAGI) algorithm was shown to be a viable and scalable alternative to backpropagation for shallow fully-connected neural networks.
We demonstrate that TAGI matches or exceeds the performance of backpropagation for training classic deep neural network architectures.
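The analytic alternative to backpropagation rests on closed-form Gaussian moment propagation. As a hedged illustration (standard moments of products of independent Gaussians, not a formula quoted from the paper), a pre-activation $z_i = \sum_j w_{ij} a_j + b_i$ with independent Gaussian weights and activations has

$$ \mu_{z_i} = \sum_j \mu_{w_{ij}} \mu_{a_j} + \mu_{b_i}, \qquad \sigma_{z_i}^2 = \sum_j \left( \mu_{w_{ij}}^2 \sigma_{a_j}^2 + \mu_{a_j}^2 \sigma_{w_{ij}}^2 + \sigma_{w_{ij}}^2 \sigma_{a_j}^2 \right) + \sigma_{b_i}^2, $$

which lets means and variances flow forward (and, with Gaussian conditioning, backward) without gradient computation.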
arXiv Detail & Related papers (2021-03-09T14:51:34Z)
- Benchmarking Simulation-Based Inference [5.3898004059026325]
Recent advances in probabilistic modelling have led to a large number of simulation-based inference algorithms which do not require numerical evaluation of likelihoods.
We provide a benchmark with inference tasks and suitable performance metrics, including an initial selection of algorithms.
We found that the choice of performance metric is critical, that even state-of-the-art algorithms have substantial room for improvement, and that sequential estimation improves sample efficiency.
arXiv Detail & Related papers (2021-01-12T18:31:22Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance on classical control tasks, gridworld-type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.