Active Learning of Deep Neural Networks via Gradient-Free Cutting Planes
- URL: http://arxiv.org/abs/2410.02145v3
- Date: Mon, 14 Oct 2024 23:37:50 GMT
- Title: Active Learning of Deep Neural Networks via Gradient-Free Cutting Planes
- Authors: Erica Zhang, Fangzhao Zhang, Mert Pilanci
- Abstract summary: In this work, we investigate an active learning scheme via a novel gradient-free cutting-plane method for ReLU networks of arbitrary depth.
We demonstrate that cutting-plane algorithms, traditionally used in linear models, can be extended to deep neural networks despite their nonconvexity and nonlinear decision boundaries.
We exemplify the effectiveness of our proposed active learning method against popular deep active learning baselines via both synthetic data experiments and sentiment classification on real datasets.
- Score: 40.68266398473983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning methods aim to improve sample complexity in machine learning. In this work, we investigate an active learning scheme via a novel gradient-free cutting-plane training method for ReLU networks of arbitrary depth. We demonstrate, for the first time, that cutting-plane algorithms, traditionally used in linear models, can be extended to deep neural networks despite their nonconvexity and nonlinear decision boundaries. Our results demonstrate that these methods provide a promising alternative to the commonly employed gradient-based optimization techniques in large-scale neural networks. Moreover, this training method induces the first deep active learning scheme known to achieve convergence guarantees. We exemplify the effectiveness of our proposed active learning method against popular deep active learning baselines via both synthetic data experiments and sentiment classification tasks on real datasets.
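The abstract's core claim is that cutting-plane methods, long used for linear models, can drive both training and query selection for deep ReLU networks without gradients. As a rough illustration of the underlying idea only, the sketch below runs classical cutting-plane active learning for a linear separator: the learner keeps a polyhedral version space of parameters consistent with the labels seen so far, queries the pool point its current center is least certain about, and adds the answer as a halfspace cut. The Chebyshev-center step, the synthetic data, and every name here are assumptions made for illustration; the paper's actual extension to deep ReLU networks relies on convex reformulations that are not shown.

```python
# A minimal sketch of classical cutting-plane active learning for a *linear*
# separator. It is NOT the paper's gradient-free method for deep ReLU
# networks; the data, dimensions, and helper below are hypothetical.
import numpy as np
from scipy.optimize import linprog


def chebyshev_center(A, b, box=10.0):
    """Chebyshev center of {w : A w <= b, |w_j| <= box}: the center of the
    largest ball inscribed in the current version space."""
    d = A.shape[1]
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    # Variables are [w (d entries), r]; maximize r, i.e. minimize -r.
    A_cut = np.hstack([A, norms])                    # a_i^T w + r*||a_i|| <= b_i
    I = np.eye(d)
    A_box = np.vstack([np.hstack([I, np.ones((d, 1))]),
                       np.hstack([-I, np.ones((d, 1))])])
    res = linprog(c=np.r_[np.zeros(d), -1.0],
                  A_ub=np.vstack([A_cut, A_box]),
                  b_ub=np.r_[b, np.full(2 * d, box)],
                  bounds=[(None, None)] * d + [(0, None)])
    return res.x[:d]


rng = np.random.default_rng(0)
d, n_pool, n_queries = 5, 200, 20
w_true = rng.normal(size=d)
pool = rng.normal(size=(n_pool, d))
labels = np.sign(pool @ w_true)                      # label oracle, queried lazily

A, b, w, queried = np.zeros((0, d)), np.zeros(0), np.zeros(d), set()
for _ in range(n_queries):
    # Query the unlabeled point the current center is least certain about:
    # it induces the deepest cut through the remaining version space.
    scores = np.abs(pool @ w) if np.any(w) else np.zeros(n_pool)
    scores[list(queried)] = np.inf
    i = int(np.argmin(scores))
    queried.add(i)
    x, y = pool[i], labels[i]
    # Cut away all parameters inconsistent with the label: -y * x^T w <= 0.
    A, b = np.vstack([A, -y * x[None, :]]), np.r_[b, 0.0]
    w = chebyshev_center(A, b)

print("pool accuracy:", np.mean(np.sign(pool @ w) == labels))
```

The point of the loop is that each query both labels a point and shrinks the feasible parameter set, which is what yields convergence guarantees in the linear case; the paper argues an analogous mechanism can be carried over to deep ReLU networks.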
Related papers
- A Novel Method for improving accuracy in neural network by reinstating
traditional back propagation technique [0.0]
We propose a novel instant parameter update methodology that eliminates the need for computing gradients at each layer.
Our approach accelerates learning, avoids the vanishing gradient problem, and outperforms state-of-the-art methods on benchmark data sets.
arXiv Detail & Related papers (2023-08-09T16:41:00Z) - Enhanced quantum state preparation via stochastic prediction of neural
network [0.8287206589886881]
In this paper, we explore an intriguing avenue for enhancing algorithm effectiveness by exploiting the knowledge blindness of neural networks.
Our approach centers around a machine learning algorithm utilized for preparing arbitrary quantum states in a semiconductor double quantum dot system.
By leveraging predictions generated by the neural network, we are able to guide the optimization process to escape local optima.
arXiv Detail & Related papers (2023-07-27T09:11:53Z) - Online Network Source Optimization with Graph-Kernel MAB [62.6067511147939]
We propose Grab-UCB, a graph-kernel multi-armed bandit algorithm that learns the optimal source placement online in large-scale networks.
We describe the network processes with an adaptive graph dictionary model, which typically leads to sparse spectral representations.
We derive the performance guarantees that depend on network parameters, which further influence the learning curve of the sequential decision strategy.
arXiv Detail & Related papers (2023-07-07T15:03:42Z) - Deep Active Learning with Structured Neural Depth Search [18.180995603975422]
Active-iNAS trains several models and selects the model with the best generalization performance for querying the subsequent samples after each active learning cycle.
We propose a novel active learning strategy based on a method called structured variational inference (SVI), or structured neural depth search (SNDS).
At the same time, we theoretically demonstrate that the current VI-based methods based on the mean-field assumption could lead to poor performance.
arXiv Detail & Related papers (2023-06-05T12:00:12Z) - The Integrated Forward-Forward Algorithm: Integrating Forward-Forward
and Shallow Backpropagation With Local Losses [0.0]
We propose an integrated method that combines the strengths of both FFA and shallow backpropagation.
We show that training neural networks with the Integrated Forward-Forward Algorithm has the potential to yield networks with advantageous properties such as robustness.
arXiv Detail & Related papers (2023-05-22T12:10:47Z) - Globally Optimal Training of Neural Networks with Threshold Activation
Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - DLCFT: Deep Linear Continual Fine-Tuning for General Incremental
Learning [29.80680408934347]
We propose an alternative framework to incremental learning where we continually fine-tune the model from a pre-trained representation.
Our method takes advantage of a linearization technique applied to a pre-trained neural network for simple and effective continual learning.
We show that our method can be applied to general continual learning settings; we evaluate it on data-incremental, task-incremental, and class-incremental learning problems.
arXiv Detail & Related papers (2022-08-17T06:58:14Z) - Localized Persistent Homologies for more Effective Deep Learning [60.78456721890412]
We introduce an approach that relies on a new filtration function to account for location during network training.
We demonstrate experimentally on 2D images of roads and 3D image stacks of neuronal processes that networks trained in this manner are better at recovering the topology of the curvilinear structures they extract.
arXiv Detail & Related papers (2021-10-12T19:28:39Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks (a loose selection sketch in this spirit appears after this list).
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
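The BAIT entry above selects informative points via Fisher embeddings. As flagged there, the following is a loose, generic sketch in that spirit rather than BAIT's actual forward-backward algorithm: points are picked greedily by the log-determinant gain of a regularized outer-product (design) matrix built from penultimate-layer embeddings. The embedding matrix and all dimensions are hypothetical stand-ins.

```python
# A generic greedy D-optimal-style selection over pool embeddings, loosely in
# the spirit of Fisher/gradient-embedding active learning. NOT BAIT itself.
import numpy as np

def select_batch(H, k, lam=1e-2):
    """H: (n, d) penultimate-layer embeddings of the unlabeled pool.
    Returns indices of k points chosen greedily by log-det gain."""
    n, d = H.shape
    M_inv = np.eye(d) / lam                      # inverse of the design matrix
    chosen = []
    for _ in range(k):
        Mh = H @ M_inv                           # (n, d)
        gains = np.einsum("nd,nd->n", Mh, H)     # h^T M^{-1} h for every point
        gains[chosen] = -np.inf                  # never re-pick a point
        i = int(np.argmax(gains))
        chosen.append(i)
        # Sherman-Morrison update of M^{-1} after adding h_i h_i^T.
        v = M_inv @ H[i]
        M_inv -= np.outer(v, v) / (1.0 + H[i] @ v)
    return chosen

rng = np.random.default_rng(0)
pool_embeddings = rng.normal(size=(500, 16))     # stand-in for real features
print(select_batch(pool_embeddings, k=10))
```

The Sherman-Morrison step keeps each inverse update at O(d^2), so the selection itself stays cheap relative to retraining the network between query rounds.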
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.