Tricks and Plugins to GBM on Images and Sequences
- URL: http://arxiv.org/abs/2203.00761v1
- Date: Tue, 1 Mar 2022 21:59:00 GMT
- Title: Tricks and Plugins to GBM on Images and Sequences
- Authors: Biyi Fang, Jean Utke, Diego Klabjan
- Abstract summary: We propose a new algorithm for boosting Deep Convolutional Neural Networks (BoostCNN) to combine the merits of dynamic feature selection and BoostCNN.
We also propose a set of algorithms to incorporate boosting weights into a deep learning architecture based on a least squares objective function.
Experiments show that the proposed methods outperform benchmarks on several fine-grained classification tasks.
- Score: 18.939336393665553
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Convolutional neural networks (CNNs) and transformers, which are composed of
multiple processing layers and blocks to learn the representations of data with
multiple abstract levels, are the most successful machine learning models in
recent years. However, millions of parameters and many blocks make them
difficult to train, and finding an ideal architecture or tuning the
parameters can take several days or weeks. Within this paper, we
propose a new algorithm for boosting Deep Convolutional Neural Networks
(BoostCNN) to combine the merits of dynamic feature selection and BoostCNN, and
another new family of algorithms combining boosting and transformers. To learn
these new models, we introduce subgrid selection and importance sampling
strategies and propose a set of algorithms to incorporate boosting weights into
a deep learning architecture based on a least squares objective function. These
algorithms not only reduce the required manual effort for finding an
appropriate network architecture but also result in superior performance and
lower running time. Experiments show that the proposed methods outperform
benchmarks on several fine-grained classification tasks.
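As a rough illustration of the least-squares boosting idea the abstract describes, the sketch below runs gradient boosting where each stage fits the pseudo-residuals under per-sample boosting weights via a least-squares objective. This is a minimal NumPy sketch under stated assumptions: a linear model stands in for the paper's CNN/transformer weak learners, and the weighting scheme and all function names here are illustrative, not the authors' actual algorithm.

```python
import numpy as np

def fit_least_squares(X, residual, w):
    """Weighted least-squares fit of a linear weak learner.

    A linear model stands in for the deep learner trained on a
    least-squares objective; `w` plays the role of boosting weights.
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])      # add bias column
    sw = np.sqrt(w)[:, None]                       # row-wise sqrt-weighting
    coef, *_ = np.linalg.lstsq(sw * Xb, sw * residual, rcond=None)
    return coef

def predict(coef, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ coef

def boost(X, Y, n_stages=20, lr=0.5):
    """Gradient boosting with least-squares stage fitting.

    Each stage fits the pseudo-residuals Y - F; samples with large
    residuals receive larger boosting weights in the next fit.
    """
    F = np.zeros_like(Y, dtype=float)
    learners = []
    for _ in range(n_stages):
        residual = Y - F
        w = np.abs(residual).sum(axis=1) + 1e-12   # boosting weights
        w = w * (len(w) / w.sum())                 # normalize to mean 1
        coef = fit_least_squares(X, residual, w)
        F = F + lr * predict(coef, X)
        learners.append(coef)
    return F, learners
```

Swapping the closed-form linear fit for a few epochs of network training on the same weighted squared error gives the general shape of boosting deep learners, though the paper's subgrid selection and importance sampling strategies are not modeled here.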
Related papers
- SimQ-NAS: Simultaneous Quantization Policy and Neural Architecture Search [6.121126813817338]
Recent one-shot Neural Architecture Search algorithms rely on training a hardware-agnostic super-network tailored to a specific task and then extracting efficient sub-networks for different hardware platforms.
We show that by using multi-objective search algorithms paired with lightly trained predictors, we can efficiently search for both the sub-network architecture and the corresponding quantization policy.
arXiv Detail & Related papers (2023-12-19T22:08:49Z)
- Neuroevolution of Recurrent Architectures on Control Tasks [3.04585143845864]
We implement a massively parallel evolutionary algorithm and run experiments on all 19 OpenAI Gym state-based reinforcement learning control tasks.
We find that dynamic agents match or exceed the performance of gradient-based agents while utilizing orders of magnitude fewer parameters.
arXiv Detail & Related papers (2023-04-03T16:29:18Z)
- OFA$^2$: A Multi-Objective Perspective for the Once-for-All Neural Architecture Search [79.36688444492405]
Once-for-All (OFA) is a Neural Architecture Search (NAS) framework designed to address the problem of searching for efficient architectures for devices with different resource constraints.
We aim to give one step further in the search for efficiency by explicitly conceiving the search stage as a multi-objective optimization problem.
arXiv Detail & Related papers (2023-03-23T21:30:29Z)
- Speeding Up EfficientNet: Selecting Update Blocks of Convolutional Neural Networks using Genetic Algorithm in Transfer Learning [0.0]
We devise a genetic algorithm to select blocks of layers for updating the parameters.
We show that our algorithm yields similar or better results than the baseline in terms of accuracy.
We also devise a metric called block importance to measure the efficacy of each block as an update block.
arXiv Detail & Related papers (2023-03-01T06:35:29Z)
- A Continuous Optimisation Benchmark Suite from Neural Network Regression [0.0]
Training neural networks is an optimisation task that has gained prominence with the recent successes of deep learning.
Gradient descent variants are by far the most common choice, given their reliable performance on large-scale machine learning tasks.
We contribute CORNN, a suite for benchmarking the performance of any continuous black-box algorithm on neural network training problems.
arXiv Detail & Related papers (2021-09-12T20:24:11Z)
- Efficient Feature Transformations for Discriminative and Generative Continual Learning [98.10425163678082]
We propose a simple task-specific feature map transformation strategy for continual learning.
These provide powerful flexibility for learning new tasks, achieved with minimal parameters added to the base architecture.
We demonstrate the efficacy and efficiency of our method with an extensive set of experiments in discriminative (CIFAR-100 and ImageNet-1K) and generative sequences of tasks.
arXiv Detail & Related papers (2021-03-25T01:48:14Z)
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver, generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture explicitly targeting multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- Online Sequential Extreme Learning Machines: Features Combined From Hundreds of Midlayers [0.0]
In this paper, we develop an algorithm called the hierarchical online sequential learning algorithm (H-OS-ELM).
The algorithm can learn chunk by chunk with fixed or varying block size.
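The chunk-by-chunk learning claimed above follows the shape of the standard OS-ELM recursive least-squares update, sketched below. This is a minimal single-layer sketch under stated assumptions: the class and variable names are illustrative, and the paper's hierarchical stacking of hundreds of midlayers is not modeled.

```python
import numpy as np

class OSELM:
    """Minimal online sequential extreme learning machine (illustrative sketch)."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))  # fixed random input weights
        self.b = rng.normal(size=n_hidden)          # fixed random biases
        self.beta = np.zeros((n_hidden, n_out))     # trained output weights
        self.P = None                               # inverse covariance for RLS

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit_chunk(self, X, T):
        """Absorb one chunk of (inputs, targets); chunk size may vary."""
        H = self._hidden(X)
        if self.P is None:
            # Initial chunk: regularized batch least squares.
            self.P = np.linalg.inv(H.T @ H + 1e-3 * np.eye(H.shape[1]))
            self.beta = self.P @ H.T @ T
        else:
            # Subsequent chunks: recursive least-squares update.
            K = np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
            self.P = self.P - self.P @ H.T @ K @ H @ self.P
            self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

Only `beta` and `P` are updated per chunk; the random hidden layer stays fixed, which is what makes the per-chunk update a closed-form least-squares step rather than gradient training.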
arXiv Detail & Related papers (2020-06-12T00:50:04Z)
- Binarizing MobileNet via Evolution-based Searching [66.94247681870125]
We propose using evolutionary search to facilitate the construction and training scheme when binarizing MobileNet.
Inspired by one-shot architecture search frameworks, we manipulate the idea of group convolution to design efficient 1-Bit Convolutional Neural Networks (CNNs).
Our objective is to come up with a tiny yet efficient binary neural architecture by exploring the best candidates of the group convolution.
arXiv Detail & Related papers (2020-05-13T13:25:51Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.