A Gradient Boosting Approach for Training Convolutional and Deep Neural
Networks
- URL: http://arxiv.org/abs/2302.11327v2
- Date: Thu, 23 Feb 2023 09:13:03 GMT
- Title: A Gradient Boosting Approach for Training Convolutional and Deep Neural
Networks
- Authors: Seyedsaman Emami and Gonzalo Martínez-Muñoz
- Abstract summary: We introduce two procedures for training Convolutional Neural Networks (CNNs) and Deep Neural Networks (DNNs) based on Gradient Boosting (GB).
The presented models achieve higher classification accuracy than standard CNNs and DNNs with the same architectures.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning has revolutionized the computer vision and image classification
domains. In this context, Convolutional Neural Network (CNN) based
architectures are the most widely applied models. In this article, we
introduce two procedures for training Convolutional Neural Networks (CNNs) and
Deep Neural Networks (DNNs) based on Gradient Boosting (GB), namely GB-CNN and GB-DNN.
These models are trained to fit the gradient of the loss function, i.e., the
pseudo-residuals of the previous models. At each iteration, the proposed method
adds one dense layer to an exact copy of the previous deep NN model. The
weights of the dense layers trained in previous iterations are frozen to
prevent over-fitting, permitting the model to fit the new dense layer as well as to
fine-tune the convolutional layers (for GB-CNN) while still utilizing the
information already learned. Through extensive experimentation on different
2D-image classification and tabular datasets, the presented models show
higher classification accuracy than standard CNNs and DNNs with the same
architectures.
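The abstract describes the boosting mechanics only in prose, so the following is a minimal sketch, in Keras-style Python, of how such a loop could be wired up. It is an illustration under stated assumptions rather than the authors' implementation: the squared loss (which makes the pseudo-residuals simply y - F), the layer sizes, the shrinkage value, and the helper names build_conv_base, gb_cnn_fit, and gb_cnn_predict are all hypothetical, as is the inference rule of summing the scaled outputs of every round.

```python
# Minimal sketch of the GB-CNN training loop described in the abstract.
# Assumptions (not from the paper): squared loss, so the pseudo-residuals
# reduce to y - F; layer sizes; a shrinkage rate of 0.1; and the helper
# names used below, which are all hypothetical.
import numpy as np
from tensorflow.keras import Model, layers


def build_conv_base(input_shape):
    """Convolutional feature extractor, shared and fine-tuned across rounds."""
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.Flatten()(x)
    return Model(inp, x, name="conv_base")


def gb_cnn_fit(x_train, y_onehot, n_rounds=5, shrinkage=0.1,
               units=128, epochs=3):
    n_classes = y_onehot.shape[1]
    conv = build_conv_base(x_train.shape[1:])   # reused and kept trainable
    dense_stack = []                            # dense layers from past rounds
    round_models = []
    ensemble_pred = np.zeros((len(x_train), n_classes), dtype=np.float32)  # F_0

    for t in range(n_rounds):
        # Pseudo-residuals of the current ensemble (negative gradient of the
        # squared loss assumed in this sketch).
        residuals = y_onehot - ensemble_pred

        # "Exact copy" of the previous network: earlier dense layers are
        # reused but frozen; one new dense layer and a fresh linear head
        # are added and trained; the conv base keeps fine-tuning.
        for d in dense_stack:
            d.trainable = False
        inp = layers.Input(shape=x_train.shape[1:])
        h = conv(inp)
        for d in dense_stack:
            h = d(h)
        new_dense = layers.Dense(units, activation="relu", name=f"dense_{t}")
        h = new_dense(h)
        out = layers.Dense(n_classes, activation="linear", name=f"head_{t}")(h)
        model = Model(inp, out)
        model.compile(optimizer="adam", loss="mse")
        model.fit(x_train, residuals, epochs=epochs, verbose=0)

        # Additive update F_t = F_{t-1} + shrinkage * h_t(x).
        ensemble_pred += shrinkage * model.predict(x_train, verbose=0)
        dense_stack.append(new_dense)
        round_models.append(model)

    return round_models


def gb_cnn_predict(round_models, x, shrinkage=0.1):
    """Class predictions from the accumulated additive model (an assumption:
    the scaled outputs of every round are summed at inference time)."""
    pred = sum(shrinkage * m.predict(x, verbose=0) for m in round_models)
    return np.argmax(pred, axis=1)
```

Sharing a single convolutional base object across rounds is what lets the sketch keep fine-tuning the convolutional filters while the dense layers from earlier rounds stay frozen, mirroring the freeze-and-extend scheme described above.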
Related papers
- Model Parallel Training and Transfer Learning for Convolutional Neural Networks by Domain Decomposition [0.0]
Deep convolutional neural networks (CNNs) have been shown to be very successful in a wide range of image processing applications.
Due to their increasing number of model parameters and an increasing availability of large amounts of training data, parallelization strategies to efficiently train complex CNNs are necessary.
arXiv Detail & Related papers (2024-08-26T17:35:01Z)
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework to unify CNN and GNN together via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs containing dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z)
- Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels [56.69755544814834]
Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures.
This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers.
arXiv Detail & Related papers (2024-01-25T19:05:53Z)
- A Domain Decomposition-Based CNN-DNN Architecture for Model Parallel Training Applied to Image Recognition Problems [0.0]
A novel CNN-DNN architecture is proposed that naturally supports a model parallel training strategy.
The proposed approach can significantly reduce the required training time compared to the global model.
Results show that the proposed approach can also help to improve the accuracy of the underlying classification problem.
arXiv Detail & Related papers (2023-02-13T18:06:59Z)
- Improved Convergence Guarantees for Shallow Neural Networks [91.3755431537592]
We prove convergence of depth 2 neural networks, trained via gradient descent, to a global minimum.
Our model has the following features: regression with a quadratic loss function, fully connected feedforward architecture, ReLU activations, Gaussian data instances, and adversarial labels.
These results strongly suggest that, at least in our model, the convergence phenomenon extends well beyond the "NTK regime".
arXiv Detail & Related papers (2022-12-05T14:47:52Z)
- Variational Tensor Neural Networks for Deep Learning [0.0]
We propose an integration of tensor networks (TN) into deep neural networks (NNs).
This, in turn, results in a scalable tensor neural network (TNN) architecture capable of efficient training over a large parameter space.
We validate the accuracy and efficiency of our method by designing TNN models and providing benchmark results for linear and non-linear regressions, data classification and image recognition on MNIST handwritten digits.
arXiv Detail & Related papers (2022-11-26T20:24:36Z)
- Self-interpretable Convolutional Neural Networks for Text Classification [5.55878488884108]
This paper develops an approach for interpreting convolutional neural networks for text classification problems by exploiting the local-linear models inherent in ReLU-DNNs.
We show that our proposed technique produces parsimonious models that are self-interpretable and have comparable performance to a more complex CNN model.
arXiv Detail & Related papers (2021-05-18T15:19:59Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that trained networks by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
- Adaptive Signal Variances: CNN Initialization Through Modern Architectures [0.7646713951724012]
Deep convolutional neural networks (CNNs) have earned widespread confidence for their performance on image processing tasks.
CNN practitioners widely understand that the stability of learning depends on how the model parameters in each layer are initialized.
arXiv Detail & Related papers (2020-08-16T11:26:29Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)