Deep Residual Compensation Convolutional Network without Backpropagation
- URL: http://arxiv.org/abs/2301.11663v1
- Date: Fri, 27 Jan 2023 11:45:09 GMT
- Title: Deep Residual Compensation Convolutional Network without Backpropagation
- Authors: Mubarakah Alotaibi, Richard Wilson
- Abstract summary: We introduce a residual compensation convolutional network, which is the first PCANet-like network trained with hundreds of layers.
To correct the classification errors, we train each layer with new labels derived from the residual information of all its preceding layers.
Our experiments show that our deep network outperforms all existing PCANet-like networks and is competitive with several traditional gradient-based models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: PCANet and its variants have achieved good accuracy on classification
tasks. However, despite the importance of network depth in achieving good
classification accuracy, these networks were trained with a maximum of nine
layers. In this paper, we introduce a residual compensation convolutional
network, which is the first PCANet-like network trained with hundreds of layers
while improving classification accuracy. The design of the proposed network
consists of several convolutional layers, each followed by post-processing
steps and a classifier. To correct the classification errors and significantly
increase the network's depth, we train each layer with new labels derived from
the residual information of all its preceding layers. This learning mechanism
is accomplished by traversing the network's layers in a single forward pass
without backpropagation or gradient computations. Our experiments on four
distinct classification benchmarks (MNIST, CIFAR-10, CIFAR-100, and
TinyImageNet) show that our deep network outperforms all existing PCANet-like
networks and is competitive with several traditional gradient-based models.
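The abstract does not spell out how the residual-derived labels are constructed, so the following is only a minimal sketch of forward-only, layer-wise residual-compensation training. It assumes PCA-derived filters, closed-form ridge-regression classifiers, and residual targets equal to the one-hot labels minus the accumulated outputs of earlier layers; none of these specifics are confirmed by the paper.

```python
# Hedged sketch of forward-only, layer-wise residual-compensation training.
# Assumptions (not from the paper): PCA filters per layer, ridge-regression
# classifiers, and residual targets Y minus the sum of earlier layers' outputs.
import numpy as np

def pca_filters(X, n_filters):
    """Unsupervised 'convolutional' stage: top principal directions of X."""
    Xc = X - X.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:n_filters].T                # (d, n_filters)

def ridge_fit(H, T, lam=1e-2):
    """Closed-form ridge regression classifier; no gradients involved."""
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ T)

def train_residual_network(X, y, n_classes, n_layers=5, n_filters=32):
    Y = np.eye(n_classes)[y]               # one-hot targets
    residual = Y.copy()                    # what remains to be explained
    layers, H = [], X
    for _ in range(n_layers):
        W = pca_filters(H, n_filters)      # this layer's filters (unsupervised)
        H = np.maximum(H @ W, 0.0)         # features + simple post-processing
        C = ridge_fit(H, residual)         # classifier trained on the residual
        residual = residual - H @ C        # residual of all layers so far
        layers.append((W, C))
    return layers

def predict(layers, X):
    scores, H = 0.0, X
    for W, C in layers:
        H = np.maximum(H @ W, 0.0)
        scores = scores + H @ C            # ensemble of layer-wise outputs
    return scores.argmax(axis=1)

# toy usage on random data
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 64)), rng.integers(0, 10, size=200)
model = train_residual_network(X, y, n_classes=10)
pred = predict(model, X)
```

Because each layer only needs a closed-form fit against the current residual, the whole network is built in a single forward sweep, which is what allows the depth to grow to hundreds of layers without gradient computations.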
Related papers
- Neural Collapse in the Intermediate Hidden Layers of Classification Neural Networks [0.0]
Neural collapse (NC) gives a precise description of the representations of classes in the final hidden layer of classification neural networks.
In the present paper, we provide the first comprehensive empirical analysis of the emergence of NC in the intermediate hidden layers.
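As a rough illustration of what such an analysis measures, the sketch below computes one common collapse statistic, the ratio of within-class to between-class feature variability, for a given layer's features; the metric choice and the data are illustrative, not taken from the paper.

```python
# Hedged sketch: quantify neural collapse (the "NC1" variability collapse) at a
# layer as within-class scatter relative to between-class scatter.
import numpy as np

def nc1_ratio(H, y):
    """Smaller values indicate tighter collapse of class clusters."""
    mu_g = H.mean(axis=0)                            # global mean
    within, between = 0.0, 0.0
    for c in np.unique(y):
        Hc = H[y == c]
        mu_c = Hc.mean(axis=0)
        within += ((Hc - mu_c) ** 2).sum()
        between += len(Hc) * ((mu_c - mu_g) ** 2).sum()
    return within / between

# toy usage: a layer with no class structure vs. one with tight clusters
rng = np.random.default_rng(0)
y = rng.integers(0, 5, size=500)
noisy = rng.normal(size=(500, 32))                   # no class structure
centers = rng.normal(size=(5, 32))
collapsed = centers[y] + 0.01 * rng.normal(size=(500, 32))
print(nc1_ratio(noisy, y), nc1_ratio(collapsed, y))
```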
arXiv Detail & Related papers (2023-08-05T01:19:38Z)
- Hidden Classification Layers: Enhancing linear separability between classes in neural networks layers [0.0]
We investigate the impact of a training approach on deep network performance.
We propose a neural network architecture which induces an error function involving the outputs of all the network layers.
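The summary leaves the exact error function unspecified; below is a minimal sketch of the general idea, assuming a linear classification head at every layer and a summed cross-entropy objective (both assumptions, not the paper's design).

```python
# Hedged sketch of a training objective involving the outputs of all layers:
# every hidden layer feeds its own linear classification head and the per-layer
# cross-entropy losses are summed (layer sizes and heads are illustrative).
import numpy as np

def softmax_xent(logits, y):
    logits = logits - logits.max(axis=1, keepdims=True)
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(y)), y].mean()

def all_layer_loss(x, y, layer_weights, head_weights):
    """Sum of classification losses taken at every layer, not only the last."""
    h, total = x, 0.0
    for W, V in zip(layer_weights, head_weights):
        h = np.maximum(h @ W, 0.0)             # hidden layer
        total += softmax_xent(h @ V, y)        # per-layer classification head
    return total

# toy usage
rng = np.random.default_rng(0)
x, y = rng.normal(size=(64, 20)), rng.integers(0, 3, size=64)
Ws = [rng.normal(size=(20, 16)), rng.normal(size=(16, 16))]
Vs = [rng.normal(size=(16, 3)), rng.normal(size=(16, 3))]
print(all_layer_loss(x, y, Ws, Vs))
```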
arXiv Detail & Related papers (2023-06-09T10:52:49Z)
- Diffused Redundancy in Pre-trained Representations [98.55546694886819]
We take a closer look at how features are encoded in pre-trained representations.
We find that learned representations in a given layer exhibit a degree of diffuse redundancy.
Our findings shed light on the nature of representations learned by pre-trained deep neural networks.
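A rough sketch of the kind of probe that would expose such redundancy: a linear classifier fit on a small random subset of a layer's units is compared with one fit on all of them. The data here are synthetic stand-ins for pre-trained activations, and scikit-learn's LogisticRegression is an assumed choice of probe.

```python
# Hedged sketch of a redundancy probe: a random subset of a layer's neurons
# often supports nearly the same downstream accuracy as the full layer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d, k = 1000, 256, 32                       # samples, neurons, subset size
signal = rng.normal(size=(n, 8))
features = signal @ rng.normal(size=(8, d))   # information spread across units
labels = (signal[:, 0] > 0).astype(int)

subset = rng.choice(d, size=k, replace=False)
full = LogisticRegression(max_iter=1000).fit(features, labels)
part = LogisticRegression(max_iter=1000).fit(features[:, subset], labels)
print(full.score(features, labels), part.score(features[:, subset], labels))
```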
arXiv Detail & Related papers (2023-05-31T21:00:50Z)
- Layer Ensembles [95.42181254494287]
We introduce a method for uncertainty estimation that considers a set of independent categorical distributions for each layer of the network.
We show that the method can be further improved by ranking samples, resulting in models that require less memory and time to run.
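A minimal sketch of the idea as described, assuming each layer contributes its own softmax head, the heads are averaged, and predictive entropy is read as uncertainty; the exact combination rule and the sample-ranking refinement mentioned above are not reproduced here.

```python
# Hedged sketch of layer-wise categorical outputs used for uncertainty: one
# softmax head per layer forms an ensemble; entropy of the mean is the score.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def layer_ensemble_predict(x, layer_weights, head_weights):
    h, dists = x, []
    for W, V in zip(layer_weights, head_weights):
        h = np.maximum(h @ W, 0.0)
        dists.append(softmax(h @ V))               # one categorical per layer
    dists = np.stack(dists)                        # (layers, batch, classes)
    mean = dists.mean(axis=0)                      # ensemble prediction
    entropy = -(mean * np.log(mean + 1e-12)).sum(axis=1)   # uncertainty
    return mean.argmax(axis=1), entropy

# toy usage
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 20))
Ws = [rng.normal(size=(20, 16)), rng.normal(size=(16, 16))]
Vs = [rng.normal(size=(16, 4)), rng.normal(size=(16, 4))]
print(layer_ensemble_predict(x, Ws, Vs))
```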
arXiv Detail & Related papers (2022-10-10T17:52:47Z)
- Deep Networks from the Principle of Rate Reduction [32.87280757001462]
This work attempts to interpret modern deep (convolutional) networks from the principles of rate reduction and (shift) invariant classification.
We show that the basic iterative gradient ascent scheme for optimizing the rate reduction of learned features naturally leads to a multi-layer deep network, one iteration per layer.
All components of this "white box" network have precise optimization, statistical, and geometric interpretation.
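For orientation, the sketch below evaluates a rate-reduction objective of the kind referred to here: the coding rate of all features minus the weighted coding rates of the per-class subsets. The epsilon value, normalisation details, and toy data are assumptions rather than the paper's exact settings.

```python
# Hedged sketch of a rate-reduction objective: total coding rate minus the
# class-conditional coding rates; it is large when classes occupy distinct,
# compact subspaces.
import numpy as np

def coding_rate(Z, eps=0.5):
    """Z is d x n (features as columns)."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)[1]

def rate_reduction(Z, y, eps=0.5):
    total = coding_rate(Z, eps)
    compressed = 0.0
    for c in np.unique(y):
        Zc = Z[:, y == c]
        compressed += (Zc.shape[1] / Z.shape[1]) * coding_rate(Zc, eps)
    return total - compressed

# toy usage: class-structured features score higher than unstructured ones
rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=300)
unstructured = rng.normal(size=(16, 300))
structured = np.vstack([3.0 * np.eye(3)[y].T, np.zeros((13, 300))])
structured = structured + 0.1 * rng.normal(size=(16, 300))
print(rate_reduction(unstructured, y), rate_reduction(structured, y))
```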
arXiv Detail & Related papers (2020-10-27T06:01:43Z)
- Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
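A minimal sketch of a graph-convolutional performance predictor under plain assumptions: each sampled sub-network is encoded as a graph with one-hot operation features, embedded by two graph convolutions, and mapped to a scalar accuracy estimate. The random weights shown would in practice be fit by regressing on the measured accuracies of sampled sub-networks.

```python
# Hedged sketch of a GCN accuracy predictor for sampled sub-networks.
import numpy as np

def gcn_predict(A, X, W1, W2, w_out):
    """One architecture: A (nodes x nodes) adjacency, X (nodes x ops) features."""
    A_hat = A + np.eye(len(A))                        # add self-loops
    D = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))     # symmetric normalisation
    A_norm = D @ A_hat @ D
    H = np.maximum(A_norm @ X @ W1, 0.0)              # graph convolution 1
    H = np.maximum(A_norm @ H @ W2, 0.0)              # graph convolution 2
    return float(H.mean(axis=0) @ w_out)              # pooled scalar prediction

# toy usage: a 4-node cell with 5 possible operations
rng = np.random.default_rng(0)
A = np.triu(rng.integers(0, 2, size=(4, 4)), k=1)
A = A + A.T                                           # symmetric adjacency
X = np.eye(5)[rng.integers(0, 5, size=4)]             # one-hot operation types
W1 = rng.normal(size=(5, 16))
W2 = rng.normal(size=(16, 16))
w_out = rng.normal(size=16)
print(gcn_predict(A, X, W1, W2, w_out))
```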
arXiv Detail & Related papers (2020-04-17T19:12:39Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
- Semantic Drift Compensation for Class-Incremental Learning [48.749630494026086]
Class-incremental learning of deep networks sequentially increases the number of classes to be classified.
We propose a new method to estimate the drift, called semantic drift, of features and compensate for it without the need of any exemplars.
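A hedged sketch of one way such drift compensation can work: old-class prototypes live in an embedding space, and their shift is estimated from how current-task embeddings move between the old and the new feature extractor, weighted by proximity to each prototype. The kernel weighting and the toy data are illustrative assumptions.

```python
# Hedged sketch of exemplar-free drift compensation for class prototypes.
import numpy as np

def compensate_prototypes(prototypes, emb_old, emb_new, sigma=1.0):
    """prototypes: (C, d); emb_old/emb_new: (N, d) current-task embeddings."""
    drift = emb_new - emb_old                          # per-sample feature drift
    compensated = []
    for p in prototypes:
        dist2 = ((emb_old - p) ** 2).sum(axis=1)
        w = np.exp(-dist2 / (2 * sigma**2))            # closer samples count more
        w = w / (w.sum() + 1e-12)
        compensated.append(p + w @ drift)              # shift prototype with data
    return np.stack(compensated)

# toy usage: every embedding shifts by (+1, 0, ..., 0); prototypes follow
rng = np.random.default_rng(0)
protos = rng.normal(size=(3, 8))
emb_old = rng.normal(size=(50, 8))
emb_new = emb_old + np.array([1.0] + [0.0] * 7)
print(compensate_prototypes(protos, emb_old, emb_new) - protos)
```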
arXiv Detail & Related papers (2020-04-01T13:31:19Z)
- Convolutional Networks with Dense Connectivity [59.30634544498946]
We introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion.
For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers.
We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks.
arXiv Detail & Related papers (2020-01-08T06:54:53Z)
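A minimal sketch of this dense connectivity pattern, with 1-D features and random weights standing in for the convolutional blocks: each layer receives the concatenation of all preceding feature maps and adds its own maps for every later layer.

```python
# Hedged sketch of dense connectivity: every layer sees all earlier feature
# maps and appends its own, so the final output concatenates the whole block.
import numpy as np

def dense_block(x, layer_weights):
    features = [x]                                     # inputs to the first layer
    for W in layer_weights:
        inp = np.concatenate(features, axis=1)         # all preceding feature maps
        features.append(np.maximum(inp @ W, 0.0))      # this layer's new maps
    return np.concatenate(features, axis=1)            # passed on to what follows

# toy usage: input width 8, growth rate 4, three layers
rng = np.random.default_rng(0)
x = rng.normal(size=(10, 8))
widths = [8, 12, 16]                                   # input width grows by 4 per layer
weights = [rng.normal(size=(w, 4)) for w in widths]
print(dense_block(x, weights).shape)                   # (10, 8 + 3 * 4)
```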