Squeeze-and-Remember Block
- URL: http://arxiv.org/abs/2410.00823v1
- Date: Tue, 1 Oct 2024 16:06:31 GMT
- Title: Squeeze-and-Remember Block
- Authors: Rinor Cakaj, Jens Mehnert, Bin Yang
- Abstract summary: The "Squeeze-and-Remember" (SR) block is a novel architectural unit that gives CNNs dynamic memory-like functionalities.
The SR block selectively memorizes important features during training, and then adaptively re-applies these features during inference.
This improves the network's ability to make contextually informed predictions.
- Score: 4.150676163661315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional Neural Networks (CNNs) are important for many machine learning tasks. They are built with different types of layers: convolutional layers that detect features, dropout layers that help to avoid over-reliance on any single neuron, and residual layers that allow the reuse of features. However, CNNs lack a dynamic feature retention mechanism similar to the human brain's memory, limiting their ability to use learned information in new contexts. To bridge this gap, we introduce the "Squeeze-and-Remember" (SR) block, a novel architectural unit that gives CNNs dynamic memory-like functionalities. The SR block selectively memorizes important features during training, and then adaptively re-applies these features during inference. This improves the network's ability to make contextually informed predictions. Empirical results on ImageNet and Cityscapes datasets demonstrate the SR block's efficacy: integration into ResNet50 improved top-1 validation accuracy on ImageNet by 0.52% over dropout2d alone, and its application in DeepLab v3 increased mean Intersection over Union in Cityscapes by 0.20%. These improvements are achieved with minimal computational overhead. This shows the SR block's potential to enhance the capabilities of CNNs in image processing tasks.
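The abstract gives no implementation details, but the behaviour it describes (squeeze the incoming feature map into a compact descriptor, use it to recall stored features, and re-apply them to the current activation) can be sketched roughly as below. This is a minimal PyTorch illustration under assumed design choices, not the authors' implementation: the learnable memory bank, its size `num_memories`, the pooling-plus-MLP gating, and the additive re-application are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SqueezeAndRememberBlock(nn.Module):
    """Illustrative sketch of a memory-augmented block (not the authors' code).

    A bank of `num_memories` learnable feature maps acts as the "memory".
    A squeeze step (global average pooling + small MLP) turns the incoming
    feature map into softmax weights over the bank; the recalled weighted
    memory is added back to the input as a residual.
    """

    def __init__(self, channels: int, height: int, width: int,
                 num_memories: int = 8, reduction: int = 16):
        super().__init__()
        # Learnable memory bank: P feature-map-shaped tensors (assumption).
        self.memory = nn.Parameter(
            torch.randn(num_memories, channels, height, width) * 0.01)
        # Squeeze: pooled descriptor -> weights over the memory bank.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, num_memories),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        squeezed = F.adaptive_avg_pool2d(x, 1).view(b, c)        # (B, C)
        weights = F.softmax(self.fc(squeezed), dim=1)            # (B, P)
        # Recall: convex combination of the stored feature maps.
        recalled = torch.einsum("bp,pchw->bchw", weights, self.memory)
        return x + recalled                                      # additive re-application


# Hypothetical usage: insert after a late stage of a ResNet-style backbone.
if __name__ == "__main__":
    block = SqueezeAndRememberBlock(channels=256, height=14, width=14)
    out = block(torch.randn(2, 256, 14, 14))
    print(out.shape)  # torch.Size([2, 256, 14, 14])
```

Read this way, the block is a drop-in residual unit, which fits the abstract's claim that it integrates into ResNet50 or DeepLab v3 with minimal computational overhead: the only extra cost is the small MLP and the weighted sum over the bank.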
Related papers
- Single image super-resolution based on trainable feature matching attention network [0.0]
Convolutional Neural Networks (CNNs) have been widely employed for image Super-Resolution (SR).
We introduce Trainable Feature Matching (TFM) to amalgamate explicit feature learning into CNNs, augmenting their representation capabilities.
We also propose a streamlined variant called Same-size-divided Region-level Non-Local (SRNL) to alleviate the computational demands of non-local operations.
arXiv Detail & Related papers (2024-05-29T08:31:54Z) - Convolutional Neural Networks Exploiting Attributes of Biological Neurons [7.3517426088986815]
Deep neural networks like Convolutional Neural Networks (CNNs) have emerged as front-runners, often surpassing human capabilities.
Here, we integrate the principles of biological neurons in certain layer(s) of CNNs.
We aim to extract image features to use as input to CNNs, hoping to enhance training efficiency and achieve better accuracy.
arXiv Detail & Related papers (2023-11-14T16:58:18Z) - Transferability of Convolutional Neural Networks in Stationary Learning Tasks [96.00428692404354]
We introduce a novel framework for efficient training of convolutional neural networks (CNNs) for large-scale spatial problems.
We show that a CNN trained on small windows of such signals achieves nearly the same performance on much larger windows without retraining.
Our results show that the CNN is able to tackle problems with many hundreds of agents after being trained with fewer than ten.
arXiv Detail & Related papers (2023-07-21T13:51:45Z) - No More Strided Convolutions or Pooling: A New CNN Building Block for Low-Resolution Images and Small Objects [3.096615629099617]
Convolutional neural networks (CNNs) have achieved resounding success in many computer vision tasks.
However, their performance degrades rapidly on tougher tasks where images are of low resolution or objects are small.
We propose a new CNN building block called SPD-Conv to replace each strided convolution layer and each pooling layer (a hedged sketch of this idea appears after the list below).
arXiv Detail & Related papers (2022-08-07T05:09:18Z) - The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z) - Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture explicitly targeting multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z) - When Residual Learning Meets Dense Aggregation: Rethinking the Aggregation of Deep Neural Networks [57.0502745301132]
We propose Micro-Dense Nets, a novel architecture with global residual learning and local micro-dense aggregations.
Our micro-dense block can be integrated with neural architecture search based models to boost their performance.
arXiv Detail & Related papers (2020-04-19T08:34:52Z) - Improved Residual Networks for Image and Video Recognition [98.10703825716142]
Residual networks (ResNets) represent a powerful type of convolutional neural network (CNN) architecture.
We show consistent improvements in accuracy and learning convergence over the baseline.
Our proposed approach allows us to train extremely deep networks, while the baseline shows severe optimization issues.
arXiv Detail & Related papers (2020-04-10T11:09:50Z) - Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters (a hedged sketch of this idea appears after the list below).
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
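
For the "No More Strided Convolutions or Pooling" entry above, the following is a minimal sketch of a space-to-depth downsampling block, assuming SPD-Conv denotes a space-to-depth rearrangement followed by a non-strided convolution as the snippet suggests; the class name, kernel size, and normalization are illustrative choices, not taken from that paper.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Sketch of an SPD-style downsampling block (illustrative, not the paper's code).

    Space-to-depth rearranges each 2x2 spatial patch into the channel
    dimension (C -> 4C, H -> H/2, W -> W/2), so no pixels are discarded;
    a stride-1 convolution then mixes the stacked channels.
    """

    def __init__(self, in_channels: int, out_channels: int, scale: int = 2):
        super().__init__()
        self.space_to_depth = nn.PixelUnshuffle(scale)  # lossless downsampling
        self.conv = nn.Conv2d(in_channels * scale * scale, out_channels,
                              kernel_size=3, stride=1, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(self.space_to_depth(x))))


# Example: halve the resolution of a 64-channel map without a strided conv or pooling.
x = torch.randn(1, 64, 56, 56)
print(SPDConv(64, 128)(x).shape)  # torch.Size([1, 128, 28, 28])
```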
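For the "Curriculum By Smoothing" entry above, the sketch below illustrates the described idea of low-pass filtering feature maps with a strength that decays during training. It assumes a fixed depthwise Gaussian filter whose standard deviation is annealed between epochs; the module name, kernel size, and decay schedule are hypothetical, not taken from that paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(sigma: float, size: int = 3) -> torch.Tensor:
    """Normalized 2D Gaussian kernel of shape (size, size)."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords**2 / (2 * sigma**2))
    k = torch.outer(g, g)
    return k / k.sum()

class FeatureSmoother(nn.Module):
    """Depthwise Gaussian low-pass filter applied to feature maps.

    Calling `anneal()` shrinks sigma, so the blur weakens as training
    progresses and higher-frequency features are gradually let through.
    """

    def __init__(self, channels: int, sigma: float = 1.0, decay: float = 0.9):
        super().__init__()
        self.channels, self.sigma, self.decay = channels, sigma, decay

    def anneal(self) -> None:
        self.sigma *= self.decay  # e.g. call once per epoch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Rebuild the kernel each call so annealing takes effect immediately.
        k = gaussian_kernel(self.sigma).to(x.device, x.dtype)
        weight = k.repeat(self.channels, 1, 1, 1)  # (C, 1, 3, 3): one kernel per channel
        return F.conv2d(x, weight, padding=1, groups=self.channels)

# Example: smooth the output of a conv layer early in training.
x = torch.randn(1, 32, 28, 28)
smoother = FeatureSmoother(32, sigma=1.0)
print(smoother(x).shape)  # torch.Size([1, 32, 28, 28])
```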