Biologically inspired deep residual networks for computer vision applications
- URL: http://arxiv.org/abs/2205.02551v1
- Date: Thu, 5 May 2022 10:23:43 GMT
- Title: Biologically inspired deep residual networks for computer vision applications
- Authors: Prathibha Varghese and Dr. G. Arockia Selva Saroja
- Abstract summary: We propose a biologically inspired deep residual neural network where the hexagonal convolutions are introduced along the skip connections.
The proposed approach advances the baseline image classification accuracy of vanilla ResNet architectures.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks have become a key technology in many challenging and vigorously researched computer vision tasks. In particular, the classical ResNet is regarded as a state-of-the-art convolutional neural network (CNN) and has been observed to capture features with good generalization ability. In this work, we propose a biologically inspired deep residual neural network in which hexagonal convolutions are introduced along the skip connections. The performance of different ResNet variants using square and hexagonal convolutions is evaluated with the competitive training strategy described in [1]. We show that the proposed approach improves the baseline image classification accuracy of vanilla ResNet architectures on CIFAR-10, and the same was observed over multiple subsets of the ImageNet 2012 dataset. We observed average improvements of 1.35% and 0.48% over baseline top-1 accuracies for ImageNet 2012 and CIFAR-10, respectively. The proposed biologically inspired deep residual networks showed improved generalization performance, and this could be a potential research direction for improving the discriminative ability of state-of-the-art image classification networks.
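The idea of a hexagonal convolution on a square pixel grid can be sketched as a masked 3x3 convolution in which two opposite corner taps are zeroed, leaving a six-neighbour (hexagonal) footprint. This is a minimal illustrative sketch, not the authors' implementation: the mask layout, the `conv2d`/`hex_conv2d`/`residual_block` helpers, and the choice of where the hexagonal path sits are all assumptions.

```python
import numpy as np

# Approximate a hexagonal neighbourhood on a square grid by zeroing
# two opposite corners of a 3x3 kernel (an illustrative convention;
# the paper's exact sampling scheme may differ).
HEX_MASK = np.array([[0, 1, 1],
                     [1, 1, 1],
                     [1, 1, 0]], dtype=float)

def conv2d(x, kernel):
    """Plain 'valid' 2-D correlation of a single-channel image with a 3x3 kernel."""
    h, w = x.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * kernel)
    return out

def hex_conv2d(x, kernel):
    """Same convolution, but with the hexagonal mask applied to the kernel."""
    return conv2d(x, kernel * HEX_MASK)

def residual_block(x, body_kernel, skip_kernel):
    """Toy residual block: standard square convolution on the main path,
    hexagonally masked convolution on the skip connection, merged with ReLU."""
    body = conv2d(x, body_kernel)       # main path: square conv
    skip = hex_conv2d(x, skip_kernel)   # skip path: hexagonal conv
    return np.maximum(body + skip, 0.0)
```

A real implementation would of course use batched, multi-channel convolutions in a deep-learning framework; the mask trick carries over directly by multiplying the learned kernel with `HEX_MASK` before each forward pass.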
Related papers
- Planarian Neural Networks: Evolutionary Patterns from Basic Bilateria Shaping Modern Artificial Neural Network Architectures [7.054776300100835]
The aim of this study is to improve the image classification performance of ANNs via a novel approach inspired by the biological nervous system architecture of planarians.
The proposed planarian neural architecture-based neural network was evaluated on the CIFAR-10 and CIFAR-100 datasets.
arXiv Detail & Related papers (2025-01-08T18:59:36Z)
- Advancing the Biological Plausibility and Efficacy of Hebbian Convolutional Neural Networks [0.0]
The research presented in this paper advances the integration of Hebbian learning into Convolutional Neural Networks (CNNs) for image processing.
Hebbian learning operates on local unsupervised neural information to form feature representations.
Results showed clear indications of sparse hierarchical learning through increasingly complex receptive fields.
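A local, unsupervised Hebbian update of the kind mentioned here can be sketched with Oja's rule, a stabilized Hebbian variant in which a weight vector converges toward the principal component of its inputs. This is a generic illustration, not the cited paper's learning rule; the learning rate and input distribution are arbitrary assumptions.

```python
import numpy as np

def oja_update(w, x, lr=0.01):
    """One Oja's-rule step: the Hebbian term y*x plus a decay -y^2*w
    that keeps the weight norm bounded near 1."""
    y = float(w @ x)               # linear neuron response
    return w + lr * y * (x - y * w)

rng = np.random.default_rng(0)
w = rng.normal(size=2)
w /= np.linalg.norm(w)

# Inputs drawn mostly along one dominant direction; the weight vector
# should align with that direction, with no labels or gradients involved.
direction = np.array([0.8, 0.6])
for _ in range(2000):
    x = direction * rng.normal() + 0.05 * rng.normal(size=2)
    w = oja_update(w, x)
```

The appeal of such rules for biological plausibility is that each update uses only the neuron's own input and output, with no backpropagated error signal.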
arXiv Detail & Related papers (2025-01-06T12:29:37Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Retinotopic Mapping Enhances the Robustness of Convolutional Neural Networks [0.0]
This study investigates whether retinotopic mapping, a critical component of foveated vision, can enhance image categorization and localization performance.
Retinotopic mapping was integrated into the inputs of standard off-the-shelf convolutional neural networks (CNNs).
Surprisingly, the retinotopically mapped network achieved comparable performance in classification.
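Retinotopic mapping is commonly approximated by a log-polar resampling of the image about a fixation point: resolution is dense near the centre (fovea) and falls off logarithmically toward the periphery. The sketch below is one such generic approximation with nearest-neighbour sampling; the cited paper's exact transform and parameters are not reproduced here.

```python
import numpy as np

def log_polar_map(img, out_shape=(32, 32), r_min=1.0):
    """Nearest-neighbour log-polar resampling about the image centre.
    Rows of the output index log-radius, columns index angle."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    nr, na = out_shape
    out = np.zeros(out_shape)
    for i in range(nr):
        # Radii spaced logarithmically: dense near the "fovea" (centre),
        # coarse toward the periphery.
        r = r_min * (r_max / r_min) ** (i / (nr - 1))
        for j in range(na):
            theta = 2 * np.pi * j / na
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = img[y, x]
    return out
```

A nice side effect of this representation is that rotation and scaling of the input become (approximately) translations of the output, which is one reason it interacts well with translation-equivariant CNNs.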
arXiv Detail & Related papers (2024-02-23T18:15:37Z)
- Convolutional Neural Generative Coding: Scaling Predictive Coding to Natural Images [79.07468367923619]
We develop convolutional neural generative coding (Conv-NGC)
We implement a flexible neurobiologically-motivated algorithm that progressively refines latent state maps.
We study the effectiveness of our brain-inspired neural system on the tasks of reconstruction and image denoising.
arXiv Detail & Related papers (2022-11-22T06:42:41Z)
- Increasing the Accuracy of a Neural Network Using Frequency Selective Mesh-to-Grid Resampling [4.211128681972148]
We propose the use of keypoint frequency selective mesh-to-grid resampling (FSMR) for the processing of input data for neural networks.
We show that, depending on the network architecture and classification task, applying FSMR during training aids the learning process.
The classification accuracy can be increased by up to 4.31 percentage points for ResNet50 and the Oxflower17 dataset.
arXiv Detail & Related papers (2022-09-28T21:34:47Z)
- CPFN: Cascaded Primitive Fitting Networks for High-Resolution Point Clouds [51.47100091540298]
We present Cascaded Primitive Fitting Networks (CPFN), which rely on an adaptive patch sampling network to assemble the detection results of global and local primitive detection networks.
CPFN improves the state-of-the-art SPFN performance by 13-14% on high-resolution point cloud datasets and specifically improves the detection of fine-scale primitives by 20-22%.
arXiv Detail & Related papers (2021-08-31T23:27:33Z)
- Implementing a foveal-pit inspired filter in a Spiking Convolutional Neural Network: a preliminary study [0.0]
We have presented a Spiking Convolutional Neural Network (SCNN) that incorporates retinal foveal-pit inspired Difference of Gaussian filters and rank-order encoding.
The model is trained using a variant of the backpropagation algorithm adapted to work with spiking neurons, as implemented in the Nengo library.
The network has achieved up to 90% accuracy, where loss is calculated using the cross-entropy function.
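The retinal Difference-of-Gaussians filtering mentioned above can be sketched as a centre-surround kernel: a narrow excitatory Gaussian centre minus a wider inhibitory Gaussian surround. This is the generic DoG construction, not the paper's foveal-pit parameterisation; the kernel size and sigmas below are illustrative choices.

```python
import numpy as np

def dog_kernel(size=7, sigma_c=1.0, sigma_s=2.0):
    """Centre-surround Difference-of-Gaussians kernel:
    a narrow excitatory centre minus a wide inhibitory surround."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    centre = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    return centre - surround
```

Convolving an image with such a kernel suppresses uniform regions and responds to local contrast, which is the edge-enhancing behaviour attributed to retinal ganglion cells.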
arXiv Detail & Related papers (2021-05-29T15:28:30Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we show that dynamically adapting network architectures tailored to each domain task, along with weight finetuning, benefits both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- Kernel-Based Smoothness Analysis of Residual Networks [85.20737467304994]
Residual networks (ResNets) stand out among powerful modern architectures.
In this paper, we show another distinction between the two models, namely, a tendency of ResNets to promote smoother interpolations.
arXiv Detail & Related papers (2020-09-21T16:32:04Z)
- Interpretation of ResNet by Visualization of Preferred Stimulus in Receptive Fields [2.28438857884398]
We investigate the receptive fields of a ResNet on the classification task in ImageNet.
We find that ResNet has orientation selective neurons and double opponent color neurons.
In addition, we suggest that some inactive neurons in the first layer of ResNet affect the classification task.
arXiv Detail & Related papers (2020-06-02T14:25:26Z)
- Large-scale spatiotemporal photonic reservoir computer for image classification [0.8701566919381222]
We propose a scalable photonic architecture for implementation of feedforward and recurrent neural networks to perform the classification of handwritten digits.
Our experiment exploits off-the-shelf optical and electronic components and currently achieves a network size of 16,384 nodes.
arXiv Detail & Related papers (2020-04-06T10:22:31Z)
- ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions [76.05981545084738]
We propose several ideas for enhancing a binary network to close its accuracy gap from real-valued networks without incurring any additional computational cost.
We first construct a baseline network by modifying and binarizing a compact real-valued network with parameter-free shortcuts.
We show that the proposed ReActNet outperforms all state-of-the-art methods by a large margin.
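The generalized activation functions in ReActNet are the RSign and RPReLU operators, which add learnable shift parameters to the usual sign and PReLU activations. The numpy sketch below shows only their forward shapes with fixed scalar parameters; in the actual network these are learnable per-channel parameters trained end-to-end.

```python
import numpy as np

def rsign(x, alpha):
    """ReActNet-style RSign: shift activations by a learnable
    threshold alpha, then binarize to {-1, +1}."""
    return np.where(x - alpha >= 0, 1.0, -1.0)

def rprelu(x, gamma, beta, zeta):
    """ReActNet-style RPReLU: shift the input by gamma, apply a PReLU
    with negative slope beta, then shift the output by zeta."""
    shifted = x - gamma
    return np.where(shifted >= 0, shifted, beta * shifted) + zeta
```

The point of the learnable shifts is that reshaping and shifting the activation distribution costs almost nothing computationally, yet substantially narrows the accuracy gap between binary and real-valued networks.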
arXiv Detail & Related papers (2020-03-07T02:12:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.