CP-CNN: Core-Periphery Principle Guided Convolutional Neural Network
- URL: http://arxiv.org/abs/2304.10515v1
- Date: Mon, 27 Mar 2023 03:59:43 GMT
- Title: CP-CNN: Core-Periphery Principle Guided Convolutional Neural Network
- Authors: Lin Zhao, Haixing Dai, Zihao Wu, Dajiang Zhu, Tianming Liu
- Abstract summary: We implement the core-periphery principle in the design of network wiring patterns and the sparsification of the convolution operation.
Our work contributes to the growing field of brain-inspired AI by incorporating insights from the human brain into the design of neural networks.
- Score: 9.015666133509857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The evolution of convolutional neural networks (CNNs) can be largely
attributed to the design of their architecture, i.e., the network wiring pattern.
Neural architecture search (NAS) advances this by automating the search for the
optimal network architecture, but the resulting network instance may not
generalize well across different tasks. To overcome this, exploring network design
principles that are generalizable across tasks is a more practical solution. In
this study, we explore a novel brain-inspired design principle based on the
core-periphery property of the human brain network to guide the design of CNNs.
Our work draws inspiration from recent studies suggesting that artificial and
biological neural networks may have common principles in optimizing network
architecture. We implement the core-periphery principle in the design of
network wiring patterns and the sparsification of the convolution operation.
The resulting core-periphery principle guided CNNs (CP-CNNs) are evaluated on
three different datasets. The experiments demonstrate its effectiveness and
superiority over CNN- and ViT-based methods. Overall, our work
contributes to the growing field of brain-inspired AI by incorporating insights
from the human brain into the design of neural networks.
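The abstract does not spell out the sparsification mechanism, but a natural reading is a fixed binary mask over channel pairs: dense connectivity among "core" channels and progressively sparser connectivity toward the "periphery". The sketch below illustrates that idea; the split ratio, connection probabilities, and class name are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class CorePeripheryConv2d(nn.Module):
    """Sketch of a core-periphery sparsified convolution.

    Channels are split into a densely connected 'core' and a sparsely
    connected 'periphery'. A fixed binary mask zeroes out most
    periphery-periphery channel pairs, mimicking the core-periphery
    wiring of brain networks. All fractions/probabilities here are
    illustrative assumptions, not values from the paper.
    """
    def __init__(self, in_ch, out_ch, k=3, core_frac=0.5,
                 p_core=1.0, p_mixed=0.5, p_periph=0.1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        core_in, core_out = int(in_ch * core_frac), int(out_ch * core_frac)
        # Connection probability for each (out_channel, in_channel) pair:
        prob = torch.full((out_ch, in_ch), p_periph)
        prob[:core_out, :] = p_mixed        # core outputs see many inputs
        prob[:, :core_in] = p_mixed         # core inputs feed many outputs
        prob[:core_out, :core_in] = p_core  # core-core is dense
        mask = torch.bernoulli(prob)[:, :, None, None]  # broadcast over kernel
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Apply the fixed binary channel-pair mask to the kernel weights.
        return nn.functional.conv2d(
            x, self.conv.weight * self.mask, self.conv.bias,
            padding=self.conv.padding)

y = CorePeripheryConv2d(16, 32)(torch.randn(1, 16, 8, 8))  # -> (1, 32, 8, 8)
```

Because the mask is a registered buffer, the wiring pattern is fixed at construction and the masked weights never receive gradient signal through the zeroed pairs, which is one simple way to realize a fixed core-periphery wiring.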
Related papers
- Cartesian Genetic Programming Approach for Designing Convolutional Neural Networks [0.0]
In designing artificial neural networks, a crucial aspect of any innovative approach is proposing a novel neural architecture.
In this work, we use a pure Genetic Programming approach to design CNNs, employing only one genetic operation.
Preliminary experiments show that our methodology yields promising results.
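As a rough illustration of a mutation-only evolutionary loop of the kind Cartesian Genetic Programming uses, consider this toy sketch; the layer vocabulary, fitness function, and (1+lambda) scheme are placeholders, not the paper's setup.

```python
import random

# Toy (1+lambda) evolution with mutation as the only genetic operator.
# The "genome" is a list of layer genes; fitness() is a stand-in for
# validation accuracy after training the decoded CNN.
LAYER_CHOICES = ["conv3x3", "conv5x5", "maxpool", "identity"]

def mutate(genome, rate=0.2):
    return [random.choice(LAYER_CHOICES) if random.random() < rate else g
            for g in genome]

def fitness(genome):          # placeholder: train/evaluate the decoded CNN here
    return -genome.count("identity")  # toy objective: prefer active nodes

def evolve(length=10, lam=4, generations=50):
    parent = [random.choice(LAYER_CHOICES) for _ in range(length)]
    for _ in range(generations):
        children = [mutate(parent) for _ in range(lam)]
        # (1+lambda) selection: keep the best of parent and offspring
        parent = max([parent] + children, key=fitness)
    return parent

print(evolve())
```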
arXiv Detail & Related papers (2024-09-30T18:10:06Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
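A minimal sketch of the underlying representation, assuming the standard neurons-as-nodes, weights-as-edges construction for an MLP (the paper's exact featurization may differ):

```python
import numpy as np

def mlp_to_graph(weight_mats):
    """Represent an MLP as a computational graph: one node per neuron,
    one directed edge per weight. Returns (num_nodes, edge_index,
    edge_attr) in the usual GNN format."""
    sizes = [weight_mats[0].shape[1]] + [w.shape[0] for w in weight_mats]
    offsets = np.cumsum([0] + sizes)
    src, dst, attr = [], [], []
    for layer, W in enumerate(weight_mats):
        for j in range(W.shape[0]):          # target neuron
            for i in range(W.shape[1]):      # source neuron
                src.append(offsets[layer] + i)
                dst.append(offsets[layer + 1] + j)
                attr.append(W[j, i])
    return offsets[-1], np.array([src, dst]), np.array(attr)

# Two-layer MLP: 3 -> 4 -> 2
n, edges, weights = mlp_to_graph([np.random.randn(4, 3), np.random.randn(2, 4)])
print(n, edges.shape, weights.shape)   # 9 nodes, (2, 20) edge index, 20 weights
```

A GNN operating on this graph can then process networks of diverse shapes with a single model, since only the node/edge counts change, not the input format.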
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
However, these SNNs are built on homogeneous neurons that use a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
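To make "heterogeneous coding schemes" concrete, here is a small sketch of two classic codings, rate and time-to-first-spike, that a hybrid SNN might mix across layers; the particular mixing policy mentioned in the comment is an assumption for illustration.

```python
import numpy as np

def rate_code(x, T=20):
    """Rate coding: input intensity -> spike probability per timestep."""
    return (np.random.rand(T, *x.shape) < x).astype(np.float32)

def latency_code(x, T=20):
    """Temporal (time-to-first-spike) coding: stronger input -> earlier spike."""
    spikes = np.zeros((T, *x.shape), dtype=np.float32)
    t_fire = np.clip(((1.0 - x) * (T - 1)).astype(int), 0, T - 1)
    for idx in np.ndindex(x.shape):
        spikes[(t_fire[idx],) + idx] = 1.0
    return spikes

x = np.random.rand(4)            # normalized input intensities in [0, 1]
# A hybrid design could, e.g., use latency coding at the input layer
# (sparse, fast) and rate coding deeper in the network (robust).
print(rate_code(x).sum(0), latency_code(x).argmax(0))
```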
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Deep Neural Networks as Complex Networks [1.704936863091649]
We use Complex Network Theory to represent Deep Neural Networks (DNNs) as directed weighted graphs.
We introduce metrics to study DNNs as dynamical systems, with a granularity that spans from individual weights through neurons to whole layers.
We show that our metrics discriminate low- vs. high-performing networks.
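One of the simplest such graph metrics is node strength, the weighted analogue of degree; the sketch below computes it layer by layer for an MLP and is only a sample of the kind of metric the paper studies.

```python
import numpy as np

def node_strengths(weight_mats):
    """Per-neuron 'strength' (sum of |weight| over incident edges), a
    basic Complex Network Theory metric, computed for an MLP viewed as
    a directed weighted graph."""
    strengths = []
    for layer in range(len(weight_mats) + 1):
        inc = np.abs(weight_mats[layer - 1]).sum(axis=1) if layer > 0 else 0
        out = np.abs(weight_mats[layer]).sum(axis=0) if layer < len(weight_mats) else 0
        strengths.append(inc + out)
    return strengths

W = [np.random.randn(8, 4), np.random.randn(2, 8)]   # a 4 -> 8 -> 2 MLP
for layer, s in enumerate(node_strengths(W)):
    print(f"layer {layer}: mean strength {np.mean(s):.3f}")
```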
arXiv Detail & Related papers (2022-09-12T16:26:04Z)
- Deep Reinforcement Learning Guided Graph Neural Networks for Brain Network Analysis [61.53545734991802]
We propose a novel brain network representation framework, namely BN-GNN, which searches for the optimal GNN architecture for each brain network.
Our proposed BN-GNN improves the performance of traditional GNNs on different brain network analysis tasks.
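The full method uses deep reinforcement learning; as a heavily simplified caricature of per-input architecture selection, a bandit choosing the GNN depth conveys the flavor. The depth choices and reward below are placeholders, not BN-GNN's actual search space or training signal.

```python
import numpy as np

# Toy epsilon-greedy bandit that picks the number of message-passing
# layers; in BN-GNN proper, an RL agent makes this choice per brain
# network and the reward comes from downstream task accuracy.
DEPTHS = [1, 2, 3, 4]

class DepthBandit:
    def __init__(self, eps=0.1):
        self.value = {d: 0.0 for d in DEPTHS}
        self.count = {d: 0 for d in DEPTHS}
        self.eps = eps

    def pick(self):
        if np.random.rand() < self.eps:
            return int(np.random.choice(DEPTHS))
        return max(DEPTHS, key=lambda d: self.value[d])

    def update(self, d, reward):   # incremental mean of observed rewards
        self.count[d] += 1
        self.value[d] += (reward - self.value[d]) / self.count[d]

bandit = DepthBandit()
for _ in range(100):
    d = bandit.pick()
    reward = np.random.rand() + 0.1 * d   # stand-in for task accuracy
    bandit.update(d, reward)
print(bandit.value)
```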
arXiv Detail & Related papers (2022-03-18T07:05:27Z)
- Max and Coincidence Neurons in Neural Networks [0.07614628596146598]
We optimize networks containing models of the max and coincidence neurons using neural architecture search.
We analyze the structure, operations, and neurons of optimized networks to develop a signal-processing ResNet.
The developed network achieves an average 2% improvement in accuracy and a 25% reduction in network size across a variety of datasets.
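The exact neuron definitions live in the paper; one plausible reading, sketched below, is that a max neuron responds to its strongest weighted input while a coincidence neuron requires several inputs to be active at once. The threshold and activation rule here are illustrative assumptions.

```python
import numpy as np

def max_neuron(x, w):
    """Max neuron: responds to the single strongest weighted input,
    rather than to the weighted sum of a standard neuron."""
    return np.max(w * x)

def coincidence_neuron(x, w, theta=0.5):
    """Coincidence neuron (one plausible reading): responds only when
    at least two weighted inputs are simultaneously active."""
    active = (w * x) > theta
    return float(active.sum() >= 2) * np.sum(w * x * active)

x = np.array([0.9, 0.8, 0.1])
w = np.array([1.0, 1.0, 1.0])
print(max_neuron(x, w), coincidence_neuron(x, w))
```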
arXiv Detail & Related papers (2021-10-04T07:13:50Z)
- What can linearized neural networks actually say about generalization? [67.83999394554621]
For certain infinitely wide neural networks, neural tangent kernel (NTK) theory fully characterizes generalization.
We show that the linear approximations can indeed rank the learning complexity of certain tasks for neural networks.
Our work provides concrete examples of novel deep learning phenomena which can inspire future theoretical research.
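The linearization in question is the first-order Taylor expansion of the network in its parameters; the sketch below checks it numerically for a tiny tanh network (architecture and perturbation scale chosen only for illustration).

```python
import numpy as np

# f_lin(x; theta) = f(x; theta0) + grad_theta f(x; theta0) . (theta - theta0)
# NTK theory studies the regime where this approximation stays accurate
# throughout training.

def f(x, W, v):                       # scalar-output net, tanh hidden layer
    return v @ np.tanh(W @ x)

def grad_theta(x, W, v):              # analytic gradients w.r.t. (W, v)
    h = np.tanh(W @ x)
    dW = np.outer(v * (1 - h**2), x)  # chain rule through tanh
    return dW, h                      # (df/dW, df/dv)

rng = np.random.default_rng(0)
W0, v0 = rng.normal(size=(16, 4)), rng.normal(size=16)
x = rng.normal(size=4)

# Perturb parameters slightly and compare the true network with its
# linearization: for small perturbations the two nearly agree.
dW, dv = 1e-3 * rng.normal(size=W0.shape), 1e-3 * rng.normal(size=v0.shape)
gW, gv = grad_theta(x, W0, v0)
f_lin = f(x, W0, v0) + np.sum(gW * dW) + gv @ dv
print(f(x, W0 + dW, v0 + dv), f_lin)   # nearly identical
```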
arXiv Detail & Related papers (2021-06-12T13:05:11Z)
- Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
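As a generic illustration of sparse coding, the building block this entry starts from (though not its structure-learning algorithm), here is a basic ISTA solver:

```python
import numpy as np

def ista(x, D, lam=0.1, steps=100):
    """Basic ISTA sparse coding: find a sparse code z with D @ z ~ x.
    Under the efficient coding principle, such sparse codes use the
    available units economically. Generic sketch only."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(steps):
        g = D.T @ (D @ z - x)              # gradient of 0.5 * ||x - D z||^2
        z = z - g / L
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
    return z

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 64)); D /= np.linalg.norm(D, axis=0)
x = D[:, 3] + 0.5 * D[:, 40]               # signal built from two atoms
z = ista(x, D)
print(np.nonzero(np.abs(z) > 1e-3)[0])     # recovers a sparse support
```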
arXiv Detail & Related papers (2021-05-27T12:27:24Z)
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
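A core move in such progressive-growing schemes is adding neurons without changing the network's current function; the sketch below shows one standard way to do that (zero output weights), which simplifies whatever initialization firefly descent actually uses.

```python
import numpy as np

def grow_wider(W_in, W_out, n_new, eps=1e-2):
    """Grow a hidden layer by n_new neurons. New neurons get small
    random input weights and zero output weights, so the network's
    function is unchanged at the moment of growth; training then
    decides whether they become useful. Sketch of the growth step only."""
    new_in = eps * np.random.randn(n_new, W_in.shape[1])
    W_in = np.vstack([W_in, new_in])                        # (hidden+new, in)
    W_out = np.hstack([W_out, np.zeros((W_out.shape[0], n_new))])
    return W_in, W_out

W1, W2 = np.random.randn(8, 4), np.random.randn(2, 8)       # 4 -> 8 -> 2
W1, W2 = grow_wider(W1, W2, n_new=4)
print(W1.shape, W2.shape)                                    # (12, 4) (2, 12)
```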
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
- Deep neural networks for the evaluation and design of photonic devices [0.0]
This review discusses how deep neural networks can learn from training sets and operate as high-speed surrogate electromagnetic solvers.
It also covers fundamental data science concepts framed within the context of photonics.
arXiv Detail & Related papers (2020-06-30T19:52:54Z)
- Emergence of Network Motifs in Deep Neural Networks [0.35911228556176483]
We show that network science tools can be successfully applied to the study of artificial neural networks.
In particular, we study the emergence of network motifs in multi-layer perceptrons.
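As an example of the kind of motif analysis involved, the sketch below counts "bi-fan" motifs in a single thresholded MLP layer; the threshold and motif choice are illustrative, not the paper's exact pipeline.

```python
import numpy as np

def count_bifans(W, thresh=0.5):
    """Count 'bi-fan' motifs (two inputs jointly wired to two outputs)
    in one thresholded MLP layer, viewed as a bipartite directed graph.
    Motif counts like this are typically compared against randomized
    null models to detect emergent structure."""
    A = (np.abs(W) > thresh).astype(int)  # binarize: edge present or not
    shared = A.T @ A                      # shared[i, j] = outputs common to inputs i, j
    n = 0
    for i in range(shared.shape[0]):
        for j in range(i + 1, shared.shape[1]):
            c = shared[i, j]
            n += c * (c - 1) // 2         # each pair of shared targets = one bi-fan
    return n

W = np.random.randn(16, 8)                # one layer: 8 inputs -> 16 outputs
print(count_bifans(W))
```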
arXiv Detail & Related papers (2019-12-27T17:05:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.