Introducing topography in convolutional neural networks
- URL: http://arxiv.org/abs/2211.13152v1
- Date: Fri, 28 Oct 2022 13:20:31 GMT
- Title: Introducing topography in convolutional neural networks
- Authors: Maxime Poli, Emmanuel Dupoux, Rachid Riad
- Abstract summary: We propose a new topographic inductive bias in Convolutional Neural Networks (CNNs)
We benchmarked our new method on 4 datasets and 3 models in vision and audio tasks.
Our approach provides a new avenue to obtain models that are more memory efficient while maintaining better accuracy.
- Score: 11.595591429581546
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Parts of the brain that carry out sensory tasks are organized topographically:
nearby neurons are responsive to the same properties of input signals. Thus, in
this work, inspired by the neuroscience literature, we proposed a new
topographic inductive bias in Convolutional Neural Networks (CNNs). To achieve
this, we introduced a new topographic loss and an efficient implementation to
topographically organize each convolutional layer of any CNN. We benchmarked
our new method on 4 datasets and 3 models in vision and audio tasks and showed
equivalent performance across all benchmarks. In addition, we showcased the
generalizability of our topographic loss by applying it with different
topographic organizations in CNNs. Finally, we demonstrated that adding the
topographic inductive bias made CNNs more resistant to pruning. Our approach
provides a new avenue to obtain models that are more memory efficient while
maintaining better accuracy.
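The abstract describes, but does not spell out, the topographic loss used to organize each convolutional layer. Below is a minimal sketch of one plausible formulation, assuming channels are placed on a 2D grid and filters of nearby channels are penalized for being dissimilar. The function name, distance weighting, and grid layout are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def topographic_loss(weights, grid_shape):
    """Hypothetical topographic penalty (sketch, not the paper's formulation).

    weights: conv-layer weights of shape (out_channels, in_channels, kH, kW).
    grid_shape: 2D grid onto which the output channels are mapped.
    """
    n_channels = weights.shape[0]
    assert np.prod(grid_shape) == n_channels
    # Flatten each output channel's filter into a vector.
    flat = weights.reshape(n_channels, -1)
    # Assign each channel a coordinate on the topographic grid.
    coords = np.stack(
        np.unravel_index(np.arange(n_channels), grid_shape), axis=1
    ).astype(float)
    # Pairwise distances on the grid and between filters.
    grid_d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    filt_d = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    # Weight filter dissimilarity more strongly for nearby channels,
    # so that neighbors on the grid are pushed to learn similar filters.
    w = 1.0 / (1.0 + grid_d)
    np.fill_diagonal(w, 0.0)
    return float((w * filt_d).sum() / w.sum())
```

Added to the task loss, a term like this biases nearby channels toward similar responses while leaving the task objective to drive discriminability; the paper's efficient implementation presumably avoids the quadratic pairwise computation shown here.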
Related papers
- TDSNNs: Competitive Topographic Deep Spiking Neural Networks for Visual Cortex Modeling [1.732019193517103]
We propose a novel Spatio-Temporal Constraints (STC) loss function for topographic deep spiking neural networks (SNNs).
Our results show that STC effectively generates representative topographic features across simulated visual cortical areas.
We also reveal that topographic organization facilitates efficient and stable temporal information processing via the spike mechanism in TDSNNs.
arXiv Detail & Related papers (2025-08-06T09:53:39Z) - CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework to unify CNN and GNN together via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs with dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z) - End-to-end topographic networks as models of cortical map formation and
human visual behaviour: moving beyond convolutions [0.29687381456164]
We develop All-Topographic Neural Networks (All-TNNs) to model the organisation of the primate visual system.
We show that All-TNNs significantly better align with human behaviour than previous state-of-the-art convolutional models due to their topographic nature.
All-TNNs thereby mark an important step forward in understanding the spatial organisation of the visual brain and how it mediates visual behaviour.
arXiv Detail & Related papers (2023-08-18T10:03:51Z) - Graph Neural Networks Provably Benefit from Structural Information: A
Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z) - Graph Neural Operators for Classification of Spatial Transcriptomics
Data [1.408706290287121]
We propose a study incorporating various graph neural network approaches to validate the efficacy of applying neural operators towards prediction of brain regions in mouse brain tissue samples.
We were able to achieve an F1 score of nearly 72% for the graph neural operator approach which outperformed all baseline and other graph network approaches.
arXiv Detail & Related papers (2023-02-01T18:32:06Z) - Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling
Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
arXiv Detail & Related papers (2022-07-14T20:03:52Z) - Visualizing Deep Neural Networks with Topographic Activation Maps [1.1470070927586014]
We introduce and compare methods to obtain a topographic layout of neurons in a Deep Neural Network layer.
We demonstrate how to use topographic activation maps to identify errors or encoded biases and to visualize training processes.
arXiv Detail & Related papers (2022-04-07T15:56:44Z) - PCACE: A Statistical Approach to Ranking Neurons for CNN
Interpretability [1.0742675209112622]
We present a new statistical method for ranking the hidden neurons in any convolutional layer of a network.
We show a real-world application of our method to air pollution prediction with street-level images.
arXiv Detail & Related papers (2021-12-31T17:54:57Z) - Automated airway segmentation by learning graphical structure [0.76146285961466]
We put forward an advanced airway segmentation method that combines an existing convolutional neural network (CNN) with a graph neural network (GNN).
Compared with the CNN-only method, the combined CNN-GNN model performs better: the bronchi in chest CT scans are detected in most cases.
arXiv Detail & Related papers (2021-09-30T01:37:31Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Deeper models built by stacking graph convolutions often suffer a performance deterioration, which several recent studies attribute to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - Graph Structure of Neural Networks [104.33754950606298]
We show how the graph structure of neural networks affects their predictive performance.
A "sweet spot" of relational graphs leads to neural networks with significantly improved predictive performance.
Top-performing neural networks have graph structure surprisingly similar to those of real biological neural networks.
arXiv Detail & Related papers (2020-07-13T17:59:31Z) - Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum based scheme that smoothes the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z) - Understanding Graph Isomorphism Network for rs-fMRI Functional
Connectivity Analysis [49.05541693243502]
We develop a framework for analyzing fMRI data using the Graph Isomorphism Network (GIN).
One of the important contributions of this paper is the observation that the GIN is a dual representation of convolutional neural network (CNN) in the graph space.
We exploit CNN-based saliency map techniques for the GNN, which we tailor to the proposed GIN with one-hot encoding.
arXiv Detail & Related papers (2020-01-10T23:40:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.