Toward more frugal models for functional cerebral networks automatic
recognition with resting-state fMRI
- URL: http://arxiv.org/abs/2307.01953v1
- Date: Tue, 4 Jul 2023 23:06:57 GMT
- Authors: Lukman Ismaila, Pejman Rasti, Jean-Michel Lemée, David Rousseau
- Abstract summary: We are investigating different encoding techniques in the form of supervoxels, then graphs to reduce the complexity of the model while tracking the loss of performance.
This approach is illustrated on a recognition task of resting-state functional networks for patients with brain tumors.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider a machine learning setting in which models based on
classical convolutional neural networks have shown good performance. We
investigate different encoding techniques, first as supervoxels and then as
graphs, to reduce model complexity while tracking the loss of performance. This
approach is illustrated on a recognition task of resting-state functional
networks for patients with brain tumors. Graphs encoding supervoxels preserve
the activation characteristics of functional brain networks from the images and
reduce the number of model parameters by a factor of 26 while maintaining CNN
model performance.
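The encoding pipeline the abstract describes (activation volume → supervoxels → graph) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: a regular grid partition stands in for a learned supervoxel algorithm (such as SLIC), node features are the mean activation within each region, and edges come from face adjacency between regions.

```python
import numpy as np

def grid_supervoxels(volume, block=4):
    """Partition a 3-D activation volume into cubic 'supervoxels'.

    Returns an integer label volume of the same shape. A real pipeline
    would use an intensity-aware method (e.g. SLIC); a regular grid is
    the simplest stand-in.
    """
    z, y, x = volume.shape
    labels = np.zeros(volume.shape, dtype=int)
    n = 0
    for zi in range(0, z, block):
        for yi in range(0, y, block):
            for xi in range(0, x, block):
                labels[zi:zi + block, yi:yi + block, xi:xi + block] = n
                n += 1
    return labels

def supervoxel_graph(volume, labels):
    """Build (node_features, edges) from a labelled volume.

    Node feature = mean activation inside the supervoxel; an edge links
    two supervoxels that share a face (6-connectivity).
    """
    n_nodes = labels.max() + 1
    feats = np.array([volume[labels == i].mean() for i in range(n_nodes)])
    edges = set()
    for axis in range(3):  # compare neighbouring slices along each axis
        a = np.moveaxis(labels, axis, 0)
        pairs = np.stack([a[:-1].ravel(), a[1:].ravel()], axis=1)
        for u, v in pairs[pairs[:, 0] != pairs[:, 1]]:
            edges.add((min(u, v), max(u, v)))
    return feats, sorted(edges)

# Toy 8x8x8 "volume": 512 voxels collapse to an 8-node graph.
vol = np.random.default_rng(0).random((8, 8, 8))
feats, edges = supervoxel_graph(vol, grid_supervoxels(vol, block=4))
print(len(feats), len(edges))  # 8 nodes, 12 face-adjacency edges
```

The parameter saving comes from this collapse: a downstream graph model operates on a handful of node features instead of the full voxel grid, which is the mechanism behind the reported 26-fold reduction.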
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z) - Convolutional Neural Generative Coding: Scaling Predictive Coding to
Natural Images [79.07468367923619]
We develop convolutional neural generative coding (Conv-NGC).
We implement a flexible neurobiologically-motivated algorithm that progressively refines latent state maps.
We study the effectiveness of our brain-inspired neural system on the tasks of reconstruction and image denoising.
arXiv Detail & Related papers (2022-11-22T06:42:41Z) - Low-Light Image Restoration Based on Retina Model using Neural Networks [0.0]
The proposed neural network model reduces computational overhead compared with traditional signal-processing models and, from a subjective perspective, generates results comparable with complicated deep learning models.
This work shows that directly simulating the functionalities of retinal neurons with neural networks not only avoids manually searching for optimal parameters, but also paves the way to building artificial counterparts of certain neurobiological organizations.
arXiv Detail & Related papers (2022-10-04T08:14:49Z) - Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling
Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
arXiv Detail & Related papers (2022-07-14T20:03:52Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Training Deep Spiking Auto-encoders without Bursting or Dying Neurons
through Regularization [9.34612743192798]
Spiking neural networks are a promising approach towards next-generation models of the brain in computational neuroscience.
We apply end-to-end learning with membrane potential-based backpropagation to a spiking convolutional auto-encoder.
We show that applying regularization on membrane potential and spiking output successfully avoids both dead and bursting neurons.
arXiv Detail & Related papers (2021-09-22T21:27:40Z) - Modeling the Nonsmoothness of Modern Neural Networks [35.93486244163653]
We quantify the nonsmoothness using a feature named the sum of the magnitude of peaks (SMP).
We envision that the nonsmoothness feature can potentially be used as a forensic tool for regression-based applications of neural networks.
arXiv Detail & Related papers (2021-03-26T20:55:19Z) - The FaceChannel: A Fast & Furious Deep Neural Network for Facial
Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a lightweight neural network with far fewer parameters than common deep neural networks.
We demonstrate how our model achieves a comparable, if not better, performance to the current state-of-the-art in FER.
arXiv Detail & Related papers (2020-09-15T09:25:37Z) - Learning Shape Features and Abstractions in 3D Convolutional Neural
Networks for Detecting Alzheimer's Disease [0.0]
This thesis investigates the shape features and abstractions learned by 3D ConvNets for detecting Alzheimer's disease.
LRP relevance maps of the different models reveal which parts of the brain were most relevant to the classification decision.
Finally, transfer learning from a convolutional autoencoder was implemented to check whether increasing the number of training samples using patches of the input improves the learned features and model performance.
arXiv Detail & Related papers (2020-09-10T17:41:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.