DEMAND: Deep Matrix Approximately Nonlinear Decomposition to Identify
Meta, Canonical, and Sub-Spatial Pattern of functional Magnetic Resonance
Imaging in the Human Brain
- URL: http://arxiv.org/abs/2205.10264v1
- Date: Fri, 20 May 2022 15:55:01 GMT
- Title: DEMAND: Deep Matrix Approximately Nonlinear Decomposition to Identify
Meta, Canonical, and Sub-Spatial Pattern of functional Magnetic Resonance
Imaging in the Human Brain
- Authors: Wei Zhang, Yu Bao
- Abstract summary: We propose a novel deep nonlinear matrix factorization, named Deep Matrix Approximately Nonlinear Decomposition (DEMAND), that combines the advantages of shallow linear models, e.g., Sparse Dictionary Learning (SDL), with those of Deep Neural Networks (DNNs).
DEMAND can reveal the reproducible meta, canonical, and sub-spatial features of the human brain more efficiently than other peer methodologies.
- Score: 8.93274096260726
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Networks (DNNs) have become a crucial computational
approach to revealing the spatial patterns in the human brain; however, there
are three major shortcomings in using DNNs to detect spatial patterns in
functional Magnetic Resonance Imaging signals: 1) the fully connected
architecture increases the complexity of the network structure, making it
difficult to optimize and vulnerable to overfitting; 2) the requirement of
large training samples erases individual/minor patterns during feature
extraction; 3) the hyperparameters must be tuned manually, which is
time-consuming. Therefore, we propose a novel deep nonlinear matrix
factorization named Deep Matrix Approximately Nonlinear Decomposition (DEMAND)
in this work, which combines the advantages of shallow linear models, e.g.,
Sparse Dictionary Learning (SDL), with those of DNNs. First, the proposed
DEMAND employs a non-fully-connected, multilayer-stacked architecture that is
easier to optimize than canonical DNNs; furthermore, thanks to this efficient
architecture, training DEMAND avoids overfitting and enables the recognition
of individual/minor features from a small dataset, such as the data of a
single subject; finally, a novel rank estimation technique is introduced to
tune all hyperparameters of DEMAND automatically. The proposed DEMAND is then
validated against four peer methodologies on real functional Magnetic
Resonance Imaging data from the human brain. In short, the validation results
demonstrate that DEMAND can reveal reproducible meta, canonical, and
sub-spatial features of the human brain more efficiently than the peer
methodologies.
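The abstract describes the method only at a high level. As a rough illustration of the general idea behind deep nonlinear matrix factorization (a minimal sketch with hypothetical dimensions, a two-layer ReLU model, and plain gradient descent; not the authors' DEMAND algorithm, which uses a non-fully-connected architecture and automatic rank estimation):

```python
import numpy as np

# Toy deep nonlinear factorization: S ~ relu(W1 @ relu(W2 @ H)).
# Rows of S could stand for time points and columns for voxels; the sizes and
# the two-layer relu model are illustrative assumptions, not the DEMAND method.
rng = np.random.default_rng(0)
T, V, k1, k2 = 50, 80, 20, 10          # signal matrix is T x V, layer ranks k1 > k2
S = np.abs(rng.standard_normal((T, V)))

# nonnegative initialization keeps the relus active at the start
W1 = 0.1 * np.abs(rng.standard_normal((T, k1)))
W2 = 0.1 * np.abs(rng.standard_normal((k1, k2)))
H = 0.1 * np.abs(rng.standard_normal((k2, V)))

relu = lambda x: np.maximum(x, 0.0)
lr, losses = 0.5, []
for _ in range(500):
    # forward pass through the stacked factorization
    A = W2 @ H
    Z = relu(A)
    B = W1 @ Z
    S_hat = relu(B)
    losses.append(np.mean((S - S_hat) ** 2))
    # manual backpropagation of the mean-squared reconstruction error
    dB = 2.0 * (S_hat - S) * (B > 0) / S.size
    dW1 = dB @ Z.T
    dZ = W1.T @ dB
    dA = dZ * (A > 0)
    dW2 = dA @ H.T
    dH = W2.T @ dA
    W1 -= lr * dW1
    W2 -= lr * dW2
    H -= lr * dH
```

Loosely in the spirit of the paper's meta/canonical/sub-spatial hierarchy, each factor exposes structure at a different depth: H holds the deepest components, while W1 @ relu(W2 @ H) reconstructs the full signal.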
Related papers
- PINQI: An End-to-End Physics-Informed Approach to Learned Quantitative MRI Reconstruction [0.7199733380797579]
Quantitative Magnetic Resonance Imaging (qMRI) enables the reproducible measurement of biophysical parameters in tissue.
The challenge lies in solving a nonlinear, ill-posed inverse problem to obtain desired tissue parameter maps from acquired raw data.
We propose PINQI, a novel qMRI reconstruction method that integrates the knowledge about the signal, acquisition model, and learned regularization into a single end-to-end trainable neural network.
arXiv Detail & Related papers (2023-06-19T15:37:53Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs are prone to training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
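As background (my own illustration, not the paper's PINN experiments), the stability gain of an implicit gradient step is easiest to see on a one-dimensional quadratic, where the implicit update has a closed form:

```python
# Gradient descent on f(t) = 0.5 * a * t**2 with stiff curvature a.
# Explicit step:  t <- t - lr * f'(t)      = (1 - lr*a) * t      (diverges if lr*a > 2)
# Implicit step:  t <- t - lr * f'(t_new)  => t_new = t / (1 + lr*a)  (stable for any lr > 0)
a, lr, steps = 100.0, 0.1, 50   # hypothetical values; lr*a = 10 breaks the explicit scheme

t_explicit = t_implicit = 1.0
for _ in range(steps):
    t_explicit = (1.0 - lr * a) * t_explicit   # grows by a factor of |1 - lr*a| = 9 each step
    t_implicit = t_implicit / (1.0 + lr * a)   # shrinks by a factor of 11 each step
```

In ISGD proper, the implicit equation generally has no closed form and is solved approximately at each step; the point here is only the unconditional stability of the implicit update.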
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - SENDER: SEmi-Nonlinear Deep Efficient Reconstructor for Extraction
Canonical, Meta, and Sub Functional Connectivity in the Human Brain [8.93274096260726]
We propose a novel deep hybrid learning method named SEmi-Nonlinear Deep Efficient Reconstructor (SENDER) to overcome these shortcomings.
SENDER incorporates a non-fully-connected architecture for nonlinear learning, revealing meta functional connectivity through its shallow and deeper layers.
To further validate the effectiveness, we compared SENDER with four peer methodologies using real Magnetic Resonance Imaging data for the human brain.
arXiv Detail & Related papers (2022-09-12T21:36:44Z) - DELMAR: Deep Linear Matrix Approximately Reconstruction to Extract
Hierarchical Functional Connectivity in the Human Brain [8.93274096260726]
We propose a novel deep matrix factorization technique called Deep Linear Matrix Approximate Reconstruction (DELMAR) to bridge the gaps in current methods.
Validation experiments comparing DELMAR with three peer methods on real functional MRI signals of the human brain demonstrate that our proposed method can identify spatial features in fMRI signals faster and more accurately than the peer methods.
arXiv Detail & Related papers (2022-05-20T17:52:50Z) - A novel Deep Neural Network architecture for non-linear system
identification [78.69776924618505]
We present a novel Deep Neural Network (DNN) architecture for non-linear system identification.
Inspired by fading memory systems, we introduce inductive bias (on the architecture) and regularization (on the loss function).
This architecture allows for automatic complexity selection based solely on available data.
arXiv Detail & Related papers (2021-06-06T10:06:07Z) - Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
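As a rough sketch of the rank-R idea (hypothetical sizes; not the paper's implementation), a hidden layer acting on a matrix-valued input can parameterize each unit's weight matrix as a rank-R sum of outer products instead of storing a dense weight tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
I1, I2, R, K = 8, 6, 3, 4   # input modes, CP rank, hidden units (illustrative sizes)

# CP factors: K*R + R*(I1 + I2) parameters instead of K*I1*I2 for a dense tensor.
A = rng.standard_normal((R, I1))
B = rng.standard_normal((R, I2))
C = rng.standard_normal((K, R))

def rank_r_layer(X):
    # a_r^T X b_r for each rank-1 component; the matrix input is never vectorized
    z = np.einsum('ri,ij,rj->r', A, X, B)
    return np.tanh(C @ z)

X = rng.standard_normal((I1, I2))
h = rank_r_layer(X)
```

The same output would come from the equivalent dense weight tensor W[k] = sum_r C[k, r] * outer(A[r], B[r]); the CP constraint simply factorizes that tensor.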
arXiv Detail & Related papers (2021-04-11T16:37:32Z) - 1-Dimensional polynomial neural networks for audio signal related
problems [3.867363075280544]
We show that the proposed model can extract more relevant information from the data than a 1DCNN in less time and with less memory.
We show that this non-linearity enables the model to yield better results with less computational and spatial complexity than a regular 1DCNN on various classification and regression problems related to audio signals.
arXiv Detail & Related papers (2020-09-09T02:29:53Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized Structural Equation Models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.