SENDER: SEmi-Nonlinear Deep Efficient Reconstructor for Extraction
Canonical, Meta, and Sub Functional Connectivity in the Human Brain
- URL: http://arxiv.org/abs/2209.05627v1
- Date: Mon, 12 Sep 2022 21:36:44 GMT
- Title: SENDER: SEmi-Nonlinear Deep Efficient Reconstructor for Extraction
Canonical, Meta, and Sub Functional Connectivity in the Human Brain
- Authors: Wei Zhang, Yu Bao
- Abstract summary: We propose a novel deep hybrid learning method named SEmi-Nonlinear Deep Efficient Reconstruction (SENDER) to overcome the aforementioned shortcomings.
SENDER incorporates a non-fully connected architecture for its nonlinear learning methods to reveal meta-functional connectivity through shallow and deeper layers.
To further validate the effectiveness, we compared SENDER with four peer methodologies using real Magnetic Resonance Imaging data for the human brain.
- Score: 8.93274096260726
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep linear and nonlinear learning methods have become vital machine
learning tools for investigating hierarchical features, such as functional
connectivity, in the human brain from functional Magnetic Resonance Imaging
signals; however, they suffer from four major shortcomings: 1) for deep linear
learning methods, although the identified hierarchy of functional connectivity
is easily explainable, it is challenging to reveal deeper levels of hierarchical
functional connectivity; 2) for deep nonlinear learning methods, although a
non-fully connected architecture reduces the complexity of the network
structure, making it easier to optimize and less vulnerable to overfitting, the
resulting functional connectivity hierarchy is difficult to explain; 3)
importantly, it is challenging for both deep linear and deep nonlinear methods
to detect meta- and sub-functional connectivity, even in the shallow layers; 4)
like most conventional deep nonlinear methods, such as Deep Neural Networks,
the hyperparameters must be tuned manually, which is time-consuming. Thus, in
this work, we propose a novel deep hybrid learning method named SEmi-Nonlinear
Deep Efficient Reconstruction (SENDER) to overcome these shortcomings: 1)
SENDER utilizes a multiple-layer stacked structure for the linear learning
methods to detect the canonical functional connectivity; 2) SENDER implements a
non-fully connected architecture for the nonlinear learning methods to reveal
the meta-functional connectivity through shallow and deeper layers; 3) SENDER
incorporates the proposed background components to extract the sub-functional
connectivity; 4) SENDER adopts a novel rank reduction operator to tune the
hyperparameters automatically. To further validate the effectiveness, we
compared SENDER with four peer methodologies using real functional Magnetic
Resonance Imaging data from the human brain.
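The abstract only sketches the architecture in words, and no reference implementation is provided on this page. Below is a minimal, illustrative NumPy reading of that description, assuming the general pattern of a stacked semi-nonlinear matrix factorization: a linear factorization layer for "canonical" components, a ReLU factorization layer for "meta" components, the leftover residual as a stand-in for "background / sub" components, and a crude energy-based pruning step standing in for the automatic rank-reduction operator. All sizes, learning rates, and thresholds here are arbitrary assumptions, not the authors' method.

```python
# Illustrative semi-nonlinear deep reconstruction of an fMRI data matrix.
# This is NOT the authors' code: layer sizes, update rules, and the rank
# reduction heuristic are assumptions made for the sake of a runnable sketch.
import numpy as np

rng = np.random.default_rng(0)

def rank_reduce(W, H, energy_tol=1e-3):
    """Drop components whose share of total energy falls below energy_tol
    (a crude stand-in for the paper's automatic rank-reduction operator)."""
    energy = np.linalg.norm(W, axis=0) * np.linalg.norm(H, axis=1)
    keep = energy / energy.sum() > energy_tol
    return W[:, keep], H[keep, :]

def fit_layer(X, rank, nonlinear, n_iter=300, lr=1e-3):
    """Factor X ~ W @ H (linear) or X ~ relu(W @ H) (nonlinear) by gradient descent."""
    T, V = X.shape
    W = 0.1 * rng.standard_normal((T, rank))
    H = 0.1 * rng.standard_normal((rank, V))
    for _ in range(n_iter):
        Z = W @ H
        R = (np.maximum(Z, 0) if nonlinear else Z) - X   # residual
        G = R * (Z > 0) if nonlinear else R               # d(loss)/dZ
        W, H = W - lr * G @ H.T, H - lr * W.T @ G
    return rank_reduce(W, H)

# Synthetic "fMRI" matrix: time points x voxels.
X = rng.standard_normal((200, 400))

# Layer 1: linear factorization -> "canonical" spatial components.
W1, H1 = fit_layer(X, rank=20, nonlinear=False)
# Layer 2: nonlinear factorization of the layer-1 loadings -> "meta" components.
W2, H2 = fit_layer(W1, rank=10, nonlinear=True)
# Residual left unexplained by the linear layer -> crude "background / sub" term.
background = X - W1 @ H1

print(W1.shape, W2.shape, background.shape)
```

In the paper, the rank-reduction operator is what removes the need to tune the number of components by hand; the relative-energy threshold above is only a placeholder for that idea.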
Related papers
- Repetita Iuvant: Data Repetition Allows SGD to Learn High-Dimensional Multi-Index Functions [20.036783417617652]
We investigate the training dynamics of two-layer shallow neural networks trained with gradient-based algorithms.
We show that a simple modification of the idealized single-pass gradient descent training scenario drastically improves its computational efficiency.
Our results highlight the ability of networks to learn relevant structures from data alone without any pre-processing.
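This entry is a theory paper; to make the single-pass versus repeated-pass contrast it studies concrete, here is a toy NumPy sketch that trains a small two-layer network on a simple multi-index target, either taking one gradient step per fresh batch or reusing each batch for several steps. The target function, widths, and step sizes are arbitrary choices and do not reproduce the regime analysed in the paper.

```python
# Toy contrast between single-pass SGD and "data repetition" (several gradient
# steps on each fresh batch). Target, widths, and learning rate are arbitrary
# illustrative choices, not the setting analysed in the paper.
import numpy as np

rng = np.random.default_rng(1)
d, width = 20, 128
u1, u2 = rng.standard_normal(d), rng.standard_normal(d)
target = lambda X: (X @ u1) * (X @ u2) / d        # a simple multi-index function

def train(repeats, n_batches=2000, batch=64, lr=0.05):
    W = rng.standard_normal((width, d)) / np.sqrt(d)   # first layer
    a = rng.standard_normal(width) / np.sqrt(width)    # second layer
    for _ in range(n_batches):
        X = rng.standard_normal((batch, d))            # fresh batch
        y = target(X)
        for _ in range(repeats):                       # reuse the same batch
            h = np.tanh(X @ W.T)                       # (batch, width)
            err = h @ a - y
            grad_a = h.T @ err / batch
            grad_W = ((err[:, None] * a) * (1 - h**2)).T @ X / batch
            a -= lr * grad_a
            W -= lr * grad_W
    X = rng.standard_normal((4096, d))                 # fresh test data
    return np.mean((np.tanh(X @ W.T) @ a - target(X)) ** 2)

print("single pass loss :", train(repeats=1))
print("with repetition  :", train(repeats=4))
```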
arXiv Detail & Related papers (2024-05-24T11:34:31Z)
- Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks [49.808194368781095]
We show that three-layer neural networks have provably richer feature learning capabilities than two-layer networks.
This work makes progress towards understanding the provable benefit of three-layer neural networks over two-layer networks in the feature learning regime.
arXiv Detail & Related papers (2023-05-11T17:19:30Z)
- Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions about the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
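As background for this entry, the sketch below shows the standard setup it builds on: Q-learning where Q(s, a) is approximated by a linear function of features, updated by an epsilon-greedy TD(0) rule on a small random MDP. It is only the textbook baseline, not the exploration variant proposed in the paper, and the MDP, features, and step sizes are made up for illustration.

```python
# Minimal Q-learning with linear function approximation on a tiny random MDP.
# This is the textbook setup the entry refers to, not the exploration variant
# proposed in the paper; the MDP, features, and step sizes are invented here.
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions, d = 8, 3, 6
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transitions
R = rng.uniform(size=(n_states, n_actions))                       # rewards
phi = rng.standard_normal((n_states, n_actions, d)) / np.sqrt(d)  # features
gamma, alpha, eps = 0.9, 0.05, 0.1

w = np.zeros(d)                       # Q(s, a) is approximated by phi(s, a) . w
s = 0
for _ in range(20000):
    # epsilon-greedy action from the current linear Q estimate
    q = phi[s] @ w
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q))
    s_next = rng.choice(n_states, p=P[s, a])
    # TD(0) update of the weight vector
    td_target = R[s, a] + gamma * np.max(phi[s_next] @ w)
    w += alpha * (td_target - phi[s, a] @ w) * phi[s, a]
    s = s_next

print("learned Q(s=0, .):", phi[0] @ w)
```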
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- DEMAND: Deep Matrix Approximately Nonlinear Decomposition to Identify Meta, Canonical, and Sub-Spatial Pattern of functional Magnetic Resonance Imaging in the Human Brain [8.93274096260726]
We propose a novel deep nonlinear matrix factorization named Deep Matrix Approximately Nonlinear Decomposition (DEMAND) in this work to take advantage of both the shallow linear model, e.g., Sparse Dictionary Learning (SDL), and Deep Neural Networks (DNNs).
DEMAND can reveal the reproducible meta, canonical, and sub-spatial features of the human brain more efficiently than other peer methodologies.
arXiv Detail & Related papers (2022-05-20T15:55:01Z)
- The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks [19.899987851661354]
We study SGD-learnability with $O(d)$ sample complexity in a large ambient dimension.
Our main results characterize a hierarchical property, the "merged-staircase property", that is both necessary and nearly sufficient for learning in this setting.
A key tool is a new "dimension-free" dynamics approximation that applies to functions defined on a latent low-dimensional subspace.
arXiv Detail & Related papers (2022-02-17T13:43:06Z)
- Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z)
- Localized Persistent Homologies for more Effective Deep Learning [60.78456721890412]
We introduce an approach that relies on a new filtration function to account for location during network training.
We demonstrate experimentally on 2D images of roads and 3D image stacks of neuronal processes that networks trained in this manner are better at recovering the topology of the curvilinear structures they extract.
arXiv Detail & Related papers (2021-10-12T19:28:39Z)
- Physics-Based Deep Learning for Fiber-Optic Communication Systems [10.630021520220653]
We propose a new machine-learning approach for fiber-optic communication systems governed by the nonlinear Schrödinger equation (NLSE).
Our main observation is that the popular split-step method (SSM) for numerically solving the NLSE has essentially the same functional form as a deep multi-layer neural network.
We exploit this connection by parameterizing the SSM and viewing the linear steps as general linear functions, similar to the weight matrices in a neural network.
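To make the analogy concrete, here is a rough NumPy sketch of the split-step method in which the dispersion (linear) half of each step is written as an explicit matrix, i.e. the object the paper proposes to treat as a trainable weight matrix; the grid, the number of steps, and the physical coefficients are arbitrary illustrative choices, and no training is performed here.

```python
# Sketch of the split-step method for the NLSE, written so that each linear
# (dispersion) step is an explicit matrix -- the object the paper proposes to
# treat as a trainable "weight matrix". Grid size, step count, and the values
# of beta2/gam are arbitrary illustrative choices.
import numpy as np

n, dt = 256, 0.05                      # time grid
dz, n_steps = 0.01, 100                # propagation steps ("layers")
beta2, gam = -1.0, 1.0                 # dispersion / nonlinearity coefficients

t = (np.arange(n) - n // 2) * dt
omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)

# Linear (dispersion) part of one split step, as an explicit n x n matrix:
# D = IFFT * diag(exp(i*beta2/2*omega^2*dz)) * FFT.  In the learned version this
# dense matrix would be replaced by a free parameter.
F = np.fft.fft(np.eye(n), axis=0)
D = np.linalg.inv(F) @ np.diag(np.exp(1j * beta2 / 2 * omega**2 * dz)) @ F

def split_step(A):
    """One 'layer': nonlinear phase rotation followed by the linear step."""
    A = A * np.exp(1j * gam * np.abs(A) ** 2 * dz)   # nonlinear (Kerr) step
    return D @ A                                     # linear (dispersion) step

A = 1.0 / np.cosh(t)                  # sech input pulse
for _ in range(n_steps):
    A = split_step(A)

print("output pulse energy:", float(np.sum(np.abs(A) ** 2) * dt))
```

Replacing D by a free complex matrix (or one such matrix per fiber segment) and fitting it to measured input/output pairs is the kind of parameterization the summary above describes.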
arXiv Detail & Related papers (2020-10-27T12:55:23Z)
- LOCUS: A Novel Decomposition Method for Brain Network Connectivity Matrices using Low-rank Structure with Uniform Sparsity [8.105772140598056]
Network-oriented research has been increasingly popular in many scientific areas.
In neuroscience research, imaging-based network connectivity measures have become key to understanding brain organization.
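LOCUS itself is not reproduced here; the NumPy sketch below only illustrates the generic structure its title points to, decomposing a stack of subject-level connectivity matrices into a few components with low-rank subject scores and sparse edge maps, using a plain truncated SVD plus soft-thresholding rather than the paper's algorithm or its uniform-sparsity regularization.

```python
# Generic low-rank + sparse decomposition of subject connectivity matrices,
# only to illustrate the kind of structure LOCUS targets; the actual LOCUS
# algorithm and its uniform-sparsity regularisation are not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_nodes, n_comp = 30, 40, 5

# Synthetic symmetric connectivity matrices, one per subject.
conn = rng.standard_normal((n_subjects, n_nodes, n_nodes))
conn = (conn + conn.transpose(0, 2, 1)) / 2

# Vectorise the upper triangle of each matrix: (subjects, edges).
iu = np.triu_indices(n_nodes, k=1)
Y = conn[:, iu[0], iu[1]]

# Low-rank part: truncated SVD of the subject-by-edge matrix.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
scores = U[:, :n_comp] * s[:n_comp]          # subject loadings
sources = Vt[:n_comp]                        # edge-space connectivity "traits"

# Crude sparsity: soft-threshold the edge maps.
lam = 0.5 * np.median(np.abs(sources))
sources = np.sign(sources) * np.maximum(np.abs(sources) - lam, 0.0)

# Fold one sparse component back into a symmetric node-by-node matrix.
comp0 = np.zeros((n_nodes, n_nodes))
comp0[iu] = sources[0]
comp0 = comp0 + comp0.T

recon_err = np.linalg.norm(Y - scores @ sources) / np.linalg.norm(Y)
print("relative reconstruction error:", round(float(recon_err), 3))
```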
arXiv Detail & Related papers (2020-08-19T05:47:12Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and offers adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
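As a minimal illustration of the mechanism this last entry describes (attaching a learnable parameter to every edge of a complete graph over the network's computational nodes, so the wiring itself is optimized by ordinary gradient descent), here is a small PyTorch sketch; the node operations, sizes, and loss are placeholders and this is not the paper's architecture.

```python
# Minimal "learnable wiring" sketch: every node i aggregates the outputs of all
# earlier nodes j <= i through a learnable edge weight, so the connectivity
# pattern itself is optimised by gradient descent. This only illustrates the
# mechanism the entry describes; it is not the paper's architecture.
import torch
import torch.nn as nn

class LearnableWiringNet(nn.Module):
    def __init__(self, n_nodes=4, dim=16):
        super().__init__()
        self.nodes = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_nodes))
        # One learnable scalar per directed edge in the complete DAG over nodes.
        self.edge = nn.Parameter(torch.zeros(n_nodes, n_nodes))
        self.out = nn.Linear(dim, 1)

    def forward(self, x):
        feats = [x]
        for i, node in enumerate(self.nodes):
            # Weighted sum of all previous outputs (sigmoid keeps weights in (0, 1)).
            w = torch.sigmoid(self.edge[i, : i + 1])
            agg = sum(w[j] * feats[j] for j in range(i + 1))
            feats.append(torch.relu(node(agg)))
        return self.out(feats[-1])

net = LearnableWiringNet()
x = torch.randn(8, 16)
loss = (net(x) - torch.randn(8, 1)).pow(2).mean()
loss.backward()                      # gradients flow into the edge weights too
print(net.edge.grad.abs().sum().item() > 0)
```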