Learning Interpretable Models for Coupled Networks Under Domain
Constraints
- URL: http://arxiv.org/abs/2104.09069v1
- Date: Mon, 19 Apr 2021 06:23:31 GMT
- Title: Learning Interpretable Models for Coupled Networks Under Domain
Constraints
- Authors: Hongyuan You, Sikun Lin, Ambuj K. Singh
- Abstract summary: We investigate the idea of coupled networks by focusing on interactions between structural edges and functional edges of brain networks.
We propose a novel formulation to place hard network constraints on the noise term while estimating interactions.
We validate our method on multishell diffusion and task-evoked fMRI datasets from the Human Connectome Project.
- Score: 8.308385006727702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling the behavior of coupled networks is challenging due to their
intricate dynamics. For example, in neuroscience, it is of critical importance
to understand the relationship between the functional neural processes and
anatomical connectivity. Modern neuroimaging techniques allow us to
separately measure functional connectivity through fMRI and the
underlying white matter wiring through diffusion imaging. Previous studies have
shown that structural edges in brain networks improve the inference of
functional edges and vice versa. In this paper, we investigate the idea of
coupled networks through an optimization framework by focusing on interactions
between structural edges and functional edges of brain networks. We consider
both types of edges as observed instances of random variables that represent
different underlying network processes. The proposed framework does not depend
on Gaussian assumptions and achieves a more robust performance on general data
compared with existing approaches. To incorporate existing domain knowledge
into such studies, we propose a novel formulation to place hard network
constraints on the noise term while estimating interactions. This not only
leads to a cleaner way of applying network constraints but also provides a more
scalable solution when network connectivity is sparse. We validate our method
on multishell diffusion and task-evoked fMRI datasets from the Human Connectome
Project, yielding both important insights into the structural backbones that
support various types of task activity and general solutions for the study of
coupled networks.
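The idea of placing a hard constraint on the noise term can be illustrated with a small sketch. This is a toy construction, not the paper's actual formulation: all dimensions, the linear model `F = W S + E`, and the noise-support mask are hypothetical stand-ins for the structural/functional edge variables and network constraints described in the abstract. The projection step enforces the constraint exactly (the noise estimate is zero off the permitted support) rather than penalizing it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: p edge variables, n observations (hypothetical; the paper
# operates on structural and functional edges of brain networks).
p, n = 8, 200

# Ground-truth interaction matrix and a sparse noise-support mask standing in
# for "hard network constraints on the noise term".
W_true = rng.normal(size=(p, p)) * (rng.random((p, p)) < 0.3)
mask = rng.random(p) < 0.25           # noise is allowed only on these edges

S = rng.normal(size=(p, n))           # "structural" edge observations
E = (rng.normal(size=(p, n)) * 0.1) * mask[:, None]
F = W_true @ S + E                    # "functional" edge observations

# Alternating least squares with a hard projection of the noise estimate
# onto the allowed support; the projection enforces the constraint exactly.
W = np.zeros((p, p))
E_hat = np.zeros((p, n))
for _ in range(20):
    # W-step: ordinary least squares against the residual F - E_hat
    W = (F - E_hat) @ S.T @ np.linalg.pinv(S @ S.T)
    # E-step: residual projected onto the permitted noise support
    E_hat = (F - W @ S) * mask[:, None]

err = np.linalg.norm(W - W_true) / np.linalg.norm(W_true)
print(f"relative error in W: {err:.3f}")
```

Because the projection is a hard constraint, `E_hat` is identically zero outside the masked edges regardless of how noisy the data are, which is the sense in which this is "cleaner" than a soft penalty: the domain knowledge is satisfied exactly by construction.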
Related papers
- Identifying Sub-networks in Neural Networks via Functionally Similar Representations [41.028797971427124]
We take a step toward automating the understanding of the network by investigating the existence of distinct sub-networks.
Our approach offers meaningful insights into the behavior of neural networks with minimal human and computational cost.
arXiv Detail & Related papers (2024-10-21T20:19:00Z)
- Joint Graph Convolution for Analyzing Brain Structural and Functional Connectome [11.016035878136034]
We propose to couple the two networks of an individual by adding inter-network edges between corresponding brain regions.
The weights of inter-network edges are learnable, reflecting non-uniform structure-function coupling strength across the brain.
We apply our Joint-GCN to predict age and sex of 662 participants from the public dataset of the National Consortium on Alcohol and Neurodevelopment in Adolescence.
arXiv Detail & Related papers (2022-10-27T23:43:34Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of a neural network's feature space may jointly serve as discriminants of its performance.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
arXiv Detail & Related papers (2022-03-30T21:47:32Z)
- Towards Interaction Detection Using Topological Analysis on Neural Networks [55.74562391439507]
In neural networks, any interacting features must follow a strongly weighted connection to common hidden units.
We propose a new measure for quantifying interaction strength, based on the well-established theory of persistent homology.
A Persistence Interaction Detection (PID) algorithm is developed to efficiently detect interactions.
arXiv Detail & Related papers (2020-10-25T02:15:24Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks, one that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
- Training spiking neural networks using reinforcement learning [0.0]
We propose biologically-plausible alternatives to backpropagation to facilitate the training of spiking neural networks.
We focus on investigating the candidacy of reinforcement learning rules in solving the spatial and temporal credit assignment problems.
We compare and contrast the two approaches by applying them to traditional RL domains such as gridworld, cartpole and mountain car.
arXiv Detail & Related papers (2020-05-12T17:40:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.