CommsVAE: Learning the brain's macroscale communication dynamics using
coupled sequential VAEs
- URL: http://arxiv.org/abs/2210.03667v1
- Date: Fri, 7 Oct 2022 16:20:19 GMT
- Title: CommsVAE: Learning the brain's macroscale communication dynamics using
coupled sequential VAEs
- Authors: Eloy Geenjaar, Noah Lewis, Amrit Kashyap, Robyn Miller, Vince Calhoun
- Abstract summary: We propose a non-linear generative approach to communication from functional data.
We show that our approach models communication that is more specific to each task.
The specificity of our method means it can have an impact on the understanding of psychiatric disorders.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Communication within or between complex systems is commonplace in the natural
sciences and fields such as graph neural networks. The brain is a perfect
example of such a complex system, where communication between brain regions is
constantly being orchestrated. To analyze communication, the brain is often
split up into anatomical regions that each perform certain computations. These
regions must interact and communicate with each other to perform tasks and
support higher-level cognition. On a macroscale, these regions communicate
through signal propagation along the cortex and along white matter tracts over
longer distances. When and what types of signals are communicated over time is
an unsolved problem and is often studied using either functional or structural
data. In this paper, we propose a non-linear generative approach to
communication from functional data. We address three issues with common
connectivity approaches by explicitly modeling the directionality of
communication, finding communication at each timestep, and encouraging
sparsity. To evaluate our model, we simulate temporal data that has sparse
communication between nodes embedded in it and show that our model can uncover
the expected communication dynamics. Subsequently, we apply our model to
temporal neural data from multiple tasks and show that our approach models
communication that is more specific to each task. This specificity suggests the
method can advance the understanding of psychiatric disorders, which are
believed to involve communication between brain regions that differs from that
of controls. In sum, we propose a general model for dynamic
communication learning on graphs, and show its applicability to a subfield of
the natural sciences, with potential widespread scientific impact.
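The evaluation described above embeds sparse, directed communication between nodes in simulated temporal data. A minimal sketch of such a simulation is below; the node count, coupling strength, noise scale, and active window are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, n_steps = 5, 200
x = np.zeros((n_steps, n_nodes))
x[0] = rng.normal(size=n_nodes)

# Sparse, directed coupling: node 0 drives node 3, but only during a
# limited time window, so the communication is dynamic, not static.
W = np.zeros((n_nodes, n_nodes))
W[3, 0] = 0.9  # directed edge 0 -> 3 (rows receive, columns send)
active = np.zeros(n_steps, dtype=bool)
active[50:100] = True

for t in range(1, n_steps):
    drive = W @ x[t - 1] if active[t] else np.zeros(n_nodes)
    # Each node has leaky autonomous dynamics plus observation noise.
    x[t] = 0.8 * x[t - 1] + drive + 0.1 * rng.normal(size=n_nodes)

# During the active window, node 3 should track node 0's past activity,
# which is the lagged, directed signature a model would need to recover.
lagged = np.corrcoef(x[49:99, 0], x[50:100, 3])[0, 1]
print(x.shape, round(lagged, 2))
```

A model of the kind the abstract describes would be asked to recover both the direction of the edge (0 to 3, not 3 to 0) and the window in which it is active.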
Related papers
- SynapsNet: Enhancing Neuronal Population Dynamics Modeling via Learning Functional Connectivity [0.0]
We introduce SynapsNet, a novel deep-learning framework that effectively models population dynamics and functional interactions between neurons.
A shared decoder uses the input current, previous neuronal activity, neuron embedding, and behavioral data to predict the population activity in the next time step.
Our experiments, conducted on mouse cortical activity from publicly available datasets, demonstrate that SynapsNet consistently outperforms existing models in forecasting population activity.
arXiv Detail & Related papers (2024-11-12T22:25:15Z) - Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z) - Conversation Understanding using Relational Temporal Graph Neural
Networks with Auxiliary Cross-Modality Interaction [2.1261712640167856]
Emotion recognition is a crucial task for human conversation understanding.
We propose the Relational Temporal Graph Neural Network with Auxiliary Cross-Modality Interaction (CORECT).
CORECT effectively captures conversation-level cross-modality interactions and utterance-level temporal dependencies.
arXiv Detail & Related papers (2023-11-08T07:46:25Z) - Astrocytes as a mechanism for meta-plasticity and contextually-guided
network function [2.66269503676104]
Astrocytes are a ubiquitous and enigmatic type of non-neuronal cell.
Astrocytes may play a more direct and active role in brain function and neural computation.
arXiv Detail & Related papers (2023-11-06T20:31:01Z) - MBrain: A Multi-channel Self-Supervised Learning Framework for Brain
Signals [7.682832730967219]
We study the self-supervised learning framework for brain signals that can be applied to pre-train either SEEG or EEG data.
We propose MBrain to learn implicit spatial and temporal correlations between different channels.
Our model outperforms several state-of-the-art time series SSL and unsupervised models, and has the ability to be deployed to clinical practice.
arXiv Detail & Related papers (2023-06-15T09:14:26Z) - Language Knowledge-Assisted Representation Learning for Skeleton-Based
Action Recognition [71.35205097460124]
How humans understand and recognize the actions of others is a complex neuroscientific problem.
LA-GCN proposes a graph convolution network assisted by knowledge from large-scale language models (LLMs).
arXiv Detail & Related papers (2023-05-21T08:29:16Z) - DBGDGM: Dynamic Brain Graph Deep Generative Model [63.23390833353625]
Graphs are a natural representation of brain activity derived from functional magnetic resonance imaging (fMRI) data.
It is well known that clusters of anatomical brain regions, known as functional connectivity networks (FCNs), encode temporal relationships which can serve as useful biomarkers for understanding brain function and dysfunction.
Previous works, however, ignore the temporal dynamics of the brain and focus on static graphs.
We propose a dynamic brain graph deep generative model (DBGDGM) which simultaneously clusters brain regions into temporally evolving communities and learns dynamic unsupervised node embeddings.
arXiv Detail & Related papers (2023-01-26T20:45:30Z) - Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A
Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue, computational method, interaction environment, and sensing approach are speaking activity, support vector machines, meetings composed of 3-4 persons, and microphones and cameras, respectively.
arXiv Detail & Related papers (2022-07-20T13:37:57Z) - Towards Human-Agent Communication via the Information Bottleneck
Principle [19.121541894577298]
We study how trading off these three factors -- utility, informativeness, and complexity -- shapes emergent communication.
We propose Vector-Quantized Variational Information Bottleneck (VQ-VIB), a method for training neural agents to compress inputs into discrete signals embedded in a continuous space.
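The discrete-signal compression in VQ-VIB can be illustrated with a plain nearest-neighbor vector quantizer; this is a simplified stand-in (the actual method trains the encoder and codebook jointly under an information-bottleneck objective), and the codebook size and dimensionality here are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# A small codebook of K discrete "signals" embedded in continuous space.
K, d = 4, 2
codebook = rng.normal(size=(K, d))

def quantize(z):
    """Map each continuous vector to its nearest codebook entry."""
    # Pairwise squared distances between inputs and codebook vectors.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)
    return idx, codebook[idx]

z = rng.normal(size=(8, d))  # stand-in for continuous encoder outputs
idx, z_q = quantize(z)
print(idx.shape, z_q.shape)
```

Each agent thus transmits a discrete index, while downstream computation operates on the corresponding continuous embedding.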
arXiv Detail & Related papers (2022-06-30T20:10:20Z) - Model-based analysis of brain activity reveals the hierarchy of language
in 305 subjects [82.81964713263483]
A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli.
Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli.
arXiv Detail & Related papers (2021-10-12T15:30:21Z) - Can the brain use waves to solve planning problems? [62.997667081978825]
We present a neural network model which can solve such tasks.
The model is compatible with a broad range of empirical findings about the mammalian neocortex and hippocampus.
arXiv Detail & Related papers (2021-10-11T11:07:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.