Directed Semi-Simplicial Learning with Applications to Brain Activity Decoding
- URL: http://arxiv.org/abs/2505.17939v2
- Date: Tue, 27 May 2025 11:14:31 GMT
- Title: Directed Semi-Simplicial Learning with Applications to Brain Activity Decoding
- Authors: Manuel Lecha, Andrea Cavallo, Francesca Dominici, Ran Levi, Alessio Del Bue, Elvin Isufi, Pietro Morerio, Claudio Battiloro
- Abstract summary: Topological Deep Learning (TDL) addresses the limitations of purely pairwise graph learning by leveraging combinatorial topological spaces. We introduce Semi-Simplicial Neural Networks (SSNs), a principled class of TDL models that operate on semi-simplicial sets. SSNs achieve state-of-the-art performance on brain dynamics classification tasks, outperforming the second-best model by up to 27% and message passing GNNs by up to 50% in accuracy.
- Score: 25.9555092968451
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graph Neural Networks (GNNs) excel at learning from pairwise interactions but often overlook multi-way and hierarchical relationships. Topological Deep Learning (TDL) addresses this limitation by leveraging combinatorial topological spaces. However, existing TDL models are restricted to undirected settings and fail to capture the higher-order directed patterns prevalent in many complex systems, e.g., brain networks, where such interactions are both abundant and functionally significant. To fill this gap, we introduce Semi-Simplicial Neural Networks (SSNs), a principled class of TDL models that operate on semi-simplicial sets -- combinatorial structures that encode directed higher-order motifs and their directional relationships. To enhance scalability, we propose Routing-SSNs, which dynamically select the most informative relations in a learnable manner. We prove that SSNs are strictly more expressive than standard graph and TDL models. We then introduce a new principled framework for brain dynamics representation learning, grounded in the ability of SSNs to provably recover topological descriptors shown to successfully characterize brain activity. Empirically, SSNs achieve state-of-the-art performance on brain dynamics classification tasks, outperforming the second-best model by up to 27%, and message passing GNNs by up to 50% in accuracy. Our results highlight the potential of principled topological models for learning from structured brain data, establishing a unique real-world case study for TDL. We also test SSNs on standard node classification and edge regression tasks, showing competitive performance. We will make the code and data publicly available.
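The code was not yet public at the time of this abstract, so the following is only a rough, illustrative PyTorch sketch of what directed message passing over a semi-simplicial set might look like: each face map d_i induces its own directed incidence relation, and a learnable softmax gate over those relations loosely plays the role of the routing mechanism described for Routing-SSNs. The class name, the sparse-incidence encoding, and the exact update rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch; not the paper's released code.
import torch
import torch.nn as nn


class SemiSimplicialLayer(nn.Module):
    """Updates features on k-simplices from their ordered (k-1)-faces.

    Each face map d_i of a semi-simplicial set induces a distinct directed
    incidence relation, encoded here as a sparse matrix B_i of shape
    (N_k, N_{k-1}). Keeping the i-th face relation separate preserves the
    vertex ordering (direction) of each simplex, unlike undirected
    simplicial message passing, which pools all faces symmetrically.
    """

    def __init__(self, in_dim: int, out_dim: int, num_face_maps: int):
        super().__init__()
        # One learnable transform per face relation keeps directions distinct.
        self.face_mlps = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(num_face_maps)
        )
        self.self_mlp = nn.Linear(in_dim, out_dim)
        # Routing gate: a softmax over per-relation logits softly selects
        # the most informative face relations (loosely, the Routing-SSN idea).
        self.route_logits = nn.Parameter(torch.zeros(num_face_maps))

    def forward(self, x_k, x_km1, face_incidences):
        # x_k:   (N_k, in_dim)      features on k-simplices
        # x_km1: (N_{k-1}, in_dim)  features on (k-1)-simplices
        # face_incidences: one sparse (N_k, N_{k-1}) matrix per face map d_i
        weights = torch.softmax(self.route_logits, dim=0)
        out = self.self_mlp(x_k)
        for w, B_i, mlp in zip(weights, face_incidences, self.face_mlps):
            out = out + w * torch.sparse.mm(B_i, mlp(x_km1))
        return torch.relu(out)
```

For k = 1, for instance, the two face maps d_0 and d_1 pick out the target and source vertex of each directed edge, so the gate can learn to weight source-side and target-side information differently.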
Related papers
- A Neuro-Symbolic Approach for Probabilistic Reasoning on Graph Data [12.45190689569629]
Graph neural networks (GNNs) often lack the ability to incorporate symbolic domain knowledge and perform general reasoning.
This paper presents a neuro-symbolic framework that seamlessly integrates GNNs into Relational Bayesian Networks (RBNs), combining the learning strength of GNNs with the flexible reasoning capabilities of RBNs.
This work introduces a powerful and coherent neuro-symbolic approach to graph data, enabling learning and reasoning in ways that support novel applications and improved performance across diverse tasks.
arXiv Detail & Related papers (2025-07-29T14:43:25Z) - Rel-HNN: Split Parallel Hypergraph Neural Network for Learning on Relational Databases [3.6423651166048874]
Flattening the database poses challenges for deep learning models.
We propose a novel hypergraph-based framework that we call rel-HNN.
We show that rel-HNN significantly outperforms existing methods in both classification and regression tasks.
arXiv Detail & Related papers (2025-07-16T18:20:45Z) - TopoTune : A Framework for Generalized Combinatorial Complex Neural Networks [5.966445718346143]
Generalized Combinatorial Complex Neural Networks (GCCNs) can be used to transform any (graph) neural network into its TDL counterpart.
TopoTune is a lightweight software package for defining, building, and training GCCNs with unprecedented flexibility and ease.
arXiv Detail & Related papers (2024-10-09T04:07:20Z) - Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z) - Contextual Classification Using Self-Supervised Auxiliary Models for Deep Neural Networks [6.585049648605185]
We introduce the notion of Self-Supervised Autogenous Learning (SSAL) models.
An SSAL objective is realized through one or more additional targets that are derived from the original supervised classification task (a minimal sketch of this idea follows the list below).
We show that SSAL models consistently outperform the state-of-the-art while also providing structured predictions that are more interpretable.
arXiv Detail & Related papers (2021-01-07T18:41:16Z) - A journey in ESN and LSTM visualisations on a language task [77.34726150561087]
We trained ESNs and LSTMs on a Cross-Situational Learning (CSL) task.
The results are of three kinds: performance comparison, internal dynamics analyses and visualization of latent space.
arXiv Detail & Related papers (2020-12-03T08:32:01Z) - Evolutionary Architecture Search for Graph Neural Networks [23.691915813153496]
We propose a novel AutoML framework through the evolution of individual models in a large Graph Neural Network (GNN) architecture space.
To the best of our knowledge, this is the first work to introduce and evaluate evolutionary architecture search for GNN models.
arXiv Detail & Related papers (2020-09-21T22:11:53Z) - Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z) - Exploiting Heterogeneity in Operational Neural Networks by Synaptic Plasticity [87.32169414230822]
The recently proposed network model, Operational Neural Networks (ONNs), can generalize conventional Convolutional Neural Networks (CNNs).
This study focuses on searching for the best-possible operator set(s) for the hidden neurons of the network, based on the Synaptic Plasticity paradigm that constitutes the essential learning theory in biological neurons.
Experimental results over highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, can achieve superior learning performance compared to GIS-based ONNs.
arXiv Detail & Related papers (2020-08-21T19:03:23Z) - Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature (a minimal sketch of this architecture also follows the list below).
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
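As a concrete illustration of the SSAL idea sketched in the entry above, here is a hedged PyTorch example in which an auxiliary head predicts a target derived from the original labels (here, a coarse label grouping). The grouping, the loss weight, and all names are assumptions for illustration, not the paper's exact construction.

```python
# Hypothetical SSAL-style auxiliary objective; illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AuxiliaryHeadModel(nn.Module):
    """A shared backbone with a main classification head plus an
    auxiliary head trained on a target derived from the main labels."""

    def __init__(self, backbone: nn.Module, feat_dim: int,
                 num_classes: int, num_coarse: int):
        super().__init__()
        self.backbone = backbone
        self.main_head = nn.Linear(feat_dim, num_classes)
        self.aux_head = nn.Linear(feat_dim, num_coarse)

    def forward(self, x):
        h = self.backbone(x)
        return self.main_head(h), self.aux_head(h)


def ssal_style_loss(main_logits, aux_logits, y, coarse_of, aux_weight=0.3):
    """Main-task loss plus a weighted auxiliary loss, where the auxiliary
    target coarse_of[y] is derived from the original labels y."""
    main = F.cross_entropy(main_logits, y)
    aux = F.cross_entropy(aux_logits, coarse_of[y])
    return main + aux_weight * aux
```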
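To make the additive structure of NAMs concrete, here is a minimal PyTorch sketch that sums one small subnetwork per input feature. Plain MLP feature networks are assumed for brevity (the paper also proposes specialized hidden units); names and sizes are illustrative.

```python
# Minimal NAM-style sketch; illustrative, not the authors' code.
import torch
import torch.nn as nn


class NeuralAdditiveModel(nn.Module):
    """Sums one subnetwork per feature, so each feature's learned
    contribution f_j(x_j) can be plotted and inspected directly."""

    def __init__(self, num_features: int, hidden: int = 32):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x: (batch, num_features); each column feeds its own subnetwork.
        contributions = [
            net(x[:, j:j + 1]) for j, net in enumerate(self.feature_nets)
        ]
        return torch.stack(contributions, dim=0).sum(dim=0) + self.bias
```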