Attending to Topological Spaces: The Cellular Transformer
- URL: http://arxiv.org/abs/2405.14094v2
- Date: Sun, 26 May 2024 23:29:11 GMT
- Title: Attending to Topological Spaces: The Cellular Transformer
- Authors: Rubén Ballester, Pablo Hernández-García, Mathilde Papillon, Claudio Battiloro, Nina Miolane, Tolga Birdal, Carles Casacuberta, Sergio Escalera, Mustafa Hajij
- Abstract summary: Topological Deep Learning seeks to enhance the predictive performance of neural network models by harnessing topological structures in input data.
We introduce the Cellular Transformer (CT), a novel architecture that generalizes graph-based transformers to cell complexes.
CT achieves state-of-the-art performance, and it does so without the need for more complex enhancements such as virtual nodes, in-domain structural encodings, or graph rewiring.
- Score: 37.84207797241944
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Topological Deep Learning seeks to enhance the predictive performance of neural network models by harnessing topological structures in input data. Topological neural networks operate on spaces such as cell complexes and hypergraphs, which can be seen as generalizations of graphs. In this work, we introduce the Cellular Transformer (CT), a novel architecture that generalizes graph-based transformers to cell complexes. First, we propose a new formulation of the usual self- and cross-attention mechanisms, tailored to leverage incidence relations in cell complexes, e.g., edge-face and node-edge relations. Additionally, we propose a set of topological positional encodings specifically designed for cell complexes. By transforming three graph datasets into cell complex datasets, our experiments reveal that CT not only achieves state-of-the-art performance, but it does so without the need for more complex enhancements such as virtual nodes, in-domain structural encodings, or graph rewiring.
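The abstract describes attention restricted by incidence relations between cells of different rank (e.g., node-edge and edge-face). As a rough illustration of that general idea, and not the paper's actual formulation, here is a minimal single-head PyTorch sketch of cross-attention masked by an incidence matrix; the class name `IncidenceMaskedAttention` and all shapes and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IncidenceMaskedAttention(nn.Module):
    """Single-head cross-attention between cells of different rank,
    masked by an incidence matrix (illustrative sketch, not the paper's code)."""

    def __init__(self, d_model: int, d_head: int = 32):
        super().__init__()
        self.wq = nn.Linear(d_model, d_head, bias=False)
        self.wk = nn.Linear(d_model, d_head, bias=False)
        self.wv = nn.Linear(d_model, d_head, bias=False)
        self.scale = d_head ** -0.5

    def forward(self, x_dst, x_src, incidence):
        # x_dst: (n_dst, d_model) features of the receiving cells (e.g. faces)
        # x_src: (n_src, d_model) features of the sending cells (e.g. edges)
        # incidence: (n_dst, n_src) 0/1 mask, nonzero where the two cells are incident
        q, k, v = self.wq(x_dst), self.wk(x_src), self.wv(x_src)
        scores = (q @ k.T) * self.scale                      # (n_dst, n_src) logits
        scores = scores.masked_fill(incidence == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)                 # each face attends only to incident edges
        return attn @ v                                      # (n_dst, d_head) updated features

# Toy usage: 4 edges, 2 faces, each face bounded by three of the edges.
edges = torch.randn(4, 16)
faces = torch.randn(2, 16)
B2 = torch.tensor([[1., 1., 1., 0.],    # hypothetical face-edge incidence matrix
                   [0., 1., 1., 1.]])
out = IncidenceMaskedAttention(d_model=16)(faces, edges, B2)   # shape (2, 32)
```

In the same spirit, self-attention within a single rank can be masked by (co)adjacency relations, and the topological positional encodings mentioned in the abstract would be added to the cell features before attention; the specific encodings proposed in the paper are not reproduced here.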
Related papers
- Higher-Order Message Passing for Glycan Representation Learning [0.0]
Graph Neural Networks (GNNs) are deep learning models designed to process and analyze graph-structured data.
This work presents a new model architecture based on complexes and higher-order message passing to extract features from glycan structures into a latent space representation.
We envision that these improvements will spur further advances in computational glycosciences and reveal the roles of glycans in biology.
arXiv Detail & Related papers (2024-09-20T12:55:43Z)
- Cell Graph Transformer for Nuclei Classification [78.47566396839628]
We develop a cell graph transformer (CGT) that treats nodes and edges as input tokens to enable learnable adjacency and information exchange among all nodes.
Poorly initialized features can lead to noisy self-attention scores and inferior convergence.
We propose a novel topology-aware pretraining method that leverages a graph convolutional network (GCN) to learn a feature extractor.
arXiv Detail & Related papers (2024-02-20T12:01:30Z)
- Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning [77.1421343649344]
We propose a generalization of Transformers towards operating entirely on the product of constant curvature spaces.
We also provide a kernelized approach to non-Euclidean attention, which enables our model to run in time and memory cost linear in the number of nodes and edges.
arXiv Detail & Related papers (2023-09-08T02:44:37Z)
- Generalized Simplicial Attention Neural Networks [22.171364354867723]
We introduce Generalized Simplicial Attention Neural Networks (GSANs).
GSANs process data living on simplicial complexes using masked self-attentional layers.
These schemes learn how to combine data associated with neighbor simplices of consecutive order in a task-oriented fashion.
arXiv Detail & Related papers (2023-09-05T11:29:25Z)
- From Latent Graph to Latent Topology Inference: Differentiable Cell Complex Module [21.383018558790674]
The Differentiable Cell Complex Module (DCM) is a novel learnable function that computes cell probabilities in the complex to improve the downstream task.
We show how to integrate DCM with cell complex message-passing network layers and train it in an end-to-end fashion.
Our model is tested on several homophilic and heterophilic graph datasets and is shown to outperform other state-of-the-art techniques.
arXiv Detail & Related papers (2023-05-25T15:33:19Z)
- Cell Attention Networks [25.72671436731666]
We introduce Cell Attention Networks (CANs), a neural architecture operating on data defined over the vertices of a graph.
CANs exploit the lower and upper neighborhoods, as encoded in the cell complex, to design two independent masked self-attention mechanisms (see the sketch of these neighborhoods after this list).
The experimental results show that CAN is a low-complexity strategy that compares favorably with state-of-the-art results on graph-based learning tasks.
arXiv Detail & Related papers (2022-09-16T21:57:39Z)
- Simple and Efficient Heterogeneous Graph Neural Network [55.56564522532328]
Heterogeneous graph neural networks (HGNNs) have a powerful capability to embed the rich structural and semantic information of a heterogeneous graph into node representations.
Existing HGNNs inherit many mechanisms from graph neural networks (GNNs) over homogeneous graphs, especially the attention mechanism and the multi-layer structure.
This paper conducts an in-depth and detailed study of these mechanisms and proposes the Simple and Efficient Heterogeneous Graph Neural Network (SeHGNN).
arXiv Detail & Related papers (2022-07-06T10:01:46Z)
- Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z)
- Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
arXiv Detail & Related papers (2021-10-28T14:59:41Z)
- Cell Complex Neural Networks [0.8121462458089141]
We propose a general, unifying construction for performing neural network-type computations on cell complexes.
We show how our cell complex autoencoder construction yields, as a special case, cell2vec, a generalization of node2vec.
arXiv Detail & Related papers (2020-10-02T01:38:12Z)
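Several of the related papers above (e.g., GSANs and Cell Attention Networks) mask attention by lower and upper neighborhoods of cells. The following is a small, self-contained sketch of the standard construction of those neighborhoods from incidence (boundary) matrices; the function name and matrix conventions are assumptions for illustration, not code from any of the listed papers.

```python
import numpy as np

def lower_upper_adjacency(Bk: np.ndarray, Bk1: np.ndarray):
    """Lower/upper adjacency among k-cells from incidence (boundary) matrices.

    Bk  has shape (n_{k-1}, n_k):  rows index (k-1)-cells, columns index k-cells.
    Bk1 has shape (n_k, n_{k+1}):  rows index k-cells, columns index (k+1)-cells.

    Two k-cells are lower-adjacent if they share a (k-1)-face and
    upper-adjacent if they both bound a common (k+1)-cell; these boolean
    matrices can serve as masks for two separate attention mechanisms."""
    A_low = (np.abs(Bk).T @ np.abs(Bk)) > 0    # shared (k-1)-faces
    A_up = (np.abs(Bk1) @ np.abs(Bk1).T) > 0   # shared (k+1)-cofaces
    np.fill_diagonal(A_low, False)             # a cell is not its own neighbor
    np.fill_diagonal(A_up, False)
    return A_low, A_up

# Toy example: one triangle with nodes a, b, c; edges ab, bc, ca; one 2-cell.
B1 = np.array([[-1,  0,  1],                   # signed node-edge incidence
               [ 1, -1,  0],
               [ 0,  1, -1]])
B2 = np.array([[1], [1], [1]])                 # edge-face incidence for the single 2-cell
A_low, A_up = lower_upper_adjacency(B1, B2)
# Every pair of edges shares a node (lower-adjacent) and bounds the triangle (upper-adjacent).
```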