Simplicial Attention Networks
- URL: http://arxiv.org/abs/2204.09455v1
- Date: Wed, 20 Apr 2022 13:41:50 GMT
- Title: Simplicial Attention Networks
- Authors: Christopher Wei Jin Goh, Cristian Bodnar, Pietro Liò
- Abstract summary: Simplicial Neural Networks (SNNs) naturally model higher-order interactions by performing message passing on simplicial complexes.
We propose Simplicial Attention Networks (SAT), a new type of simplicial network that dynamically weighs the interactions between neighbouring simplices.
We demonstrate that SAT outperforms existing convolutional SNNs and GNNs in two image and trajectory classification tasks.
- Score: 4.401427499962144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph representation learning methods have mostly been limited to the
modelling of node-wise interactions. Recently, there has been an increased
interest in understanding how higher-order structures can be utilised to
further enhance the learning abilities of graph neural networks (GNNs) in
combinatorial spaces. Simplicial Neural Networks (SNNs) naturally model these
interactions by performing message passing on simplicial complexes,
higher-dimensional generalisations of graphs. Nonetheless, the computations
performed by most existing SNNs are strictly tied to the combinatorial
structure of the complex. Leveraging the success of attention mechanisms in
structured domains, we propose Simplicial Attention Networks (SAT), a new type
of simplicial network that dynamically weighs the interactions between
neighbouring simplices and can readily adapt to novel structures.
Additionally, we propose a signed attention mechanism that makes SAT
orientation equivariant, a desirable property for models operating on (co)chain
complexes. We demonstrate that SAT outperforms existing convolutional SNNs and
GNNs in two image and trajectory classification tasks.
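To make the abstract's two ideas concrete (dynamic weighting of interactions between neighbouring simplices, and signed attention for orientation equivariance), here is a minimal NumPy sketch. It is a sketch under stated assumptions, not the paper's implementation: the GAT-style scoring function, the use of absolute values to keep attention coefficients orientation-invariant, and all names below (sat_layer, A_signed, the toy triangle) are invented for exposition.

import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sat_layer(X, A_signed, W, a):
    # X        : (n, d) features, one row per oriented k-simplex
    # A_signed : (n, n) signed upper adjacency; +/-1 when two k-simplices
    #            share a (k+1)-simplex, sign given by relative orientation
    # W, a     : (d, h) projection and (2h,) GAT-style scoring vector
    H = X @ W
    h = H.shape[1]
    Habs = np.abs(H)  # absolute values make the scores orientation-invariant
    scores = (Habs @ a[:h])[:, None] + (Habs @ a[h:])[None, :]
    mask = (A_signed != 0) | np.eye(len(X), dtype=bool)  # neighbours + self
    scores = np.where(mask, scores, -np.inf)
    alpha = softmax(scores, axis=1)  # dynamic weights on each interaction
    S = A_signed + np.eye(len(X))    # messages carry relative orientations
    return (alpha * S) @ H

# Toy complex: the three oriented edges of one filled triangle (0, 1, 2).
# Signed upper adjacency comes from the boundary matrix B2 of the triangle.
B2 = np.array([[1.0], [1.0], [-1.0]])  # coefficients of edges [0,1],[1,2],[0,2]
A_signed = B2 @ B2.T
np.fill_diagonal(A_signed, 0.0)

X = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 6))
a = rng.normal(size=(12,))

# Orientation equivariance check: flipping edge 1 (X -> DX, A -> DAD) must
# simply flip the corresponding output row (out -> D out).
D = np.diag([1.0, -1.0, 1.0])
out = sat_layer(X, A_signed, W, a)
out_flipped = sat_layer(D @ X, D @ A_signed @ D, W, a)
print(np.allclose(out_flipped, D @ out))  # True

The final check mirrors the equivariance property claimed in the abstract: negating a simplex's orientation (its feature row and its signs in the adjacency) negates only its output row and leaves everything else unchanged.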
Related papers
- TopoTune: A Framework for Generalized Combinatorial Complex Neural Networks [5.966445718346143]
Generalized CCNNs can be used to transform any (graph) neural network into its Topological Deep Learning counterpart.
In an effort to accelerate and democratize TDL, we introduce TopoTune, a lightweight software package that allows practitioners to define, build, and train GCCNs with unprecedented flexibility and ease.
arXiv Detail & Related papers (2024-10-09T04:07:20Z)
- Higher-Order Topological Directionality and Directed Simplicial Neural Networks [12.617840099457066]
We introduce a novel notion of higher-order directionality and we design Directed Simplicial Neural Networks (Dir-SNNs) based on it.
Dir-SNNs are message-passing networks operating on directed simplicial complexes.
Experiments on a synthetic source localization task demonstrate that Dir-SNNs outperform undirected SNNs when the underlying complex is directed.
arXiv Detail & Related papers (2024-09-12T20:37:14Z)
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNNs framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority in the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- Privacy-Preserving Representation Learning for Text-Attributed Networks with Simplicial Complexes [24.82096971322501]
I will study representation learning for text-attributed networks with simplicial complexes (RT4SC) via simplicial neural networks (SNNs).
I will investigate two potential attacks on the representation outputs of SNNs.
I will study a privacy-preserving, deterministic, differentially private alternating direction method of multipliers to learn secure representation outputs from SNNs.
arXiv Detail & Related papers (2023-02-09T00:32:06Z)
- Simple and Efficient Heterogeneous Graph Neural Network [55.56564522532328]
Heterogeneous graph neural networks (HGNNs) have powerful capability to embed rich structural and semantic information of a heterogeneous graph into node representations.
Existing HGNNs inherit many mechanisms from graph neural networks (GNNs) over homogeneous graphs, especially the attention mechanism and the multi-layer structure.
This paper conducts an in-depth study of these mechanisms and proposes the Simple and Efficient Heterogeneous Graph Neural Network (SeHGNN).
arXiv Detail & Related papers (2022-07-06T10:01:46Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Universal approximation property of invertible neural networks [76.95927093274392]
Invertible neural networks (INNs) are neural network architectures with invertibility by design.
Thanks to their invertibility and the tractability of their Jacobians, INNs have various machine learning applications such as probabilistic modeling, generative modeling, and representation learning.
arXiv Detail & Related papers (2022-04-15T10:45:26Z)
- BScNets: Block Simplicial Complex Neural Networks [79.81654213581977]
Simplicial neural networks (SNNs) have recently emerged as the newest direction in graph learning.
We present Block Simplicial Complex Neural Networks (BScNets) model for link prediction.
BScNets outperforms state-of-the-art models by a significant margin while maintaining low costs.
arXiv Detail & Related papers (2021-12-13T17:35:54Z)
- Simplicial Neural Networks [0.0]
We present simplicial neural networks (SNNs).
SNNs are a generalization of graph neural networks to data that live on a class of topological spaces called simplicial complexes.
We test the SNNs on the task of imputing missing data on coauthorship complexes.
arXiv Detail & Related papers (2020-10-07T20:15:01Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.