Simplicial Neural Networks
- URL: http://arxiv.org/abs/2010.03633v2
- Date: Mon, 28 Dec 2020 18:24:35 GMT
- Title: Simplicial Neural Networks
- Authors: Stefania Ebli, Michaël Defferrard, Gard Spreemann
- Abstract summary: We present simplicial neural networks (SNNs)
SNNs are a generalization of graph neural networks to data that live on a class of topological spaces called simplicial complexes.
We test the SNNs on the task of imputing missing data on coauthorship complexes.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present simplicial neural networks (SNNs), a generalization of graph
neural networks to data that live on a class of topological spaces called
simplicial complexes. These are natural multi-dimensional extensions of graphs
that encode not only pairwise relationships but also higher-order interactions
between vertices - allowing us to consider richer data, including vector fields
and $n$-fold collaboration networks. We define an appropriate notion of
convolution that we leverage to construct the desired convolutional neural
networks. We test the SNNs on the task of imputing missing data on coauthorship
complexes.
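A minimal sketch of the convolution described above, assuming it is realized as a polynomial filter of the degree-k Hodge Laplacian applied to k-cochains (one feature vector per k-simplex); the class name, parameter names, and initialization below are illustrative assumptions, not the authors' reference implementation.

# Hypothetical simplicial convolutional layer: y = sum_{p=0..degree} (L_k^p x) W_p,
# where L_k is the Hodge Laplacian of the k-simplices and x holds cochain features.
import torch
import torch.nn as nn

class SimplicialConv(nn.Module):
    def __init__(self, in_features: int, out_features: int, degree: int = 2):
        super().__init__()
        self.degree = degree
        # One weight matrix per power of the Hodge Laplacian.
        self.weights = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(in_features, out_features))
             for _ in range(degree + 1)]
        )

    def forward(self, laplacian: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # laplacian: (n_k, n_k) Hodge Laplacian; x: (n_k, in_features) cochain features.
        out = x @ self.weights[0]
        xp = x
        for p in range(1, self.degree + 1):
            xp = laplacian @ xp              # propagate features to lower/upper adjacent simplices
            out = out + xp @ self.weights[p]
        return out

# Toy usage: 4 edges (1-simplices) carrying 3-dimensional features.
L1 = torch.eye(4)                            # stand-in for a real Hodge 1-Laplacian
x = torch.randn(4, 3)
y = SimplicialConv(3, 8, degree=2)(L1, x)    # shape (4, 8)

For the imputation task mentioned in the abstract, one would mask part of the cochain (for example, unknown coauthorship values) and train stacked layers of this kind to reconstruct the missing entries; that training loop is not shown here.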
Related papers
- Binarized Simplicial Convolutional Neural Networks [6.069611493148632]
We propose a novel neural network architecture on simplicial complexes named Binarized Simplicial Convolutional Neural Networks (Bi-SCNN).
Compared with previous Simplicial Convolutional Neural Networks, the reduced model complexity of Bi-SCNN shortens execution time without sacrificing prediction performance.
Experiments with real-world citation and ocean-drifter data confirmed that our proposed Bi-SCNN is efficient and accurate.
arXiv Detail & Related papers (2024-05-07T08:05:20Z)
- Privacy-Preserving Representation Learning for Text-Attributed Networks with Simplicial Complexes [24.82096971322501]
We study learning network representations with text attributes for simplicial complexes (RT4SC) via simplicial neural networks (SNNs).
We investigate two potential attacks on the representation outputs from SNNs.
We develop a privacy-preserving deterministic differentially private alternating direction method of multipliers to learn secure representation outputs from SNNs.
arXiv Detail & Related papers (2023-02-09T00:32:06Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that gives homogeneous GNNs the ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
- Simple and Efficient Heterogeneous Graph Neural Network [55.56564522532328]
Heterogeneous graph neural networks (HGNNs) have a powerful ability to embed the rich structural and semantic information of a heterogeneous graph into node representations.
Existing HGNNs inherit many mechanisms from graph neural networks (GNNs) over homogeneous graphs, especially the attention mechanism and the multi-layer structure.
This paper conducts an in-depth study of these mechanisms and proposes the Simple and Efficient Heterogeneous Graph Neural Network (SeHGNN).
arXiv Detail & Related papers (2022-07-06T10:01:46Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Simplicial Attention Networks [4.401427499962144]
Simplicial Neural Networks (SNNs) naturally model interactions by performing message passing on simplicial complexes.
We propose Simplicial Attention Networks (SAT), a new type of simplicial network that dynamically weighs the interactions between neighbouring simplices (see the attention sketch after this list).
We demonstrate that SAT outperforms existing convolutional SNNs and GNNs on two image and trajectory classification tasks.
arXiv Detail & Related papers (2022-04-20T13:41:50Z)
- BScNets: Block Simplicial Complex Neural Networks [79.81654213581977]
Simplicial neural networks (SNNs) have recently emerged as the newest direction in graph learning.
We present the Block Simplicial Complex Neural Networks (BScNets) model for link prediction.
BScNets outperforms state-of-the-art models by a significant margin while maintaining low costs.
arXiv Detail & Related papers (2021-12-13T17:35:54Z)
- Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high-dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
arXiv Detail & Related papers (2021-10-28T14:59:41Z)
- Simplicial Convolutional Neural Networks [36.078200422283835]
Recently, signal processing and neural networks have been extended to process and learn from data on graphs.
We propose a simplicial convolutional neural network (SCNN) architecture to learn from data defined on simplices.
arXiv Detail & Related papers (2021-10-06T08:52:55Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
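As referenced in the Simplicial Attention Networks entry above, a hedged sketch of attention between neighbouring simplices; the masked dot-product scoring used here is a generic choice and not necessarily the exact SAT formulation.

# Hypothetical attention layer over k-simplices: scores are computed only between
# simplices marked as adjacent (e.g., sharing a face or coface), then softmax-normalized.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplicialAttention(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.query = nn.Linear(in_features, out_features, bias=False)
        self.key = nn.Linear(in_features, out_features, bias=False)
        self.value = nn.Linear(in_features, out_features, bias=False)

    def forward(self, x: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
        # x:         (n_k, in_features) features on k-simplices
        # adjacency: (n_k, n_k) boolean mask of neighbouring simplices (including self)
        q, k, v = self.query(x), self.key(x), self.value(x)
        scores = (q @ k.T) / k.shape[-1] ** 0.5
        scores = scores.masked_fill(~adjacency, float("-inf"))
        weights = F.softmax(scores, dim=-1)       # dynamic weights on the interactions
        return weights @ v

# Toy usage: 5 edges arranged in a ring, each attending to itself and its two neighbours.
eye = torch.eye(5, dtype=torch.bool)
adj = eye | torch.roll(eye, 1, dims=1) | torch.roll(eye, -1, dims=1)
x = torch.randn(5, 3)
y = SimplicialAttention(3, 8)(x, adj)             # shape (5, 8)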
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.