The $μ\mathcal{G}$ Language for Programming Graph Neural Networks
- URL: http://arxiv.org/abs/2407.09441v4
- Date: Tue, 15 Oct 2024 15:14:05 GMT
- Title: The $μ\mathcal{G}$ Language for Programming Graph Neural Networks
- Authors: Matteo Belenchia, Flavio Corradini, Michela Quadrini, Michele Loreti
- Abstract summary: $\mu\mathcal{G}$ is an original domain-specific language for the specification of graph neural networks.
We show how $\mu\mathcal{G}$ can be used to define some of the most popular graph neural network models.
- Score: 0.542060671850508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks form a class of deep learning architectures specifically designed to work with graph-structured data. As such, they share the inherent limitations and problems of deep learning, especially regarding the issues of explainability and trustworthiness. We propose $\mu\mathcal{G}$, an original domain-specific language for the specification of graph neural networks that aims to overcome these issues. The language's syntax is introduced, and its meaning is rigorously defined by a denotational semantics. An equivalent characterization in the form of an operational semantics is also provided and, together with a type system, is used to prove the type soundness of $\mu\mathcal{G}$. We show how $\mu\mathcal{G}$ programs can be represented in a more user-friendly graphical visualization, and provide examples of its generality by showing how it can be used to define some of the most popular graph neural network models, or to develop any custom graph processing application.
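The abstract describes $\mu\mathcal{G}$ as a language that builds graph neural networks out of small, rigorously specified operators. As a rough illustration of that compositional style, here is a minimal Python sketch of combinator-style GNN specification; the combinator names (`pre`, `psi`, `seq`) and their semantics are hypothetical stand-ins for exposition, not actual $\mu\mathcal{G}$ syntax.

```python
# Minimal combinator-style sketch of specifying a GNN compositionally.
# NOTE: hypothetical names and semantics for illustration only; this is
# not actual muG syntax.
import numpy as np

def pre(msg_fn):
    """Each node sums the messages computed from its in-neighbours."""
    def run(X, adj):
        # adj[i][j] == 1 means there is an edge j -> i
        return np.array([
            sum((msg_fn(X[j]) for j in range(len(X)) if adj[i][j]),
                start=np.zeros_like(X[0]))
            for i in range(len(X))
        ])
    return run

def psi(update_fn):
    """Apply a function to every node label independently."""
    def run(X, adj):
        return np.array([update_fn(x) for x in X])
    return run

def seq(*ops):
    """Sequential composition of graph operators."""
    def run(X, adj):
        for op in ops:
            X = op(X, adj)
        return X
    return run

# A tiny GCN-like "program": aggregate neighbour features, then a
# dense layer with ReLU applied node-wise.
W = np.random.default_rng(0).normal(size=(4, 4))
gnn = seq(pre(lambda x: x), psi(lambda x: np.maximum(x @ W, 0.0)))

X = np.eye(4)                   # one-hot node features
A = np.array([[0, 1, 0, 0],     # adjacency matrix of a 4-node path
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print(gnn(X, A).shape)          # (4, 4): one new label per node
```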
Related papers
- CliquePH: Higher-Order Information for Graph Neural Networks through Persistent Homology on Clique Graphs [15.044471983688249]
We introduce a novel method that extracts information about higher-order structures in the graph.
Our method can lead to up to $31\%$ improvements in test accuracy.
arXiv Detail & Related papers (2024-09-12T16:56:26Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning [84.35102534158621]
We study pre-trained language models that generate explanation graphs in an end-to-end manner.
We propose simple yet effective ways of graph perturbations via node and edge edit operations.
Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs.
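For concreteness, a hedged sketch of what node- and edge-level edit perturbations can look like, written with networkx; the `perturb` helper and its three edit operations are illustrative assumptions, not the authors' implementation.

```python
# Sketch of edit-based graph perturbations (node/edge edit operations)
# for generating corrupted contrastive examples. Illustrative only.
import random
import networkx as nx

def perturb(G, rng):
    """Return a slightly corrupted copy of G via one random edit."""
    H = G.copy()
    op = rng.choice(["drop_edge", "add_edge", "drop_node"])
    if op == "drop_edge" and H.number_of_edges() > 0:
        H.remove_edge(*rng.choice(list(H.edges)))
    elif op == "add_edge":
        u, v = rng.sample(list(H.nodes), 2)
        H.add_edge(u, v)
    elif op == "drop_node" and H.number_of_nodes() > 1:
        H.remove_node(rng.choice(list(H.nodes)))
    return H

rng = random.Random(0)
G = nx.path_graph(5)                              # explanation-graph stand-in
negatives = [perturb(G, rng) for _ in range(3)]   # contrastive negatives
print([(H.number_of_nodes(), H.number_of_edges()) for H in negatives])
```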
arXiv Detail & Related papers (2022-04-11T00:58:27Z)
- SpeqNets: Sparsity-aware Permutation-equivariant Graph Networks [16.14454388348814]
We present a class of universal, permutation-equivariant graph networks.
They offer fine-grained control over the trade-off between expressivity and scalability and adapt to the sparsity of the graph.
These architectures lead to vastly reduced computation times compared to standard higher-order graph networks.
arXiv Detail & Related papers (2022-03-25T21:17:09Z)
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
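To make the "inner product on graphs" idea concrete, here is a minimal sketch of a graph kernel with an explicit feature map, namely a histogram of node degrees; this deliberately crude kernel is an assumption chosen for brevity and is far weaker than the kernels the paper plugs in.

```python
# A graph kernel is an inner product k(G, G') = <f(G), f(G')> in some
# feature space. Here f counts node degrees (a toy choice).
import numpy as np

def degree_histogram(adj, max_degree):
    """Explicit feature map: histogram of node degrees."""
    degrees = adj.sum(axis=1).astype(int)
    hist = np.zeros(max_degree + 1)
    for d in degrees:
        hist[d] += 1
    return hist

def kernel(adj1, adj2, max_degree=8):
    """Kernel value = inner product of the two feature vectors."""
    return degree_histogram(adj1, max_degree) @ degree_histogram(adj2, max_degree)

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
path     = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(kernel(triangle, triangle))  # 9.0: three degree-2 nodes on each side
print(kernel(triangle, path))      # 3.0: the path has one degree-2 node
```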
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
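As a point of reference, a standard non-learnable PE that this line of work builds on is the Laplacian eigenvector encoding; the sketch below computes it with NumPy and concatenates it to the node features. The paper's learnable-PE machinery is not reproduced here, and `k=2` is an arbitrary choice.

```python
# Laplacian eigenvector positional encodings, injected at the input layer.
import numpy as np

def laplacian_pe(adj, k):
    """k smallest non-trivial Laplacian eigenvectors as node PE."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    eigvals, eigvecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]               # skip the constant eigenvector

adj = np.array([[0, 1, 0, 0],                # a 4-node path graph
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 3))  # raw node features
pe = laplacian_pe(adj, k=2)
X_in = np.concatenate([X, pe], axis=1)       # inject PE at the input layer
print(X_in.shape)                            # (4, 5)
```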
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- Learning through structure: towards deep neuromorphic knowledge graph embeddings [0.5906031288935515]
We propose a strategy to map deep graph learning architectures for knowledge graph reasoning to neuromorphic architectures.
Based on the insight that randomly initialized and untrained graph neural networks are able to preserve local graph structures, we compose a frozen neural network with shallow knowledge graph embedding models.
We experimentally show that already on conventional computing hardware, this leads to a significant speedup and memory reduction while maintaining a competitive performance level.
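A minimal sketch of the frozen-network insight under stated assumptions: mean-aggregation message passing with random, untrained weights still produces structure-aware features, which a shallow TransE-style scorer can consume. Every concrete choice here (dimensions, tanh, the scoring function) is illustrative, not the authors' architecture.

```python
# Frozen random GNN features feeding a shallow knowledge-graph scorer.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8
adj = (rng.random((n, n)) < 0.4).astype(float)
np.fill_diagonal(adj, 0)

W = rng.normal(size=(d, d))          # random weights, never trained
X = rng.normal(size=(n, d))          # initial entity embeddings

def frozen_gnn_layer(X, adj, W):
    """One round of mean aggregation plus a frozen random projection."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    return np.tanh((adj @ X / deg) @ W)

H = frozen_gnn_layer(frozen_gnn_layer(X, adj, W), adj, W)

# Shallow TransE-style score for a triple (head, relation, tail):
r = rng.normal(size=d)
score = -np.linalg.norm(H[0] + r - H[1])
print(score)
```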
arXiv Detail & Related papers (2021-09-21T18:01:04Z)
- Learning Graph Representations [0.0]
Graph Neural Networks (GNNs) provide an efficient way to gain insight into large, dynamic graph datasets.
In this paper, we discuss graph convolutional neural networks, graph autoencoders, and spatial-temporal graph neural networks.
arXiv Detail & Related papers (2021-02-03T12:07:55Z)
- Natural Graph Networks [80.77570956520482]
We show that the more general concept of naturality is sufficient for a graph network to be well-defined.
We define global and local natural graph networks, the latter of which are as scalable as conventional message passing graph neural networks.
arXiv Detail & Related papers (2020-07-16T14:19:06Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
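A sketch of the InfoNCE-style contrastive loss that this family of pre-training frameworks optimizes, in plain NumPy; GCC's encoder and subgraph-instance sampling are not reproduced, and the temperature value is an arbitrary assumption.

```python
# InfoNCE contrastive loss over paired embeddings of graph instances.
import numpy as np

def info_nce(queries, keys, temperature=0.07):
    """Each query's positive key is the one at the same index."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature            # similarity of every pair
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))        # cross-entropy on the diagonal

rng = np.random.default_rng(0)
z1 = rng.normal(size=(16, 32))                # embeddings of subgraph views
z2 = z1 + 0.1 * rng.normal(size=(16, 32))     # augmented views of the same items
print(info_nce(z1, z2))                       # low loss: paired views agree
```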
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Differentiable Graph Module (DGM) for Graph Convolutional Networks [44.26665239213658]
The Differentiable Graph Module (DGM) is a learnable function that predicts edge probabilities in the graph that are optimal for the downstream task.
We provide an extensive evaluation of applications from the domains of healthcare (disease prediction), brain imaging (age prediction), computer graphics (3D point cloud segmentation), and computer vision (zero-shot learning).
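A minimal sketch of the core mechanism, assuming a Gaussian-kernel mapping from pairwise embedding distances to edge probabilities; the temperature and the hard thresholding at the end are simplifications for inspection, not the paper's actual sampling scheme.

```python
# Turn pairwise embedding distances into edge probabilities, so the
# graph itself is differentiable w.r.t. the downstream loss.
import numpy as np

def edge_probabilities(Z, temperature=1.0):
    """P[i, j] = exp(-t * ||z_i - z_j||^2): nearby points connect more."""
    sq_dists = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    P = np.exp(-temperature * sq_dists)
    np.fill_diagonal(P, 0.0)                 # no self-loops
    return P

rng = np.random.default_rng(0)
Z = rng.normal(size=(6, 2))                  # learned node embeddings
P = edge_probabilities(Z)
adj = (P > 0.5).astype(float)                # discretize for inspection
print(P.round(2))
print(adj.sum(), "edges kept at threshold 0.5")
```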
arXiv Detail & Related papers (2020-02-11T12:59:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.