Graph Neural Networks for Graph Drawing
- URL: http://arxiv.org/abs/2109.10061v1
- Date: Tue, 21 Sep 2021 09:58:02 GMT
- Title: Graph Neural Networks for Graph Drawing
- Authors: Matteo Tiezzi, Gabriele Ciravegna and Marco Gori
- Abstract summary: We propose a novel framework for the development of Graph Neural Drawers (GND).
GNDs rely on neural computation for constructing efficient and complex maps.
We prove that this mechanism can be guided by loss functions computed by means of Feedforward Neural Networks.
- Score: 17.983238300054527
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Drawing techniques have been developed in the last few years with the
purpose of producing aesthetically pleasing node-link layouts. Recently, the
employment of differentiable loss functions has paved the road to the massive
usage of Gradient Descent and related optimization algorithms. In this paper,
we propose a novel framework for the development of Graph Neural Drawers (GND),
machines that rely on neural computation for constructing efficient and complex
maps. GNDs are Graph Neural Networks (GNNs) whose learning process can be driven
by any provided loss function, such as the ones commonly employed in Graph
Drawing. Moreover, we prove that this mechanism can be guided by loss functions
computed by means of Feedforward Neural Networks, on the basis of supervision
hints that express beauty properties, like the minimization of crossing edges.
In this context, we show that GNNs can nicely be enriched by positional
features to also deal with unlabelled vertices. We provide a proof-of-concept
by constructing a loss function for edge crossings and provide quantitative
and qualitative comparisons among different GNN models working under the
proposed framework.
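Read concretely, the abstract describes a pipeline that is straightforward to sketch: a GNN maps (positional) node features to 2D coordinates, and any differentiable Graph Drawing loss drives its training. The snippet below is a minimal illustration, not the authors' implementation: the `TinyDrawer` network, the 6-cycle toy graph, the random positional features, and the use of classical stress in place of the paper's FFNN-based edge-crossing loss are all assumptions made for the sketch.

```python
# Minimal sketch (not the authors' code) of a Graph Neural Drawer-style
# pipeline: a small message-passing network maps positional node features
# to 2D coordinates and is trained end-to-end on a differentiable
# Graph Drawing loss (classical stress here).
import torch
import torch.nn as nn

torch.manual_seed(0)

n = 6
edges = [(i, (i + 1) % n) for i in range(n)]  # toy graph: a 6-cycle
A = torch.zeros(n, n)
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

# Unlabelled vertices: random positional features stand in for the
# positional encodings discussed in the paper.
X = torch.randn(n, 8)

class TinyDrawer(nn.Module):
    """Two rounds of mean-neighbour aggregation, then a 2D readout."""
    def __init__(self, d_in, d_hidden):
        super().__init__()
        self.lin1 = nn.Linear(d_in, d_hidden)
        self.lin2 = nn.Linear(d_hidden, d_hidden)
        self.out = nn.Linear(d_hidden, 2)  # layout coordinates

    def forward(self, X, A):
        deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
        H = torch.relu(self.lin1((A @ X) / deg))
        H = torch.relu(self.lin2((A @ H) / deg))
        return self.out(H)

# Graph-theoretic distances on the cycle, used as stress targets.
D = torch.tensor([[min(abs(i - j), n - abs(i - j)) for j in range(n)]
                  for i in range(n)], dtype=torch.float)

def stress(P, D):
    """Classical stress: sum over i != j of d_ij^-2 (||p_i - p_j|| - d_ij)^2."""
    diff = P.unsqueeze(0) - P.unsqueeze(1)            # (n, n, 2) pairwise offsets
    dist = (diff.pow(2).sum(-1) + 1e-9).sqrt()        # stabilized Euclidean distances
    mask = ~torch.eye(n, dtype=torch.bool)            # skip i == j terms
    return (D[mask].pow(-2) * (dist[mask] - D[mask]).pow(2)).sum() / 2

model = TinyDrawer(8, 32)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    stress(model(X, A), D).backward()
    opt.step()

print(f"final stress: {stress(model(X, A), D).item():.4f}")
```

Swapping `stress` for a learned surrogate (for instance, a small feedforward network trained on supervision hints to score whether two drawn edges cross) would recover the spirit of the paper's FFNN-driven edge-crossing loss; the rest of the pipeline stays the same.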
Related papers
- Graph Structure Prompt Learning: A Novel Methodology to Improve Performance of Graph Neural Networks [13.655670509818144]
We propose a novel Graph structure Prompt Learning method (GPL) to enhance the training of Graph Neural Networks (GNNs).
GPL employs task-independent graph structure losses to encourage GNNs to learn intrinsic graph characteristics while simultaneously solving downstream tasks.
In experiments on eleven real-world datasets, GNNs trained with neural prediction significantly outperform their original performance on node classification, graph classification, and edge prediction tasks.
arXiv Detail & Related papers (2024-07-16T03:59:18Z) - Probability Passing for Graph Neural Networks: Graph Structure and Representations Joint Learning [8.392545965667288]
Graph Neural Networks (GNNs) have achieved notable success in the analysis of non-Euclidean data across a wide range of domains.
When a reliable graph structure is not available, Latent Graph Inference (LGI) is used to infer a task-specific latent structure by computing similarities or edge probabilities from node features.
We introduce a novel method called Probability Passing to refine the generated graph structure by aggregating edge probabilities of neighboring nodes.
arXiv Detail & Related papers (2024-07-15T13:01:47Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Gradient Gating for Deep Multi-Rate Learning on Graphs [62.25886489571097]
We present Gradient Gating (G$^2$), a novel framework for improving the performance of Graph Neural Networks (GNNs).
The framework gates the output of GNN layers with a mechanism that enables multi-rate flow of message-passing information across the nodes of the underlying graph.
arXiv Detail & Related papers (2022-10-02T13:19:48Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike methods based on the Lottery Ticket Hypothesis (LTH), the proposed CGP approach requires no re-training, which significantly reduces computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - Graph Partner Neural Networks for Semi-Supervised Learning on Graphs [16.489177915147785]
Graph Convolutional Networks (GCNs) are powerful for processing graph-structured data and have achieved state-of-the-art performance in several tasks such as node classification, link prediction, and graph classification.
Deep GCNs inevitably suffer from over-smoothing, whereby node representations tend to become indistinguishable after repeated graph convolution operations.
We propose the Graph Partner Neural Network (GPNN), which incorporates a de-parameterized GCN and a parameter-sharing scheme.
arXiv Detail & Related papers (2021-10-18T10:56:56Z) - Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z) - Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks (a minimal sketch of this edge-penalization idea follows the list below).
We show that PTDNet can significantly improve the performance of GNNs, with larger gains on noisier datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z) - A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph-structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z) - Implicit Graph Neural Networks [46.0589136729616]
We propose a graph learning framework called Implicit Graph Neural Networks (IGNN).
IGNNs consistently capture long-range dependencies and outperform state-of-the-art GNN models.
arXiv Detail & Related papers (2020-09-14T06:04:55Z) - Scattering GCN: Overcoming Oversmoothness in Graph Convolutional
Networks [0.0]
Graph convolutional networks (GCNs) have shown promising results in processing graph data by extracting structure-aware features.
Here, we propose to augment conventional GCNs with geometric scattering transforms and residual convolutions.
The former enables band-pass filtering of graph signals, thus alleviating the so-called oversmoothing often encountered in GCNs.
arXiv Detail & Related papers (2020-03-18T18:03:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.