Deep Neural Network for DrawiNg Networks, (DNN)^2
- URL: http://arxiv.org/abs/2108.03632v2
- Date: Tue, 10 Aug 2021 13:30:10 GMT
- Title: Deep Neural Network for DrawiNg Networks, (DNN)^2
- Authors: Loann Giovannangeli, Frederic Lalanne, David Auber, Romain Giot and
Romain Bourqui
- Abstract summary: We present a novel graph drawing framework called (DNN)^2, Deep Neural Network for DrawiNg Networks.
We show that (DNN)^2 performs well; the results are encouraging, as the Deep Learning approach to Graph Drawing is novel and many directions for future work are identified.
- Score: 1.5749416770494706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: By leveraging recent progress of stochastic gradient descent methods, several
works have shown that graphs could be efficiently laid out through the
optimization of a tailored objective function. Meanwhile, Deep Learning (DL)
techniques have achieved great performance in many applications. We
demonstrate that it is possible to use DL techniques to learn a graph-to-layout
sequence of operations, guided by a graph-related objective function. In this
paper, we present a novel graph drawing framework called (DNN)^2: Deep Neural
Network for DrawiNg Networks. Our method uses Graph Convolution Networks to
learn a model. Learning is achieved by optimizing a graph topology related loss
function that evaluates the layouts (DNN)^2 generates during training. Once
trained, the (DNN)^2 model is able to quickly lay out any input graph. We
experiment with (DNN)^2 and statistically compare it to optimization-based and
classical graph layout algorithms. The results show that (DNN)^2 performs well
and are encouraging, as the Deep Learning approach to Graph Drawing is novel
and many directions for future work are identified.
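The abstract describes the pipeline only at a high level, so the following is a minimal, single-graph sketch of the idea in PyTorch: a two-layer Graph Convolution Network maps node features to 2D coordinates and is trained by minimizing a stress-style, topology-related loss against shortest-path distances. The feature encoding, architecture, loss details, and hyperparameters are all illustrative assumptions, not the authors' implementation; the actual (DNN)^2 model is trained across many graphs so that, once trained, it generalizes to unseen inputs.

```python
# Illustrative sketch only: a GCN trained with a stress loss on one toy graph.
import torch
import torch.nn as nn
import networkx as nx

torch.manual_seed(0)

# Toy input graph and its pairwise graph-theoretic (shortest-path) distances.
g = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=0)
n = g.number_of_nodes()
sp = dict(nx.all_pairs_shortest_path_length(g))
d = torch.tensor([[float(sp[i][j]) for j in range(n)] for i in range(n)])

# Symmetric-normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2.
a = torch.tensor(nx.to_numpy_array(g), dtype=torch.float32) + torch.eye(n)
d_inv_sqrt = a.sum(1).rsqrt().diag()
a_hat = d_inv_sqrt @ a @ d_inv_sqrt

# Random node features stand in for whatever input encoding the paper uses.
x = torch.randn(n, 16)

class GCNLayout(nn.Module):
    """Two graph convolutions mapping node features to 2D coordinates."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, 2, bias=False)

    def forward(self, x, a_hat):
        h = torch.relu(a_hat @ self.w1(x))  # aggregate neighbors, nonlinearity
        return a_hat @ self.w2(h)           # second convolution -> positions

def stress(pos, d):
    """Stress loss: sum over pairs of (||p_i - p_j|| - d_ij)^2 / d_ij^2."""
    diff = pos.unsqueeze(1) - pos.unsqueeze(0)
    dist = (diff.pow(2).sum(-1) + 1e-9).sqrt()     # epsilon keeps grads finite
    mask = ~torch.eye(len(pos), dtype=torch.bool)  # drop the i == j terms
    return (((dist - d) ** 2)[mask] / (d ** 2)[mask]).sum()

model = GCNLayout(16, 32)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for step in range(500):
    opt.zero_grad()
    loss = stress(model(x, a_hat), d)
    loss.backward()
    opt.step()

layout = model(x, a_hat).detach()  # (n, 2) coordinates, ready to draw
```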
Related papers
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of finetuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity at modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors to Sequences [55.329402218608365]
We propose Neighbor2Seq to transform the hierarchical neighborhood of each node into a sequence (a toy sketch appears after this list).
We evaluate our method on a massive graph with more than 111 million nodes and 1.6 billion edges.
Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs.
arXiv Detail & Related papers (2022-02-07T16:38:36Z)
- Learning to Evolve on Dynamic Graphs [5.1521870302904125]
Learning to Evolve on Dynamic Graphs (LEDG) is a novel algorithm that jointly learns graph information and time information.
LEDG is model-agnostic and can train any message-passing-based graph neural network (GNN) on dynamic graphs.
arXiv Detail & Related papers (2021-11-13T04:09:30Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs sampled from the graphon, with edges drawn Bernoulli from its pairwise probabilities (a minimal sampling sketch appears after this list).
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- GraphTheta: A Distributed Graph Neural Network Learning System With Flexible Training Strategy [5.466414428765544]
We present a new distributed graph learning system GraphTheta.
It supports multiple training strategies and enables efficient and scalable learning on big graphs.
This work represents the largest edge-attributed GNN learning task conducted on a billion-scale network in the literature.
arXiv Detail & Related papers (2021-04-21T14:51:33Z)
- Lifelong Graph Learning [6.282881904019272]
We bridge graph learning and lifelong learning by converting a continual graph learning problem to a regular graph learning problem.
We show that feature graph networks (FGN) achieve superior performance in two applications, i.e., lifelong human action recognition with wearable devices and feature matching.
arXiv Detail & Related papers (2020-09-01T18:21:34Z)
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved a remarkable success in many graph-based applications, such as node classification, link prediction, and community detection.
However, for some kinds of graph applications, such as graph compression and edge partition, it is hard to reduce them to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications by a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)
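The Neighbor2Seq entry above turns each node's hierarchical neighborhood into a sequence so that sequence models can run on massive graphs. Since the listing only names the idea, here is a toy sketch of one plausible reading: mean-pool features at each BFS hop and stack the hops into a per-node sequence. The pooling choice and hop count are assumptions for illustration, not the paper's exact construction.

```python
# Illustrative neighborhood-to-sequence transform, not the paper's exact method.
import numpy as np
import networkx as nx

def neighbor2seq(g, feats, hops=3):
    """Mean-pool node features at each BFS hop, stacking hops into a sequence."""
    seqs = []
    for v in g.nodes:
        layers = nx.single_source_shortest_path_length(g, v, cutoff=hops)
        seq = []
        for h in range(hops + 1):
            ids = [u for u, dist in layers.items() if dist == h]
            seq.append(feats[ids].mean(0) if ids else np.zeros(feats.shape[1]))
        seqs.append(np.stack(seq))  # (hops + 1, feat_dim) sequence for node v
    return np.stack(seqs)           # (n, hops + 1, feat_dim)

g = nx.karate_club_graph()
feats = np.random.default_rng(0).normal(size=(g.number_of_nodes(), 8))
print(neighbor2seq(g, feats).shape)  # (34, 4, 8)
```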
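The "Increase and Conquer" entry trains GNNs on graphs of growing size sampled from a graphon. As a hypothetical NumPy sketch of that sampling step: node latents are drawn uniformly from [0, 1] and each edge (i, j) is included with probability W(u_i, u_j). The graphon W below is an arbitrary smooth example, not one from the paper.

```python
# Illustrative graphon sampling; the graphon W is an arbitrary example.
import numpy as np

def sample_from_graphon(w, n, rng):
    """Sample an n-node undirected graph with edges ~ Bernoulli(W(u_i, u_j))."""
    u = rng.uniform(size=n)                   # latent position of each node
    probs = w(u[:, None], u[None, :])         # pairwise edge probabilities
    draws = rng.uniform(size=(n, n)) < probs  # Bernoulli draws
    adj = np.triu(draws, k=1)                 # keep one draw per pair
    return (adj | adj.T).astype(np.int8)      # symmetrize, zero diagonal

rng = np.random.default_rng(0)
w = lambda x, y: 0.8 * np.exp(-3.0 * np.abs(x - y))  # example smooth graphon

# Growing sequence of graphs from the same graphon, as in curriculum training.
for n in (50, 100, 200):
    adj = sample_from_graphon(w, n, rng)
    print(n, "nodes,", adj.sum() // 2, "edges")
```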
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.