State of the Art and Potentialities of Graph-level Learning
- URL: http://arxiv.org/abs/2301.05860v3
- Date: Thu, 25 May 2023 07:03:59 GMT
- Title: State of the Art and Potentialities of Graph-level Learning
- Authors: Zhenyu Yang, Ge Zhang, Jia Wu, Jian Yang, Quan Z. Sheng, Shan Xue,
Chuan Zhou, Charu Aggarwal, Hao Peng, Wenbin Hu, Edwin Hancock, and Pietro
Liò
- Abstract summary: Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
- Score: 54.68482109186052
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graphs have a superior ability to represent relational data, like chemical
compounds, proteins, and social networks. Hence, graph-level learning, which
takes a set of graphs as input, has been applied to many tasks including
comparison, regression, classification, and more. Traditional approaches to
learning a set of graphs heavily rely on hand-crafted features, such as
substructures. But while these methods benefit from good interpretability, they
often suffer from computational bottlenecks as they cannot skirt the graph
isomorphism problem. Conversely, deep learning has helped graph-level learning
adapt to the growing scale of graphs by extracting features automatically and
encoding graphs into low-dimensional representations. As a result, these deep
graph learning methods have been responsible for many successes. Yet, there is
no comprehensive survey that reviews graph-level learning starting with
traditional learning and moving through to the deep learning approaches. This
article fills this gap and frames the representative algorithms into a
systematic taxonomy covering traditional learning, graph-level deep neural
networks, graph-level graph neural networks, and graph pooling. To ensure a
thoroughly comprehensive survey, the evolutions, interactions, and
communications between methods from four different branches of development are
also examined. This is followed by a brief review of the benchmark data sets,
evaluation metrics, and common downstream applications. The survey concludes
with a broad overview of 12 current and future directions in this booming
field.
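A minimal sketch of the pipeline the survey organizes: node features are aggregated over edges and then pooled into a single graph-level vector that downstream comparison, regression, or classification can consume. This is a toy NumPy illustration under our own assumptions (one neighbor-averaging step, a mean-pooling readout, random weights), not a method from the paper.

```python
import numpy as np

def graph_embedding(adj: np.ndarray, feats: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Toy graph-level encoder: one neighbor-averaging step, then mean pooling.

    adj   : (n, n) adjacency matrix of a single graph
    feats : (n, d) node feature matrix
    w     : (d, k) weight matrix projecting to a k-dimensional representation
    """
    deg = adj.sum(axis=1, keepdims=True) + 1e-8   # node degrees (avoid divide-by-zero)
    msgs = (adj @ feats) / deg                    # average the features of each node's neighbors
    node_emb = np.tanh((feats + msgs) @ w)        # combine self and neighbor information
    return node_emb.mean(axis=0)                  # mean-pooling readout -> one vector per graph

# usage: a 4-node path graph with random features
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = rng.normal(size=(4, 3))
w = rng.normal(size=(3, 8))
print(graph_embedding(adj, feats, w).shape)  # (8,)
```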
Related papers
- Knowledge Probing for Graph Representation Learning [12.960185655357495]
We propose a novel graph probing framework (GraphProbe) to investigate and interpret whether the family of graph learning methods has encoded different levels of knowledge in graph representation learning.
Based on the intrinsic properties of graphs, we design three probes to systematically investigate the graph representation learning process from different perspectives.
We construct a thorough evaluation benchmark with nine representative graph learning methods, spanning random-walk-based approaches, basic graph neural networks, and self-supervised graph methods, and probe them on six benchmark datasets for node classification, link prediction, and graph classification.
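The probing idea can be sketched generically (this is not GraphProbe's actual probe design): freeze the learned embeddings, fit a simple classifier to predict a structural property, and read the probe's accuracy as evidence of whether that property is encoded. The data below is synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# emb: frozen embeddings from some pre-trained graph encoder (synthetic stand-in here)
# prop: a structural property to probe for, e.g. a binarized degree or motif count
rng = np.random.default_rng(0)
emb = rng.normal(size=(500, 64))
prop = (emb[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(emb, prop, test_size=0.3, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # simple linear probe
print("probe accuracy:", probe.score(X_te, y_te))          # high accuracy -> property is linearly encoded
```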
arXiv Detail & Related papers (2024-08-07T16:27:45Z) - A Survey of Imbalanced Learning on Graphs: Problems, Techniques, and
Future Directions [64.84521350148513]
Graphs represent interconnected structures prevalent in a myriad of real-world scenarios.
Effective graph analytics, such as graph learning methods, enables users to gain profound insights from graph data.
However, these methods often suffer from data imbalance, a common issue in graph data where certain segments possess abundant data while others are scarce.
This necessitates the emerging field of imbalanced learning on graphs, which aims to correct these data distribution skews for more accurate and representative learning outcomes.
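A common, generic correction for such skews (a sketch of inverse-frequency re-weighting, not a technique specific to that survey) is shown below; the label counts are hypothetical.

```python
import numpy as np

# labels of a set of graphs; class 2 is deliberately rare (hypothetical data)
labels = np.array([0] * 80 + [1] * 15 + [2] * 5)

counts = np.bincount(labels)                     # samples per class
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency class weights
per_sample_w = weights[labels]                   # weight attached to each training graph

print(dict(enumerate(np.round(weights, 2))))     # rare classes receive larger weights
# These weights can scale a cross-entropy loss or a sampler so that rare graph
# classes contribute proportionally more to training.
```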
arXiv Detail & Related papers (2023-08-26T09:11:44Z) - Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
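For orientation, a much simpler augmentation than the paper's spectral operations is random edge dropping; the sketch below (our own illustration, not the proposed transformations) produces two views of one graph that can serve as a positive pair for a contrastive objective.

```python
import numpy as np

def drop_edges(adj: np.ndarray, p: float, rng: np.random.Generator) -> np.ndarray:
    """Augment an undirected graph by randomly removing a fraction p of its edges
    (a generic augmentation, not the paper's spectrally motivated operations)."""
    view = adj.copy()
    rows, cols = np.triu_indices_from(adj, k=1)
    drop = (adj[rows, cols] > 0) & (rng.random(rows.shape[0]) < p)
    view[rows[drop], cols[drop]] = 0
    view[cols[drop], rows[drop]] = 0  # keep the augmented view symmetric
    return view

rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) > 0.6).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                     # random undirected graph with no self-loops
view1, view2 = drop_edges(adj, 0.2, rng), drop_edges(adj, 0.2, rng)
# view1 and view2 form a positive pair for a contrastive objective such as InfoNCE.
```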
arXiv Detail & Related papers (2023-02-06T16:26:29Z) - Graph Learning and Its Advancements on Large Language Models: A Holistic Survey [37.01696685233113]
This survey focuses on the most recent advancements in integrating graph learning with pre-trained language models.
We provide a holistic review that analyzes current works from the perspective of graph structure, and discusses the latest applications, trends, and challenges in graph learning.
arXiv Detail & Related papers (2022-12-17T22:05:07Z) - Graph-level Neural Networks: Current Progress and Future Directions [61.08696673768116]
Graph-level Neural Networks (GLNNs, deep learning-based graph-level learning methods) have attracted considerable attention due to their superiority in modeling high-dimensional data.
We propose a systematic taxonomy covering GLNNs upon deep neural networks, graph neural networks, and graph pooling.
arXiv Detail & Related papers (2022-05-31T06:16:55Z) - Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined as Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
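A toy rendering of the discrepancy idea (an illustration of the general principle, not D-SLA's actual objective): the embedding distance between an original graph and its perturbed version is pushed toward the amount of perturbation, here counted as edited edges.

```python
import numpy as np

def discrepancy_loss(z_orig: np.ndarray, z_pert: np.ndarray, n_edits: int) -> float:
    """Toy discrepancy-style objective: the distance between the embeddings of an
    original and a perturbed graph should reflect how strongly the graph was edited.
    This is an illustrative stand-in, not the loss proposed in the paper."""
    dist = np.linalg.norm(z_orig - z_pert)
    return float((dist - n_edits) ** 2)

# usage with hypothetical embeddings of one graph and a 3-edge perturbation of it
rng = np.random.default_rng(0)
z_g = rng.normal(size=16)
z_g_pert = z_g + 0.5 * rng.normal(size=16)
print(discrepancy_loss(z_g, z_g_pert, n_edits=3))
```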
arXiv Detail & Related papers (2022-02-07T08:04:59Z) - Graph Learning: A Survey [38.245120261668816]
We present a comprehensive overview on the state-of-the-art of graph learning.
Special attention is paid to four categories of existing graph learning methods, including graph signal processing, matrix factorization, random walk, and deep learning.
We examine graph learning applications in areas such as text, images, science, knowledge graphs, and optimization.
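Of the four categories listed, the random-walk family is easy to sketch: sample walks from every node and feed them to a skip-gram model, DeepWalk-style. The code below is a minimal sampler under our own assumptions, not any specific surveyed algorithm.

```python
import numpy as np

def random_walks(adj, walk_len, walks_per_node, rng):
    """Sample uniform random walks from each node of an unweighted graph."""
    walks = []
    for start in range(adj.shape[0]):
        for _ in range(walks_per_node):
            walk, cur = [start], start
            for _ in range(walk_len - 1):
                nbrs = np.flatnonzero(adj[cur])
                if nbrs.size == 0:      # dead end: stop this walk early
                    break
                cur = int(rng.choice(nbrs))
                walk.append(cur)
            walks.append(walk)
    return walks

# the walks would then be fed to a skip-gram model to learn node embeddings
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(random_walks(adj, walk_len=4, walks_per_node=1, rng=rng))
```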
arXiv Detail & Related papers (2021-05-03T09:06:01Z) - Learning Graph Representations [0.0]
Graph Neural Networks (GNNs) are efficient ways to get insight into large dynamic graph datasets.
In this paper, we discuss graph convolutional neural networks, graph autoencoders, and spatial-temporal graph neural networks.
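One well-known graph-autoencoder formulation (a minimal sketch of the standard inner-product decoder, not necessarily the exact models discussed in that paper) reconstructs edge probabilities from node embeddings as sigmoid(z @ z.T):

```python
import numpy as np

def gae_reconstruction(z: np.ndarray) -> np.ndarray:
    """Inner-product decoder of a graph autoencoder: given node embeddings z,
    predicted edge probabilities are sigmoid(z @ z.T)."""
    logits = z @ z.T
    return 1.0 / (1.0 + np.exp(-logits))

# hypothetical 2-dim embeddings for 4 nodes; similar embeddings -> likely edges
z = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.2], [-0.8, -0.1]])
print(np.round(gae_reconstruction(z), 2))
```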
arXiv Detail & Related papers (2021-02-03T12:07:55Z) - Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z) - Machine Learning on Graphs: A Model and Comprehensive Taxonomy [22.73365477040205]
We bridge the gap between graph neural networks, network embedding and graph regularization models.
Specifically, we propose a Graph Encoder Decoder Model (GraphEDM), which generalizes popular algorithms for semi-supervised learning on graphs.
arXiv Detail & Related papers (2020-05-07T18:00:02Z)