Hyperparameter-free and Explainable Whole Graph Embedding
- URL: http://arxiv.org/abs/2108.02113v1
- Date: Wed, 4 Aug 2021 15:30:52 GMT
- Title: Hyperparameter-free and Explainable Whole Graph Embedding
- Authors: Hao Wang, Yue Deng, Linyuan Lü, Guanrong Chen
- Abstract summary: Graph representation learning attempts to learn a lower-dimensional representation vector for each node or the whole graph.
This paper proposes a new whole graph embedding method combining the DHC (Degree, H-index and Coreness) theorem and Shannon Entropy (E), abbreviated as DHC-E.
The proposed approach also performs well in lower-dimensional graph visualization.
- Score: 16.03671347701557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many real-world complex systems can be described as graphs. For a
large-scale sparse graph, a node's adjacency vector is a long and sparse
representation, which limits the practical use of existing machine learning
methods on nodal features. In practice, graph embedding (graph representation
learning) attempts to learn a lower-dimensional representation vector for each
node or for the whole graph while preserving the most essential information of
the graph. Since various machine learning methods can process lower-dimensional
vectors efficiently, graph embedding has recently attracted considerable
attention. However, most node embedding and whole graph embedding methods
suffer from increasingly sophisticated methodology, costly hyperparameter
optimization, and low explainability. This paper proposes a
hyperparameter-free, extensible, and explainable whole graph embedding method
that combines the DHC (Degree, H-index and Coreness) theorem with Shannon
Entropy (E), abbreviated as DHC-E. The new whole graph embedding scheme strikes
a trade-off between simplicity and quality on supervised classification tasks
using molecular, social, and brain networks. In addition, the proposed approach
performs well in lower-dimensional graph visualization. Overall, the new
methodology is simple, hyperparameter-free, extensible, and explainable, with
promising potential for graph classification, prediction, and lower-dimensional
graph visualization.
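To make the idea above concrete, here is a minimal sketch of a DHC-E-style whole graph embedding in Python with networkx. It starts from node degrees, repeatedly applies the H-index operator (which, by the DHC theorem, converges to each node's coreness), and records the Shannon entropy $H = -\sum_i p_i \log_2 p_i$ of the node values at every iteration; the resulting entropy sequence serves as the embedding. The function names (`h_index`, `shannon_entropy`, `dhc_e_embedding`) are illustrative, and details such as the logarithm base and convergence handling are assumptions, not the authors' released implementation.

```python
import math
from collections import Counter

import networkx as nx


def h_index(values):
    # Classic H-index: the largest h such that at least h of the
    # values are greater than or equal to h.
    h = 0
    for i, v in enumerate(sorted(values, reverse=True), start=1):
        if v >= i:
            h = i
        else:
            break
    return h


def shannon_entropy(values):
    # Shannon entropy (base 2, an assumption) of the empirical
    # distribution of the node values.
    values = list(values)
    n = len(values)
    counts = Counter(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


def dhc_e_embedding(G):
    # H^(0) is the degree of every node (the "D" in DHC).
    h = {v: G.degree(v) for v in G}
    embedding = [shannon_entropy(h.values())]
    while True:
        # H^(n+1)(v) is the H-index of the neighbors' current values;
        # by the DHC theorem this iteration converges to the coreness.
        h_next = {v: h_index(h[u] for u in G[v]) for v in G}
        if h_next == h:  # fixed point reached: values equal the coreness
            break
        h = h_next
        embedding.append(shannon_entropy(h.values()))
    return embedding


if __name__ == "__main__":
    G = nx.karate_club_graph()
    print(dhc_e_embedding(G))
```

Since the number of iterations to convergence depends on the graph, embeddings of different graphs have different lengths; they would presumably need to be padded or truncated to a common dimension before the supervised classification experiments mentioned in the abstract.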
Related papers
- The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph Structure [18.00833762891405]
Graph Lottery Ticket (GLT) Hypothesis: There is an extremely sparse backbone for every graph.
We study 8 key metrics of interest that directly influence the performance of graph learning algorithms.
We propose a straightforward and efficient algorithm for finding these GLTs in arbitrary graphs.
arXiv Detail & Related papers (2023-12-08T00:24:44Z)
- Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data [91.27527985415007]
Existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph.
We advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set.
arXiv Detail & Related papers (2023-06-05T07:53:52Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- A Unified Framework for Optimization-Based Graph Coarsening [5.720402020129441]
Given a large graph, graph coarsening aims to learn a smaller, tractable graph while preserving the properties of the original graph.
The proposed framework lies in the unification of graph learning and dimensionality reduction.
It is established that the learned coarsened graph is $\epsilon$-similar to the original graph, with $\epsilon \in (0,1)$.
arXiv Detail & Related papers (2022-10-02T06:31:42Z)
- ARIEL: Adversarial Graph Contrastive Learning [51.14695794459399]
ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks.
ARIEL is more robust in the face of adversarial attacks.
arXiv Detail & Related papers (2022-08-15T01:24:42Z)
- GraphHD: Efficient graph classification using hyperdimensional computing [58.720142291102135]
We present a baseline approach for graph classification with HDC.
We evaluate GraphHD on real-world graph classification problems.
Our results show that, compared with state-of-the-art Graph Neural Networks (GNNs), the proposed model achieves comparable accuracy.
arXiv Detail & Related papers (2022-05-16T17:32:58Z)
- Multi-Level Graph Contrastive Learning [38.022118893733804]
We propose a Multi-Level Graph Contrastive Learning (MLGCL) framework for learning robust representation of graph data by contrasting space views of graphs.
The original graph is a first-order approximation structure that may contain uncertainty or error, while the $k$NN graph generated from encoded features preserves high-order proximity.
Extensive experiments indicate MLGCL achieves promising results compared with the existing state-of-the-art graph representation learning methods on seven datasets.
arXiv Detail & Related papers (2021-07-06T14:24:43Z)
- Co-embedding of Nodes and Edges with Graph Neural Networks [13.020745622327894]
Graph embedding transforms and encodes graph-structured data that live in a high-dimensional, non-Euclidean feature space.
CensNet is a general graph embedding framework that embeds both nodes and edges into a latent feature space.
Our approach achieves or matches the state-of-the-art performance in four graph learning tasks.
arXiv Detail & Related papers (2020-10-25T22:39:31Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods, unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE), are proposed.
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
- Learning Product Graphs Underlying Smooth Graph Signals [15.023662220197242]
This paper devises a method to learn structured graphs from data that are given in the form of product graphs.
To this end, the graph learning problem is first posed as a linear program, which (on average) outperforms state-of-the-art graph learning algorithms.
arXiv Detail & Related papers (2020-02-26T03:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.