Expectation-Complete Graph Representations with Homomorphisms
- URL: http://arxiv.org/abs/2306.05838v2
- Date: Thu, 24 Aug 2023 06:59:44 GMT
- Title: Expectation-Complete Graph Representations with Homomorphisms
- Authors: Pascal Welke, Maximilian Thiessen, Fabian Jogl, Thomas Gärtner
- Abstract summary: We are interested in efficient alternatives that become arbitrarily expressive with increasing resources.
Our approach is based on Lovász' characterisation of graph isomorphism through an infinite dimensional vector of homomorphism counts.
- Score: 5.939858158928473
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We investigate novel random graph embeddings that can be computed in expected polynomial time and that are able to distinguish all non-isomorphic graphs in expectation. Previous graph embeddings have limited expressiveness and either cannot distinguish all graphs or cannot be computed efficiently for every graph. To be able to approximate arbitrary functions on graphs, we are interested in efficient alternatives that become arbitrarily expressive with increasing resources. Our approach is based on Lovász' characterisation of graph isomorphism through an infinite dimensional vector of homomorphism counts. Our empirical evaluation shows competitive results on several benchmark graph learning tasks.
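Lovász' characterisation says two graphs G and H are isomorphic if and only if hom(F, G) = hom(F, H) for every pattern graph F, where hom(F, G) counts the maps from V(F) to V(G) that send every edge of F onto an edge of G. A minimal brute-force sketch of such a count (the function name and exhaustive enumeration are illustrative only, not the paper's randomized expected-polynomial-time embedding):

```python
from itertools import product

def count_homomorphisms(pattern_edges, pattern_n, target_edges, target_n):
    """hom(F, G): maps V(F) -> V(G) sending every pattern edge to a target edge.

    Exhaustive over all target_n ** pattern_n maps; fine for tiny graphs only.
    """
    adj = {(u, v) for u, v in target_edges} | {(v, u) for u, v in target_edges}
    return sum(
        all((phi[u], phi[v]) in adj for u, v in pattern_edges)
        for phi in product(range(target_n), repeat=pattern_n)
    )

edge = [(0, 1)]
triangle = [(0, 1), (1, 2), (0, 2)]
print(count_homomorphisms(edge, 2, triangle, 3))      # 6: ordered adjacent pairs
print(count_homomorphisms(triangle, 3, triangle, 3))  # 6: only injective maps preserve all edges
```

The paper's contribution is to sample which pattern counts to compute, so that the resulting random finite vector still distinguishes all graphs in expectation; the sketch above only shows the underlying quantity.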
Related papers
- Towards Subgraph Isomorphism Counting with Graph Kernels [45.687427850611314]
Subgraph isomorphism counting is known to be #P-complete and requires exponential time to compute the exact solution.
We investigate the potential of graph kernels for counting subgraph isomorphisms and explore how various kernel variants augment this capability.
We present the results of extensive experiments to demonstrate the effectiveness of the enhanced graph kernels and discuss promising directions for future research.
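Subgraph isomorphism counting differs from homomorphism counting in requiring the map to be injective, which is what makes the exact problem #P-complete. A brute-force sketch for intuition (illustrative only; the paper's point is that graph kernels approximate this without the exponential enumeration):

```python
from itertools import permutations

def count_subgraph_isomorphisms(pattern_edges, pattern_n, target_edges, target_n):
    """Count injective, edge-preserving maps of the pattern into the target.

    Enumerates all ordered pattern_n-subsets of target vertices: exponential,
    as expected for an exact #P-complete computation.
    """
    adj = {(u, v) for u, v in target_edges} | {(v, u) for u, v in target_edges}
    return sum(
        all((phi[u], phi[v]) in adj for u, v in pattern_edges)
        for phi in permutations(range(target_n), pattern_n)
    )

triangle = [(0, 1), (1, 2), (0, 2)]
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(count_subgraph_isomorphisms(triangle, 3, k4, 4))  # 24 = 4 triangles x 6 orderings
```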
arXiv Detail & Related papers (2024-05-13T06:33:06Z)
- MGNet: Learning Correspondences via Multiple Graphs [78.0117352211091]
Learning correspondences aims to find correct matches within an initial correspondence set that has an uneven correspondence distribution and a low inlier rate.
Recent advances usually use graph neural networks (GNNs) to build a single type of graph or stack local graphs into the global one to complete the task.
We propose MGNet to effectively combine multiple complementary graphs.
arXiv Detail & Related papers (2024-01-10T07:58:44Z)
- The Graph Lottery Ticket Hypothesis: Finding Sparse, Informative Graph Structure [18.00833762891405]
Graph Lottery Ticket (GLT) Hypothesis: There is an extremely sparse backbone for every graph.
We study 8 key metrics of interest that directly influence the performance of graph learning algorithms.
We propose a straightforward and efficient algorithm for finding these GLTs in arbitrary graphs.
arXiv Detail & Related papers (2023-12-08T00:24:44Z)
- PlanE: Representation Learning over Planar Graphs [9.697671872347131]
This work is inspired by the classical planar graph isomorphism algorithm of Hopcroft and Tarjan.
PlanE includes architectures which can learn complete invariants over planar graphs while remaining practically scalable.
arXiv Detail & Related papers (2023-07-03T17:45:01Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z)
- Graph Coloring: Comparing Cluster Graphs to Factor Graphs [0.0]
We present a means of formulating and solving graph coloring problems with probabilistic graphical models.
In contrast to the prevalent literature that uses factor graphs for this purpose, we instead approach it from a cluster graph perspective.
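For context, graph coloring asks for an assignment of k colors to vertices so that no edge is monochromatic. A tiny exact-search sketch of the decision problem (illustrative only; the paper instead encodes the constraints in a probabilistic graphical model and runs inference on a cluster graph):

```python
from itertools import product

def is_k_colorable(edges, n, k):
    """Exhaustive search over all k ** n color assignments: exact but exponential."""
    return any(
        all(colors[u] != colors[v] for u, v in edges)
        for colors in product(range(k), repeat=n)
    )

# An odd cycle needs 3 colors.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(is_k_colorable(c5, 5, 2))  # False
print(is_k_colorable(c5, 5, 3))  # True
```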
arXiv Detail & Related papers (2021-10-05T13:57:30Z)
- GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it using a maximum entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Universal Function Approximation on Graphs [0.0]
We produce a framework for constructing universal function approximators on graph isomorphism classes.
We show how this allows us to achieve state-of-the-art performance on four different well-known datasets.
arXiv Detail & Related papers (2020-03-14T21:12:33Z)
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, we propose two novel unsupervised graph embedding methods: unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE).
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.