Universal Function Approximation on Graphs
- URL: http://arxiv.org/abs/2003.06706v3
- Date: Mon, 26 Oct 2020 07:58:11 GMT
- Title: Universal Function Approximation on Graphs
- Authors: Rickard Brüel-Gabrielsson
- Abstract summary: We produce a framework for constructing universal function approximators on graph isomorphism classes.
We show how this allows us to achieve state-of-the-art performance on four different well-known datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we produce a framework for constructing universal function
approximators on graph isomorphism classes. We prove that this framework comes
with a collection of theoretically desirable properties and enables novel
analysis. We show how it allows us to achieve state-of-the-art performance on
four well-known graph classification datasets and to separate classes of graphs
that other graph-learning methods cannot. Our approach is inspired by
persistent homology, dependency parsing for NLP, and multivalued functions. The
complexity of the underlying algorithm is O(#edges × #nodes), and code is
publicly available
(https://github.com/bruel-gabrielsson/universal-function-approximation-on-graphs).
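
The abstract names persistent homology as one ingredient of the framework. As a point of reference only, here is a minimal sketch of zero-dimensional persistence of a graph under a node filtration, computed with union-find; it illustrates that ingredient, not the paper's actual model, and all names are illustrative (see the linked repository for the real implementation).

```python
# Minimal sketch (not the paper's model): zero-dimensional persistent
# homology of a graph under a node filtration, via union-find.
def zero_dim_persistence(values, edges):
    """Return (birth, death) pairs for connected components of a graph
    whose nodes are filtered by `values`; an edge enters the filtration
    at the larger of its endpoints' values."""
    n = len(values)
    parent = list(range(n))
    birth = list(values)                      # birth value per component root

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x

    pairs = []
    for u, v in sorted(edges, key=lambda e: max(values[e[0]], values[e[1]])):
        t = max(values[u], values[v])
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                          # edge closes a cycle; no 0-dim event
        if birth[ru] > birth[rv]:
            ru, rv = rv, ru                   # elder rule: older component survives
        pairs.append((birth[rv], t))          # younger component dies at t
        parent[rv] = ru
    # components that never merge persist forever
    pairs += [(birth[i], float("inf")) for i in range(n) if find(i) == i]
    return pairs

# Toy usage: a 4-node path graph with node values 0, 3, 1, 2
print(zero_dim_persistence([0.0, 3.0, 1.0, 2.0], [(0, 1), (1, 2), (2, 3)]))
```

This sketch runs in O(#edges log #edges) for the sort plus near-linear union-find; the paper's stated O(#edges × #nodes) bound presumably covers the richer per-node computation of the full framework.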
Related papers
- Multi-View Stochastic Block Models [34.55723218769512]
We formalize a new family of models, called multi-view stochastic block models, that captures this setting.
For this model, we first study efficient algorithms that naively work on the union of multiple graphs.
Then, we introduce a new efficient algorithm that provably outperforms previous approaches by analyzing the structure of each graph separately.
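
As a rough illustration of the "union of multiple graphs" baseline mentioned above, the sketch below samples several SBM views that share one community structure and spectrally clusters their averaged adjacency. This is a generic stand-in, not the paper's algorithm, and all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sbm_view(labels, p_in, p_out):
    """One SBM view: edges appear with prob p_in inside a community and
    p_out across communities (communities are shared by all views)."""
    n = len(labels)
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    return (upper + upper.T).astype(float)

labels = np.repeat([0, 1], 50)                       # two shared communities
views = [sample_sbm_view(labels, 0.6, 0.4) for _ in range(8)]

# The naive baseline from the summary: pool all views into one "union"
# graph (here, the entrywise mean) and run a single spectral split.
union = np.mean(views, axis=0)
_, vecs = np.linalg.eigh(union)
pred = (vecs[:, -2] > 0).astype(int)                 # sign of 2nd eigenvector
acc = max(np.mean(pred == labels), np.mean(pred != labels))
print(f"union-graph spectral accuracy: {acc:.2f}")
```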
arXiv Detail & Related papers (2024-06-07T11:45:31Z)
- MGNet: Learning Correspondences via Multiple Graphs [78.0117352211091]
Correspondence learning aims to find correct correspondences in an initial set with an uneven distribution and a low inlier rate.
Recent advances usually use graph neural networks (GNNs) to build a single type of graph or stack local graphs into the global one to complete the task.
We propose MGNet to effectively combine multiple complementary graphs.
arXiv Detail & Related papers (2024-01-10T07:58:44Z)
- Expectation-Complete Graph Representations with Homomorphisms [5.939858158928473]
We are interested in efficient alternatives that become arbitrarily expressive with increasing resources.
Our approach is based on Lovász's characterisation of graph isomorphism through an infinite-dimensional vector of homomorphism counts.
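
Two standard identities make a small prefix of that homomorphism-count vector easy to compute: homomorphisms from the path with k edges correspond to k-step walks (the sum of the entries of A^k), and homomorphisms from the k-cycle correspond to closed k-walks (the trace of A^k). A minimal sketch of such a prefix, not the paper's actual scheme:

```python
import numpy as np

def hom_count_prefix(adj, k_max=4):
    """A tiny prefix of the homomorphism-count vector: hom(P_k, G) via
    the sum of entries of A^k (k-step walks) and hom(C_k, G) via the
    trace of A^k (closed k-walks; meaningful for k >= 3)."""
    a = np.asarray(adj, dtype=float)
    power = np.eye(len(a))
    feats = []
    for _ in range(k_max):
        power = power @ a
        feats.extend([power.sum(), np.trace(power)])
    return np.array(feats)

# Two non-isomorphic 4-node trees with the same edge count
path = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
star = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]], dtype=float)
print(hom_count_prefix(path))   # differs from the star already at A^2,
print(hom_count_prefix(star))   # so the prefix separates the two graphs
```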
arXiv Detail & Related papers (2023-06-09T12:12:07Z)
- CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph Similarity Learning [65.1042892570989]
We propose a contrastive graph matching network (CGMN) for self-supervised graph similarity learning.
We employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning.
We transform node representations into graph-level representations via pooling operations for graph similarity computation.
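
A minimal sketch of that last pooling step, using generic mean/sum/max pooling and cosine similarity rather than CGMN's exact operators:

```python
import numpy as np

def graph_embedding(node_reps, pooling="mean"):
    """Pool a (num_nodes, dim) node-representation matrix into one
    graph-level vector."""
    ops = {"mean": node_reps.mean, "sum": node_reps.sum, "max": node_reps.max}
    return ops[pooling](axis=0)

def graph_similarity(reps_a, reps_b):
    """Cosine similarity between two pooled graph embeddings."""
    ga, gb = graph_embedding(reps_a), graph_embedding(reps_b)
    return float(ga @ gb / (np.linalg.norm(ga) * np.linalg.norm(gb)))

rng = np.random.default_rng(1)
# Graphs of different sizes compare fine once pooled to fixed-size vectors
print(graph_similarity(rng.normal(size=(5, 16)), rng.normal(size=(7, 16))))
```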
arXiv Detail & Related papers (2022-05-30T13:20:26Z)
- Joint Graph Learning and Matching for Semantic Feature Correspondence [69.71998282148762]
We propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching.
The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object, and SPair-71k).
It outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks.
arXiv Detail & Related papers (2021-09-01T08:24:02Z)
- Group Contrastive Self-Supervised Learning on Graphs [101.45974132613293]
We study self-supervised learning on graphs using contrastive methods.
We argue that contrasting graphs in multiple subspaces enables graph encoders to capture richer characteristics.
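
A hedged sketch of what "contrasting in multiple subspaces" can look like: project two views' embeddings through several independent linear heads and average a plain InfoNCE loss over the heads. The heads and loss below are illustrative, not the paper's objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(z1, z2, temperature=0.5):
    """Simple InfoNCE: row i of z1 should match row i of z2."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def multi_subspace_loss(emb_view1, emb_view2, heads):
    """Average the contrastive loss over several projection heads,
    one head per subspace."""
    return np.mean([info_nce(emb_view1 @ w, emb_view2 @ w) for w in heads])

heads = [rng.normal(size=(32, 8)) for _ in range(4)]   # 4 subspaces
e1, e2 = rng.normal(size=(16, 32)), rng.normal(size=(16, 32))
print(multi_subspace_loss(e1, e2, heads))
```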
arXiv Detail & Related papers (2021-07-20T22:09:21Z)
- Multi-Level Graph Contrastive Learning [38.022118893733804]
We propose a Multi-Level Graph Contrastive Learning (MLGCL) framework for learning robust representations of graph data by contrasting structural and feature-space views of graphs.
The original graph is a first-order approximation of the structure and may contain uncertainty or error, while the kNN graph generated from encoded features preserves high-order proximity.
Extensive experiments indicate MLGCL achieves promising results compared with the existing state-of-the-art graph representation learning methods on seven datasets.
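
A minimal sketch of the feature-space view described above, building a kNN graph from encoded node features; the cosine metric and the value of k are assumptions, not MLGCL's exact construction.

```python
import numpy as np

def knn_graph(features, k=5):
    """Adjacency of a k-nearest-neighbour graph over encoded node
    features, using cosine similarity."""
    x = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = x @ x.T
    np.fill_diagonal(sim, -np.inf)               # exclude self loops
    adj = np.zeros_like(sim)
    nearest = np.argsort(sim, axis=1)[:, -k:]    # top-k neighbours per row
    rows = np.repeat(np.arange(len(x)), k)
    adj[rows, nearest.ravel()] = 1.0
    return np.maximum(adj, adj.T)                # symmetrize

rng = np.random.default_rng(0)
print(knn_graph(rng.normal(size=(10, 16)), k=3).sum())  # number of edge endpoints
```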
arXiv Detail & Related papers (2021-07-06T14:24:43Z)
- Structured Graph Learning for Clustering and Semi-supervised Classification [74.35376212789132]
We propose a graph learning framework to preserve both the local and global structure of data.
Our method uses the self-expressiveness of samples to capture the global structure and an adaptive-neighbor approach to respect the local structure.
Our model is equivalent to a combination of kernel k-means and k-means methods under certain conditions.
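
A minimal sketch of the self-expressiveness idea: represent each sample as a combination of the others, X ≈ XZ, here via a ridge-regularized closed form. The usual diag(Z) = 0 constraint and the paper's structure constraints are omitted for brevity.

```python
import numpy as np

def self_expressive_coefficients(x, lam=0.1):
    """Closed-form solution of min_Z ||X - X Z||_F^2 + lam ||Z||_F^2,
    where columns of X are samples: Z = (X^T X + lam I)^{-1} X^T X.
    Z[i, j] says how much sample i is used to reconstruct sample j."""
    gram = x.T @ x
    return np.linalg.solve(gram + lam * np.eye(gram.shape[0]), gram)

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 8))                 # 20 features, 8 samples (columns)
z = self_expressive_coefficients(x)
affinity = (np.abs(z) + np.abs(z.T)) / 2     # symmetric affinity for clustering
print(affinity.shape)
```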
arXiv Detail & Related papers (2020-08-31T08:41:20Z)
- Training Sensitivity in Graph Isomorphism Network [2.487445341407889]
Graph neural networks (GNNs) are a popular tool for learning lower-dimensional representations of graphs.
This paper studies various alternative functions for the respective GIN modules on a diverse set of benchmark datasets.
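
For reference, the GIN update whose modules such studies vary is h_v ← MLP((1 + ε) h_v + Σ_{u∈N(v)} h_u), from the original GIN paper. The one-hidden-layer MLP below is an illustrative stand-in, not a tuned module from this paper.

```python
import numpy as np

def gin_layer(adj, h, eps, mlp):
    """One GIN update: combine each node's features, scaled by (1 + eps),
    with the sum of its neighbours' features, then apply an MLP."""
    return mlp((1.0 + eps) * h + adj @ h)

rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=(16, 32)), rng.normal(size=(32, 16))
mlp = lambda x: np.maximum(x @ w1, 0.0) @ w2   # illustrative ReLU MLP

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
h = rng.normal(size=(3, 16))
print(gin_layer(adj, h, eps=0.1, mlp=mlp).shape)
```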
arXiv Detail & Related papers (2020-08-19T03:50:28Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.