Characteristic Functions on Graphs: Birds of a Feather, from Statistical Descriptors to Parametric Models
- URL: http://arxiv.org/abs/2005.07959v2
- Date: Sun, 16 Aug 2020 16:21:09 GMT
- Title: Characteristic Functions on Graphs: Birds of a Feather, from Statistical Descriptors to Parametric Models
- Authors: Benedek Rozemberczki and Rik Sarkar
- Abstract summary: We introduce FEATHER, a computationally efficient algorithm to calculate a specific variant of characteristic functions.
We argue that features extracted by FEATHER are useful for node-level machine learning tasks.
Experiments on large real-world datasets show that the proposed algorithm creates high-quality representations.
- Score: 8.147652597876862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a flexible notion of characteristic functions
defined on graph vertices to describe the distribution of vertex features at
multiple scales. We introduce FEATHER, a computationally efficient algorithm to
calculate a specific variant of these characteristic functions where the
probability weights of the characteristic function are defined as the
transition probabilities of random walks. We argue that features extracted by
this procedure are useful for node-level machine learning tasks. We discuss the
pooling of these node representations, resulting in compact descriptors of
graphs that can serve as features for graph classification algorithms. We
analytically prove that FEATHER describes isomorphic graphs with the same
representation and exhibits robustness to data corruption. Using the node
feature characteristic functions we define parametric models where evaluation
points of the functions are learned parameters of supervised classifiers.
Experiments on large real-world datasets show that our proposed algorithm
creates high-quality representations, performs transfer learning efficiently,
exhibits robustness to hyperparameter changes, and scales linearly with the
input size.
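To make the construction concrete, here is a minimal NumPy sketch of the r-scale random-walk characteristic function described above, assuming a dense adjacency matrix `A` with no isolated nodes and a single real-valued node feature vector `x`; all names are illustrative rather than taken from the authors' reference implementation.

```python
import numpy as np

def feather_node_features(A, x, thetas, order=5):
    """For each node u, scale r <= order, and evaluation point theta, compute
    sum_w P^r[u, w] * exp(i * theta * x_w) with P = D^{-1} A, and stack the
    real and imaginary parts across all scales as the node embedding."""
    deg = A.sum(axis=1)
    P = A / deg[:, None]                  # random-walk transition matrix D^{-1} A
    grid = np.outer(x, thetas)            # grid[w, j] = x_w * theta_j
    Pr = np.eye(A.shape[0])
    blocks = []
    for _ in range(order):
        Pr = Pr @ P                       # transition probabilities at scale r
        blocks.append(Pr @ np.cos(grid))  # real part of the characteristic function
        blocks.append(Pr @ np.sin(grid))  # imaginary part
    return np.concatenate(blocks, axis=1)

# Mean pooling over nodes gives the compact graph descriptor discussed above:
# graph_descriptor = feather_node_features(A, x, thetas).mean(axis=0)
```

In the parametric models mentioned at the end of the abstract, the evaluation grid `thetas` would be a learnable parameter of the supervised classifier rather than a fixed input.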
Related papers
- Linear Transformer Topological Masking with Graph Random Features [52.717865653036796]
We show how to parameterise topological masks as a learnable function of a weighted adjacency matrix.
Our efficient masking algorithms provide strong performance gains for tasks on image and point cloud data.
arXiv Detail & Related papers (2024-10-04T14:24:06Z)
- Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
arXiv Detail & Related papers (2023-05-31T11:04:53Z)
- Creating generalizable downstream graph models with random projections [22.690120515637854]
We investigate graph representation learning approaches that enable models to generalize across graphs.
We show that using random projections to estimate multiple powers of the transition matrix allows us to build a set of isomorphism-invariant features.
The resulting features recover enough information about a node's local neighborhood to support inference that is competitive with other approaches (a rough sketch of the idea follows this entry).
arXiv Detail & Related papers (2023-02-17T14:27:00Z)
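As a rough sketch of the random-projection idea in the entry above (an assumption-laden simplification, not the paper's exact construction; all names are hypothetical): the powers of the transition matrix are never formed explicitly; instead, a shared random projection is propagated through repeated matrix products.

```python
import numpy as np

def projected_transition_powers(A, dim=64, num_powers=3, seed=0):
    """Estimate node features from several powers of the random-walk
    transition matrix via one shared Gaussian random projection."""
    rng = np.random.default_rng(seed)
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    M = rng.standard_normal((A.shape[0], dim)) / np.sqrt(dim)
    blocks = []
    for _ in range(num_powers):
        M = P @ M                          # P^k G without ever forming P^k
        blocks.append(M)
    return np.concatenate(blocks, axis=1)  # features from P, P^2, ..., P^k
```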
- Neural Eigenfunctions Are Structured Representation Learners [93.53445940137618]
This paper introduces a structured, adaptive-length deep representation called Neural Eigenmap.
We show that, when the eigenfunction is derived from positive relations in a data augmentation setup, applying NeuralEF results in an objective function that resembles those of popular self-supervised learning methods.
We demonstrate using such representations as adaptive-length codes in image retrieval systems.
arXiv Detail & Related papers (2022-10-23T07:17:55Z)
- The PWLR Graph Representation: A Persistent Weisfeiler-Lehman scheme with Random Walks for Graph Classification [0.6999740786886536]
We propose the Persistent Weisfeiler-Lehman Random walk scheme (abbreviated as PWLR) for graph representations.
We generalize many variants of Weisfeiler-Lehman procedures, which are primarily used to embed graphs with discrete node labels (a sketch of the classical WL step follows this entry).
arXiv Detail & Related papers (2022-08-29T08:50:37Z)
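For context, PWLR builds on classical Weisfeiler-Lehman color refinement; below is a minimal sketch of one standard refinement step (the building block only, not the PWLR scheme itself, which the entry above combines with random walks).

```python
import hashlib

def wl_refinement_step(colors, adjacency):
    """One round of Weisfeiler-Lehman color refinement.
    colors: dict node -> label; adjacency: dict node -> list of neighbors."""
    new_colors = {}
    for node, neighbors in adjacency.items():
        # A node's new label hashes its old label together with the
        # sorted multiset of its neighbors' labels.
        signature = str((colors[node], sorted(colors[n] for n in neighbors)))
        new_colors[node] = hashlib.sha1(signature.encode()).hexdigest()[:8]
    return new_colors
```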
- Accurate Node Feature Estimation with Structured Variational Graph Autoencoder [21.436706159840014]
Given a graph with partial observations of node features, how can we estimate the missing features accurately?
We propose SVGA (Structured Variational Graph Autoencoder), an accurate method for feature estimation.
SVGA combines the advantages of probabilistic inference and graph neural networks, achieving state-of-the-art performance on real datasets.
arXiv Detail & Related papers (2022-06-09T14:07:45Z)
- SSFG: Stochastically Scaling Features and Gradients for Regularizing Graph Convolution Networks [7.075802972628797]
Repeatedly applying graph convolutions can cause oversmoothing, where node representations become nearly indistinguishable.
We present a regularization method to address this issue.
Our method effectively improves the overall performance of baseline graph networks (an illustrative sketch follows this entry).
arXiv Detail & Related papers (2021-02-20T12:59:48Z)
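A hedged sketch of what stochastically scaling features and gradients can look like as a PyTorch autograd function; the class name and the uniform sampling around 1 are illustrative assumptions, not the paper's exact scheme.

```python
import torch

class SSFGScale(torch.autograd.Function):
    # Scales features in the forward pass and gradients in the backward
    # pass by independently sampled random factors (training-time only).

    @staticmethod
    def forward(ctx, x, low=0.5, high=1.5):
        ctx.low, ctx.high = low, high
        s = torch.empty((), device=x.device).uniform_(low, high)
        return x * s                           # randomly rescale features

    @staticmethod
    def backward(ctx, grad_output):
        s = torch.empty((), device=grad_output.device).uniform_(ctx.low, ctx.high)
        return grad_output * s, None, None     # independently rescale gradients

# Usage after a graph convolution during training: h = SSFGScale.apply(h)
```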
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- FuDGE: A Method to Estimate a Functional Differential Graph in a High-Dimensional Setting [17.104487467949113]
We consider the problem of estimating the difference between two undirected functional graphical models with shared structures.
We first define a functional differential graph that captures the differences between two functional graphical models.
We then propose a method, FuDGE, that directly estimates the functional differential graph without first estimating each individual graph.
arXiv Detail & Related papers (2020-03-11T16:47:22Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
However, the number of possible interactions grows combinatorially with the number of features; to alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
- Invariant Feature Coding using Tensor Product Representation [75.62232699377877]
We prove that the group-invariant feature vector contains sufficient discriminative information when learning a linear classifier.
A novel feature model that explicitly considers group actions is proposed for principal component analysis and k-means clustering.
arXiv Detail & Related papers (2019-06-05T07:15:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.