Semi-Supervised Learning on Graphs with Feature-Augmented Graph Basis Functions
- URL: http://arxiv.org/abs/2003.07646v1
- Date: Tue, 17 Mar 2020 11:21:43 GMT
- Title: Semi-Supervised Learning on Graphs with Feature-Augmented Graph Basis Functions
- Authors: Wolfgang Erb
- Abstract summary: We study how initial kernels in a supervised learning regime can be augmented with additional information from known priors or from unsupervised learning outputs.
As generators of the positive definite kernels, we will focus on graph basis functions (GBFs), which allow geometric information of the graph to be included.
Using a regularized least squares (RLS) approach for machine learning, we will test the derived augmented kernels for the classification of data on graphs.
- Score: 7.6146285961466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For semi-supervised learning on graphs, we study how initial kernels in a
supervised learning regime can be augmented with additional information from
known priors or from unsupervised learning outputs. These augmented kernels are
constructed in a simple update scheme based on the Schur-Hadamard product of
the kernel with additional feature kernels. As generators of the positive
definite kernels, we will focus on graph basis functions (GBFs), which allow
geometric information of the graph to be included via the graph Fourier
transform.
Using a regularized least squares (RLS) approach for machine learning, we will
test the derived augmented kernels for the classification of data on graphs.
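As a rough illustration of the scheme just described (a minimal sketch, not the paper's exact construction: the spectral filter `f`, the regularization weight `gamma`, and the `{-1, +1}` label encoding are assumptions), the whole pipeline fits in a few lines of numpy:

```python
import numpy as np

def gbf_kernel(L, f):
    """Positive definite kernel from a graph basis function: apply a
    nonnegative spectral filter f to the Laplacian eigenvalues, i.e.
    K = U f(Lambda) U^T in the graph Fourier basis."""
    lam, U = np.linalg.eigh(L)
    return U @ np.diag(f(lam)) @ U.T

def augment_kernel(K, feature_kernels):
    """Schur-Hadamard update: entrywise products with additional
    positive definite feature kernels keep the result positive
    definite (Schur product theorem)."""
    for F in feature_kernels:
        K = K * F
    return K

def rls_classify(K, labeled, y, gamma=1e-2):
    """Kernel regularized least squares: solve on the labeled nodes,
    then evaluate the fitted function on every node of the graph."""
    K_LL = K[np.ix_(labeled, labeled)]
    c = np.linalg.solve(K_LL + gamma * len(labeled) * np.eye(len(labeled)), y)
    return np.sign(K[:, labeled] @ c)
```

For instance, `f = lambda lam: np.exp(-lam)` gives a heat-kernel-style GBF, and a feature kernel `F` built from known priors or unsupervised cluster indicators can be folded in with `augment_kernel(K, [F])` before calling `rls_classify`.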
Related papers
- Cell Graph Transformer for Nuclei Classification [78.47566396839628]
We develop a cell graph transformer (CGT) that treats nodes and edges as input tokens to enable learnable adjacency and information exchange among all nodes.
Poorly initialized features can lead to noisy self-attention scores and inferior convergence.
We propose a novel topology-aware pretraining method that leverages a graph convolutional network (GCN) to learn a feature extractor.
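A rough sketch of the nodes-and-edges-as-tokens idea (hypothetical shapes, a single unmasked attention head, and node and edge features assumed projected to a common width; the actual CGT architecture and its pretraining are more involved):

```python
import numpy as np

def cgt_tokens(node_feats, edge_feats):
    """One token per node and one per edge; stacking them lets
    self-attention exchange information among all nodes and edges."""
    return np.vstack([node_feats, edge_feats])

def self_attention(tokens, Wq, Wk, Wv):
    """Single unmasked attention head: the softmax score matrix acts
    as a learnable, fully dense 'adjacency' over all tokens."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    s = Q @ K.T / np.sqrt(K.shape[1])
    a = np.exp(s - s.max(axis=1, keepdims=True))
    return (a / a.sum(axis=1, keepdims=True)) @ V
```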
arXiv Detail & Related papers (2024-02-20T12:01:30Z)
- Gaussian process regression with Sliced Wasserstein Weisfeiler-Lehman graph kernels [0.0]
Supervised learning has recently garnered significant attention in the field of computational physics.
Traditionally, such datasets consist of inputs given as meshes with a large number of nodes representing the problem geometry.
This means the supervised learning model must be able to handle large and sparse graphs with continuous node attributes.
arXiv Detail & Related papers (2024-02-06T09:35:40Z)
- General Graph Random Features [42.75616308187867]
We propose a novel random walk-based algorithm for unbiased estimation of arbitrary functions of a weighted adjacency matrix.
Our algorithm enjoys subquadratic time complexity with respect to the number of nodes, overcoming the notoriously prohibitive cubic scaling of exact graph kernel evaluation.
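The following toy estimator conveys the flavor of such random-walk methods (it is not the paper's algorithm): an unbiased Monte Carlo estimate of one entry of A^k x for a weighted adjacency matrix A. Estimates like this can be combined over k, e.g. with randomized truncation, to approximate power series f(A) = sum_k c_k A^k; the walk count and dense-matrix storage here are simplifications.

```python
import numpy as np

def rw_power_entry(adj, x, k, i, n_walks=2000, seed=0):
    """Estimate (A^k x)[i] by k-step random walks from node i,
    reweighting each uniform neighbor choice so the estimator
    stays unbiased for the weighted adjacency matrix A."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walks):
        v, w = i, 1.0
        for _ in range(k):
            nbrs = np.flatnonzero(adj[v])
            if nbrs.size == 0:          # dangling node: walk contributes 0
                w = 0.0
                break
            u = rng.choice(nbrs)
            w *= adj[v, u] * nbrs.size  # importance weight for uniform sampling
            v = u
        total += w * x[v]
    return total / n_walks
```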
arXiv Detail & Related papers (2023-10-07T15:47:31Z)
- Transductive Kernels for Gaussian Processes on Graphs [7.542220697870243]
We present a novel kernel for graphs with node feature data for semi-supervised learning.
The kernel is derived from a regularization framework by treating the graph and feature data as two spaces.
We show how numerous kernel-based models on graphs are instances of our design.
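One plausible instance of such a two-space construction (a sketch under assumed choices, not the paper's derived kernel: `beta`, the lengthscale, and the entrywise combination are illustrative):

```python
import numpy as np

def two_space_kernel(L, X, beta=1.0, lengthscale=1.0):
    """Sketch of a graph-plus-features kernel: a Laplacian smoother on
    the graph space times an RBF kernel on the node-feature space."""
    n = L.shape[0]
    K_graph = np.linalg.inv(np.eye(n) + beta * L)        # graph regularizer
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K_feat = np.exp(-d2 / (2.0 * lengthscale ** 2))      # feature-space RBF
    return K_graph * K_feat                              # entrywise product
```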
arXiv Detail & Related papers (2022-11-28T14:00:50Z)
- Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations [94.41860307845812]
Self-supervised learning has recently surged to a new frontier: graph learning.
GraphCL uses a prefabricated prior reflected by the ad-hoc manual selection of graph data augmentations.
We have extended the prefabricated discrete prior in the augmentation set, to a learnable continuous prior in the parameter space of graph generators.
We have leveraged both principles of information minimization (InfoMin) and information bottleneck (InfoBN) to regularize the learned priors.
arXiv Detail & Related papers (2022-01-04T15:49:18Z)
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
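A minimal sketch of a "kernel layer" in this spirit, with a deliberately simple vertex-degree-histogram kernel standing in for the richer graph kernels the architecture can accommodate (the binning and the prototype bank are assumptions):

```python
import numpy as np

def degree_histogram(adj, bins=8, max_degree=16.0):
    """Explicit feature map of a simple vertex-degree-histogram graph
    kernel: k(G, H) = <phi(G), phi(H)>."""
    hist, _ = np.histogram(adj.sum(axis=1), bins=bins, range=(0.0, max_degree))
    return hist / max(1, hist.sum())

def kernel_layer(adj, prototypes):
    """Represent the input graph by its kernel values against a bank of
    prototype graphs, giving a purely structural feature vector."""
    phi = degree_histogram(adj)
    return np.array([degree_histogram(p) @ phi for p in prototypes])
```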
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
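A compact sketch of the contrastive objective typically used in GraphCL-style training, where `z1[i]` and `z2[i]` are embeddings of two augmented views of graph i (the temperature and embedding shapes are assumptions):

```python
import numpy as np
from scipy.special import logsumexp

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss: pull the two views of each graph together and push
    all other embeddings in the batch apart."""
    z = np.vstack([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                  # drop self-similarity
    n = len(z1)
    pos = np.r_[np.arange(n, 2 * n), np.arange(n)]  # each row's positive view
    return -(sim[np.arange(2 * n), pos] - logsumexp(sim, axis=1)).mean()
```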
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low-pass characteristics of balanced graph cut, we propose a new GNN variant named Heatts to encode the input graph into cluster memberships.
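A toy low-pass encoder in this spirit (a truncated Taylor expansion of the heat kernel exp(-tL) applied to node features, with a softmax over assumed cluster columns; not the paper's Heatts architecture):

```python
import numpy as np
from scipy.special import softmax

def heat_encode(L, X, t=1.0, terms=6):
    """Truncated Taylor series of exp(-tL) @ X: each extra term applies
    the Laplacian once more, progressively low-pass filtering X."""
    out, term = X.copy(), X.copy()
    for k in range(1, terms):
        term = (-t / k) * (L @ term)
        out = out + term
    return softmax(out, axis=1)  # rows as soft cluster memberships
```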
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
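A toy version of proximity-based pooling (scores from multi-hop reachability mass followed by top-k selection; the hop count and keep ratio are assumptions, and the paper's strategy is more refined):

```python
import numpy as np

def proximity_topk_pool(adj, X, ratio=0.5, hops=2):
    """Score each node by its multi-hop proximity mass and keep the
    top-scoring fraction, coarsening both the graph and its features."""
    n = len(adj)
    P = np.linalg.matrix_power(adj + np.eye(n), hops)  # multi-hop reachability
    keep = np.argsort(-P.sum(axis=1))[: max(1, int(ratio * n))]
    return adj[np.ix_(keep, keep)], X[keep]
```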
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Convolutional Kernel Networks for Graph-Structured Data [37.13712126432493]
We introduce a family of multilayer graph kernels and establish new links between graph convolutional neural networks and kernel methods.
Our approach generalizes convolutional kernel networks to graph-structured data, by representing graphs as a sequence of kernel feature maps.
Our model can also be trained end-to-end on large-scale data, leading to new types of graph convolutional neural networks.
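The kernel-feature-map machinery can be sketched with a Nystrom-type projection, the standard trick CKN-style models use to keep feature maps finite-dimensional (anchor selection and the regularizer `eps` are assumptions):

```python
import numpy as np

def nystrom_feature_map(K_xa, K_aa, eps=1e-6):
    """Project points onto a set of anchor points so that the inner
    product <phi(x), phi(y)> approximates the kernel k(x, y)."""
    lam, U = np.linalg.eigh(K_aa)
    inv_sqrt = U @ np.diag(1.0 / np.sqrt(np.maximum(lam, eps))) @ U.T
    return K_xa @ inv_sqrt  # rows are finite-dimensional feature maps
```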
arXiv Detail & Related papers (2020-03-11T09:44:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.