Kernel-based Graph Learning from Smooth Signals: A Functional Viewpoint
- URL: http://arxiv.org/abs/2008.10065v1
- Date: Sun, 23 Aug 2020 16:04:23 GMT
- Title: Kernel-based Graph Learning from Smooth Signals: A Functional Viewpoint
- Authors: Xingyue Pu, Siu Lun Chau, Xiaowen Dong and Dino Sejdinovic
- Abstract summary: We propose a novel graph learning framework that incorporates node-side and observation-side information.
We use graph signals as functions in the reproducing kernel Hilbert space associated with a Kronecker product kernel.
We develop a novel graph-based regularisation method which, when combined with the Kronecker product kernel, enables our model to capture both the dependency explained by the graph and the dependency due to graph signals.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of graph learning concerns the construction of an explicit
topological structure revealing the relationship between nodes representing
data entities, which plays an increasingly important role in the success of
many graph-based representations and algorithms in the field of machine
learning and graph signal processing. In this paper, we propose a novel graph
learning framework that incorporates the node-side and observation-side
information, and in particular the covariates that help to explain the
dependency structures in graph signals. To this end, we consider graph signals
as functions in the reproducing kernel Hilbert space associated with a
Kronecker product kernel, and integrate functional learning with
smoothness-promoting graph learning to learn a graph representing the
relationship between nodes. The functional learning increases the robustness of
graph learning against missing and incomplete information in the graph signals.
In addition, we develop a novel graph-based regularisation method which, when
combined with the Kronecker product kernel, enables our model to capture both
the dependency explained by the graph and the dependency due to graph signals
observed under different but related circumstances, e.g. different points in
time. The latter means the graph signals are free from the i.i.d. assumptions
required by the classical graph learning models. Experiments on both synthetic
and real-world data show that our methods outperform state-of-the-art
models in learning a meaningful graph topology from graph signals, in
particular under heavy noise, missing values, and multiple dependencies.
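The core construction in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's exact model: the graph, the pseudo-inverse node kernel, the RBF observation kernel over covariates, and all sizes are illustrative assumptions. It shows the two ingredients the framework couples: a Kronecker product kernel built from a node-side and an observation-side kernel, and the graph-smoothness measure tr(XᵀLX) that smoothness-promoting graph learning penalises.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumed for illustration): n nodes, m observations with covariates t.
n, m = 5, 8
t = np.linspace(0.0, 1.0, m)          # observation-side covariates, e.g. time points
X = rng.standard_normal((n, m))       # graph signals, one column per observation

# Node-side kernel derived from a (here: fixed random) graph Laplacian.
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T        # symmetric adjacency, no self-loops
L = np.diag(A.sum(1)) - A             # combinatorial graph Laplacian
K_node = np.linalg.pinv(L)            # a common choice of graph kernel

# Observation-side kernel: RBF over the covariates.
K_obs = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 0.1 ** 2)

# Kronecker product kernel couples the node side and the observation side.
K = np.kron(K_node, K_obs)            # shape (n*m, n*m)

# Smoothness of the signals on the graph: tr(X^T L X) sums squared edge differences.
smoothness = np.trace(X.T @ L @ X)
print(K.shape, smoothness)
```

In the actual framework the Laplacian above would be a variable optimised jointly with the functional fit, with tr(XᵀLX) acting as the smoothness-promoting term.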
Related papers
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Learning Product Graphs from Spectral Templates [3.04585143845864]
Graph Learning (GL) is at the core of inference and analysis of connections in data mining and machine learning (ML).
We propose learning product (high dimensional) graphs from product spectral templates with significantly reduced complexity.
In contrast to the few existing approaches, our approach can learn all types of product graphs without knowing the type of graph product.
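The structural fact that product-graph methods exploit can be checked directly. The sketch below is an illustrative assumption about the setting (a Cartesian product, small toy graphs), not the paper's algorithm: the Laplacian of a Cartesian product graph is the Kronecker sum of the factor Laplacians, so its spectrum consists of all pairwise sums of the factor eigenvalues — this is what lets spectral templates of a large product graph be reduced to templates of its much smaller factors.

```python
import numpy as np

def laplacian(A):
    """Combinatorial graph Laplacian from an adjacency matrix."""
    return np.diag(A.sum(1)) - A

A1 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])   # path on 3 nodes
A2 = np.array([[0., 1.], [1., 0.]])                          # single edge on 2 nodes
L1, L2 = laplacian(A1), laplacian(A2)

n1, n2 = L1.shape[0], L2.shape[0]
# Cartesian product graph Laplacian = Kronecker sum of the factor Laplacians.
L_cart = np.kron(L1, np.eye(n2)) + np.kron(np.eye(n1), L2)

ev_prod = np.sort(np.linalg.eigvalsh(L_cart))
ev_sum = np.sort((np.linalg.eigvalsh(L1)[:, None]
                  + np.linalg.eigvalsh(L2)[None, :]).ravel())
print(np.allclose(ev_prod, ev_sum))   # the spectra factorise
```

The same factorisation holds (with products instead of sums) for Kronecker product graphs, which is why learning the small factors is so much cheaper than learning the full product graph.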
arXiv Detail & Related papers (2022-11-05T12:28:11Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
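The unrolling idea behind GDNs can be illustrated on a simpler problem. The sketch below is not the paper's trained architecture; it shows, under assumed toy data, the generic pattern GDNs parameterise: a fixed number of proximal-gradient (ISTA) steps for a sparse inverse problem, where each truncated iteration plays the role of one network layer whose step size and threshold could be learned.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(M, y, lam=0.05, depth=200):
    """Fixed-depth proximal gradient for min_a 0.5*||y - M a||^2 + lam*||a||_1."""
    step = 1.0 / np.linalg.norm(M, 2) ** 2   # 1 / Lipschitz constant of the gradient
    a = np.zeros(M.shape[1])
    for _ in range(depth):                   # each iteration = one "layer" of the unrolled net
        grad = M.T @ (M @ a - y)
        a = soft_threshold(a - step * grad, step * lam)
    return a

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 10))            # forward operator (assumed, for illustration)
a_true = np.zeros(10); a_true[[2, 7]] = [1.5, -2.0]
y = M @ a_true                               # noiseless observations
a_hat = unrolled_ista(M, y)
print(np.round(a_hat, 2))
```

In a GDN the fixed step size and threshold above become trainable per-layer parameters, and the linear operator is replaced by the graph-convolutional mixing the paper proposes.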
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Graph-Time Convolutional Neural Networks [9.137554315375919]
We represent spatial relationships through product graphs with a first-principles graph-time convolutional neural network (GTCNN).
We develop a graph-time convolutional filter by following the shift-and-sum temporal operator to learn higher-level features over the product graph.
We develop a zero-pad pooling that preserves the spatial graph while reducing the number of active nodes and parameters.
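A shift-and-sum filter over a product graph can be sketched as follows. The choice of a Cartesian product, the toy graphs, and the coefficient values are illustrative assumptions, not the GTCNN's exact construction: a polynomial filter y = Σₖ hₖ Sᵏ x, where the shift operator S of the space-time product graph mixes each node's value with its spatial and temporal neighbours at every application.

```python
import numpy as np

def shift_and_sum_filter(S, x, h):
    """Polynomial graph filter: y = sum_k h[k] * S^k x."""
    y = np.zeros_like(x)
    Skx = x.copy()                     # S^0 x
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx                  # advance to the next shift S^(k+1) x
    return y

# Spatial graph: a triangle; temporal graph: a path over T = 4 time steps.
A_space = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
T = 4
A_time = np.diag(np.ones(T - 1), 1) + np.diag(np.ones(T - 1), -1)

# Cartesian product shift couples spatial and temporal neighbours.
S = np.kron(A_space, np.eye(T)) + np.kron(np.eye(3), A_time)

x = np.random.default_rng(2).standard_normal(3 * T)   # one value per (node, time) pair
y = shift_and_sum_filter(S, x, h=[0.5, 0.3, 0.2])     # order-2 graph-time filter
print(y.shape)
```

Stacking such filters with pointwise nonlinearities, and learning the coefficients h, gives the convolutional layers of a GTCNN-style architecture.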
arXiv Detail & Related papers (2021-03-02T14:03:44Z)
- GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it using a maximum entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs, that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.