Multi-view adaptive graph convolutions for graph classification
- URL: http://arxiv.org/abs/2007.12450v1
- Date: Fri, 24 Jul 2020 11:14:24 GMT
- Title: Multi-view adaptive graph convolutions for graph classification
- Authors: Nikolas Adaloglou, Nicholas Vretos and Petros Daras
- Abstract summary: A novel multi-view methodology for graph-based neural networks is proposed.
The proposed layers are used in an end-to-end graph neural network architecture for graph classification.
- Score: 20.10169385129154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, a novel multi-view methodology for graph-based neural networks
is proposed. A systematic and methodological adaptation of the key concepts of
classical deep learning methods such as convolution, pooling and multi-view
architectures is developed for the context of non-Euclidean manifolds. The aim
of the proposed work is to present a novel multi-view graph convolution layer,
as well as a new view pooling layer making use of: a) a new hybrid Laplacian
that is adjusted based on feature distance metric learning, b) multiple
trainable representations of a feature matrix of a graph, using trainable
distance matrices, adapting the notion of views to graphs and c) a multi-view
graph aggregation scheme called graph view pooling, in order to synthesise
information from the multiple generated views. The aforementioned layers are
used in an end-to-end graph neural network architecture for graph
classification and show competitive results to other state-of-the-art methods.
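The three ingredients above (a hybrid Laplacian adjusted by feature distances, trainable per-view distance matrices, and graph view pooling) can be sketched roughly as follows. This is a minimal NumPy illustration under assumed design choices (a Mahalanobis-style feature distance, a Gaussian similarity kernel, and element-wise max pooling across views); all function and parameter names are hypothetical, and the paper's exact layer definitions may differ.

```python
import numpy as np

def feature_adjacency(X, W):
    """Feature-based adjacency from a learnable distance metric.

    Assumed form: squared Mahalanobis distance with M = W W^T (PSD by
    construction), mapped to similarities by a Gaussian kernel.
    """
    M = W @ W.T                                       # positive semi-definite metric
    diff = X[:, None, :] - X[None, :, :]              # pairwise differences, (n, n, d)
    dist = np.einsum('ijk,kl,ijl->ij', diff, M, diff) # squared Mahalanobis distances
    return np.exp(-dist)                              # Gaussian similarity kernel

def hybrid_laplacian(A_struct, A_feat, alpha=0.5):
    """Blend structural and feature-based adjacencies, then normalise."""
    A = alpha * A_struct + (1.0 - alpha) * A_feat
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg + 1e-8))
    return D_inv_sqrt @ A @ D_inv_sqrt                # normalised propagation operator

def multi_view_gcn_layer(X, A_struct, view_metrics, Theta):
    """One multi-view graph convolution: one propagation per trainable view."""
    views = []
    for W in view_metrics:                            # each W parameterises one view
        A_feat = feature_adjacency(X, W)
        L_hat = hybrid_laplacian(A_struct, A_feat)
        views.append(np.maximum(L_hat @ X @ Theta, 0.0))  # graph conv + ReLU
    return np.stack(views)                            # (n_views, n, d_out)

def view_pooling(view_stack):
    """Graph view pooling: element-wise max across the generated views."""
    return view_stack.max(axis=0)

# Toy usage on a random graph.
rng = np.random.default_rng(0)
n, d, d_out, n_views = 6, 4, 3, 2
X = rng.standard_normal((n, d))
A_struct = (rng.random((n, n)) < 0.4).astype(float)
A_struct = np.maximum(A_struct, A_struct.T)           # symmetrise
np.fill_diagonal(A_struct, 1.0)                       # self-loops
view_metrics = [rng.standard_normal((d, d)) for _ in range(n_views)]
Theta = rng.standard_normal((d, d_out))

H = view_pooling(multi_view_gcn_layer(X, A_struct, view_metrics, Theta))
print(H.shape)  # one pooled node embedding matrix, (n, d_out)
```

In an actual architecture, `W` per view and `Theta` would be learned end-to-end, and the pooled node embeddings would feed a readout layer for graph-level classification.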
Related papers
- Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z)
- Isomorphic-Consistent Variational Graph Auto-Encoders for Multi-Level Graph Representation Learning [9.039193854524763]
We propose the Isomorphic-Consistent VGAE (IsoC-VGAE) for task-agnostic graph representation learning.
We first devise a decoding scheme to provide a theoretical guarantee of keeping the isomorphic consistency.
We then propose the Inverse Graph Neural Network (Inv-GNN) decoder as its intuitive realization.
arXiv Detail & Related papers (2023-12-09T10:16:53Z)
- A Comprehensive Survey on Deep Graph Representation Learning [26.24869157855632]
Graph representation learning aims to encode high-dimensional sparse graph-structured data into low-dimensional dense vectors.
Traditional methods have limited model capacity, which constrains learning performance.
Deep graph representation learning has shown great potential and advantages over shallow (traditional) methods.
arXiv Detail & Related papers (2023-04-11T08:23:52Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Multi-view graph structure learning using subspace merging on Grassmann manifold [4.039245878626346]
We introduce a new graph structure learning approach using multi-view learning, named MV-GSL (Multi-View Graph Structure Learning).
We aggregate different graph structure learning methods using subspace merging on Grassmann manifold to improve the quality of the learned graph structures.
Our experiments show that the proposed method has promising performance compared to single and other combined graph structure learning methods.
arXiv Detail & Related papers (2022-04-11T17:01:05Z)
- Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between graphs of different views via minimisation of the tensor Schatten p-norm.
Our proposed algorithm is time-economical, obtains stable results, and scales well with the data size.
arXiv Detail & Related papers (2021-08-15T13:14:28Z)
- Diversified Multiscale Graph Learning with Graph Self-Correction [55.43696999424127]
We propose a diversified multiscale graph learning model equipped with two core ingredients.
A graph self-correction (GSC) mechanism to generate informative embedded graphs, and a diversity boosting regularizer (DBR) to achieve a comprehensive characterization of the input graph.
Experiments on popular graph classification benchmarks show that the proposed GSC mechanism leads to significant improvements over state-of-the-art graph pooling methods.
arXiv Detail & Related papers (2021-03-17T16:22:24Z)
- Multiview Variational Graph Autoencoders for Canonical Correlation Analysis [23.30313704251483]
We present a novel multiview canonical correlation analysis model based on a variational approach.
This is the first nonlinear model that takes into account the available graph-based geometric constraints.
It is scalable for processing large scale datasets with multiple views.
arXiv Detail & Related papers (2020-10-30T09:08:05Z)
- Representation Learning of Graphs Using Graph Convolutional Multilayer Networks Based on Motifs [17.823543937167848]
mGCMN is a novel framework which utilizes node feature information and the higher order local structure of the graph.
It greatly improves the learning efficiency of the graph neural network and promotes the establishment of a brand-new learning mode.
arXiv Detail & Related papers (2020-07-31T04:18:20Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs, that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.