GRAF: Graph Attention-aware Fusion Networks
- URL: http://arxiv.org/abs/2303.16781v2
- Date: Thu, 17 Aug 2023 16:41:03 GMT
- Title: GRAF: Graph Attention-aware Fusion Networks
- Authors: Ziynet Nesibe Kesimoglu, Serdar Bozdag
- Abstract summary: A large number of real-world networks include multiple types of nodes and edges.
Graph Neural Networks (GNNs) have emerged as a deep learning framework to generate node and graph embeddings for downstream machine learning tasks.
We present a computational approach named GRAF (Graph Attention-aware Fusion Networks) utilizing GNN-based approaches on multiple networks.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A large number of real-world networks include multiple types of nodes and
edges. Graph Neural Networks (GNNs) have emerged as a deep learning framework
to generate node and graph embeddings for downstream machine learning tasks.
However, popular GNN-based architectures operate on a single homogeneous
network, and enabling them to work on multiple networks brings additional
challenges due to the heterogeneity of the networks and the multiplicity of the
existing associations. In this study, we present a computational approach named
GRAF (Graph Attention-aware Fusion Networks) that utilizes GNN-based approaches
on multiple networks with the help of attention mechanisms and network fusion.
Using attention-based neighborhood aggregation, GRAF learns the importance of
each neighbor per node (called node-level attention) and the importance of each
association (called association-level attention). GRAF then performs a network
fusion step, weighting each edge according to the learned node- and
association-level attentions. Because the fused network can be highly dense
with many weak edges, depending on the given input networks, we include an edge
elimination step based on edge weights. Finally, GRAF applies a Graph
Convolutional Network (GCN) to the fused network, incorporating node features
for node classification or a similar downstream task. To demonstrate GRAF's
generalizability, we applied it to four datasets from different domains and
observed that GRAF outperformed or was on par with the baselines,
state-of-the-art methods, and its own variations on each node classification
task. Source code for our tool is publicly available at
https://github.com/bozdaglab/GRAF .
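As a rough illustration of the pipeline described in the abstract (fusing per-network attentions into one weighted network, pruning weak edges, then running a GCN), here is a minimal NumPy sketch. It is not taken from the GRAF code base; the attention inputs, the keep_ratio parameter, and all helper names are assumptions made for illustration only.

import numpy as np

def fuse_networks(attn_list, assoc_weights):
    """Fuse multiple networks into one weighted adjacency matrix.

    attn_list     : list of (n, n) arrays; attn_list[r][i, j] is the node-level
                    attention of neighbor j for node i in network r
                    (hypothetical inputs; in GRAF these come from a GAT-style layer).
    assoc_weights : length-R array of association-level attentions.
    """
    fused = np.zeros_like(attn_list[0])
    for alpha_r, beta_r in zip(attn_list, assoc_weights):
        fused += beta_r * alpha_r          # weight each edge by both attention levels
    return fused

def eliminate_weak_edges(fused, keep_ratio=0.2):
    """Keep only the strongest edges; keep_ratio is an assumed hyperparameter."""
    positive = fused[fused > 0]
    if positive.size == 0:
        return fused
    threshold = np.quantile(positive, 1.0 - keep_ratio)
    return np.where(fused >= threshold, fused, 0.0)

def gcn_layer(adj, features, weight):
    """One GCN propagation step: symmetric normalization, then A_hat @ X @ W with ReLU."""
    adj_hat = adj + np.eye(adj.shape[0])                      # add self-loops
    deg = adj_hat.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    norm_adj = d_inv_sqrt @ adj_hat @ d_inv_sqrt
    return np.maximum(norm_adj @ features @ weight, 0.0)

# Toy usage: 5 nodes, 2 input networks, 4-dim node features.
rng = np.random.default_rng(0)
attn_list = [rng.random((5, 5)) for _ in range(2)]
fused = fuse_networks(attn_list, assoc_weights=np.array([0.6, 0.4]))
pruned = eliminate_weak_edges(fused, keep_ratio=0.3)
hidden = gcn_layer(pruned, rng.random((5, 4)), rng.random((4, 8)))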
Related papers
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection
  Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
  We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
  arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- Network Alignment with Transferable Graph Autoencoders
  We propose a novel graph autoencoder architecture designed to extract powerful and robust node embeddings.
  We prove that the generated embeddings are associated with the eigenvalues and eigenvectors of the graphs.
  Our proposed framework also leverages transfer learning and data augmentation to achieve efficient network alignment at a very large scale without retraining.
  arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- Collaborative Graph Neural Networks for Attributed Network Embedding
  Graph neural networks (GNNs) have shown prominent performance on attributed network embedding.
  We propose COllaborative graph Neural Networks (CONN), a tailored GNN architecture for network embedding.
  arXiv Detail & Related papers (2023-07-22T04:52:27Z)
- Learning to Identify Graphs from Node Trajectories in Multi-Robot Networks
  We propose a learning-based approach that efficiently uncovers graph topologies with global convergence guarantees.
  We demonstrate the effectiveness of our approach in identifying graphs in multi-robot formation and flocking tasks.
  arXiv Detail & Related papers (2023-07-10T07:09:12Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification
  We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
  The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
  Experiments demonstrate the promising efficacy of the method in various tasks, including node classification on graphs.
  arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- KGNN: Harnessing Kernel-based Networks for Semi-supervised Graph Classification
  We propose a Kernel-based Graph Neural Network (KGNN) for semi-supervised graph classification.
  We show that KGNN achieves impressive performance over competitive baselines.
  arXiv Detail & Related papers (2022-05-21T10:03:46Z)
- AdaGNN: A multi-modal latent representation meta-learner for GNNs based on AdaBoosting
  Graph Neural Networks (GNNs) focus on extracting intrinsic network features.
  We propose a boosting-based meta-learner for GNNs.
  AdaGNN performs exceptionally well for applications with rich and diverse node neighborhood information.
  arXiv Detail & Related papers (2021-08-14T03:07:26Z)
- Edge-Featured Graph Attention Network
  We present edge-featured graph attention networks (EGATs) to extend the use of graph neural networks to tasks that learn on graphs with both node and edge features.
  By reforming the model structure and the learning process, the new models can accept node and edge features as inputs, incorporate the edge information into feature representations, and iterate both node and edge features in a parallel but mutual way.
  arXiv Detail & Related papers (2021-01-19T15:08:12Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks
  Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
  Instead of using the same path of the network, DG-Net aggregates features dynamically in each node, which allows the network to have more representation ability.
  arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- EdgeNets: Edge Varying Graph Neural Networks
  This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
  An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
  This is a general linear and local operation that a node can perform, and it encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs).
  arXiv Detail & Related papers (2020-01-21T15:51:17Z)
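To make the EdgeNet summary above concrete, a generic edge-varying aggregation step can be sketched as follows; the notation is ours, not the paper's, and is only an illustration of the idea.

\[
  \mathbf{x}_i' \;=\; \sigma\!\Big(\sum_{j \in \mathcal{N}(i)\cup\{i\}} \phi_{ij}\, \mathbf{x}_j\, W\Big)
\]

Here each node i weighs neighbor j with its own coefficient \phi_{ij}. Fixing \phi_{ij} to normalized adjacency entries recovers a GCN-style layer, while computing \phi_{ij} as learned attention scores recovers a GAT-style layer, which is the sense in which both families fall under one formulation.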