DUPLEX: Dual GAT for Complex Embedding of Directed Graphs
- URL: http://arxiv.org/abs/2406.05391v2
- Date: Fri, 19 Jul 2024 12:04:29 GMT
- Title: DUPLEX: Dual GAT for Complex Embedding of Directed Graphs
- Authors: Zhaoru Ke, Hang Yu, Jianguo Li, Haipeng Zhang
- Abstract summary: Current directed graph embedding methods build upon undirected techniques but often inadequately capture directed edge information.
We propose DUPLEX, an inductive framework for complex embeddings of directed graphs.
DUPLEX outperforms state-of-the-art models, especially for nodes with sparse connectivity.
- Score: 17.84142263305169
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current directed graph embedding methods build upon undirected techniques but often inadequately capture directed edge information, leading to challenges such as: (1) Suboptimal representations for nodes with low in/out-degrees, due to the insufficient neighbor interactions; (2) Limited inductive ability for representing new nodes post-training; (3) Narrow generalizability, as training is overly coupled with specific tasks. In response, we propose DUPLEX, an inductive framework for complex embeddings of directed graphs. It (1) leverages Hermitian adjacency matrix decomposition for comprehensive neighbor integration, (2) employs a dual GAT encoder for directional neighbor modeling, and (3) features two parameter-free decoders to decouple training from particular tasks. DUPLEX outperforms state-of-the-art models, especially for nodes with sparse connectivity, and demonstrates robust inductive capability and adaptability across various tasks. The code is available at https://github.com/alipay/DUPLEX.
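The Hermitian adjacency matrix mentioned in the abstract can be sketched concretely. Below is a minimal illustrative construction (not the authors' code; the function name and toy graph are assumptions), using the standard convention that a reciprocal edge contributes a real 1 while a one-directional edge contributes ±i, so direction is carried in the complex phase:

```python
import numpy as np

def hermitian_adjacency(n, edges):
    """Build the Hermitian adjacency matrix of a directed graph.

    H[u, v] = 1   if both u->v and v->u exist,
    H[u, v] = i   if only u->v exists,
    H[u, v] = -i  if only v->u exists,
    H[u, v] = 0   otherwise.
    """
    H = np.zeros((n, n), dtype=complex)
    edge_set = set(edges)
    for u, v in edge_set:
        if (v, u) in edge_set:
            H[u, v] = 1.0          # reciprocal pair: real entry
        else:
            H[u, v] = 1j           # one-directional: imaginary phase
            H[v, u] = -1j          # conjugate entry keeps H Hermitian
    return H

# Toy directed graph: 0->1 (one-directional), 1<->2 (reciprocal).
H = hermitian_adjacency(3, [(0, 1), (1, 2), (2, 1)])
assert np.allclose(H, H.conj().T)  # Hermitian by construction
```

Because H equals its own conjugate transpose, it admits a spectral decomposition with real eigenvalues, which is what makes it a natural target for the amplitude/phase decomposition the paper builds on.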
Related papers
- VecFormer: Towards Efficient and Generalizable Graph Transformer with Graph Token Attention [61.96837866507746]
VecFormer is an efficient and highly generalizable model for node classification. VecFormer outperforms existing Graph Transformers in both performance and speed.
arXiv Detail & Related papers (2026-02-23T09:10:39Z) - LION: A Clifford Neural Paradigm for Multimodal-Attributed Graph Learning [36.90213853456115]
We propose LION to implement alignment-then-fusion in multimodal-attributed graphs. We first construct a modality-aware geometric manifold grounded in Clifford algebra. This geometric-induced high-order graph propagation efficiently achieves modality interaction, facilitating modality alignment.
arXiv Detail & Related papers (2026-01-29T09:30:36Z) - DDFI: Diverse and Distribution-aware Missing Feature Imputation via Two-step Reconstruction [22.492502807174237]
DDFI is a Diverse and Distribution-aware Missing Feature Imputation method. It combines feature propagation with a graph-based Masked AutoEncoder. It outperforms state-of-the-art methods under both transductive and inductive settings.
arXiv Detail & Related papers (2025-12-06T09:06:08Z) - Connectivity-Guided Sparsification of 2-FWL GNNs: Preserving Full Expressivity with Improved Efficiency [15.330129666665927]
We propose Co-Sparsify, a connectivity-aware sparsification framework. Our key insight is that 3-node interactions are expressively necessary only within biconnected components. We prove that Co-Sparsify is as expressive as the 2-FWL test.
arXiv Detail & Related papers (2025-11-16T23:46:54Z) - Fast State-Augmented Learning for Wireless Resource Allocation with Dual Variable Regression [83.27791109672927]
We show how a state-augmented graph neural network (GNN) parametrization for the resource allocation policy circumvents the drawbacks of the ubiquitous dual subgradient methods. Lagrangian maximizing state-augmented policies are learned during the offline training phase. We prove a convergence result and an exponential probability bound on the excursions of the dual function (iterate) optimality gaps.
arXiv Detail & Related papers (2025-06-23T15:20:58Z) - Learning Efficient and Generalizable Graph Retriever for Knowledge-Graph Question Answering [75.12322966980003]
Large Language Models (LLMs) have shown strong inductive reasoning ability across various domains. Most existing RAG pipelines rely on unstructured text, limiting interpretability and structured reasoning. Recent studies have explored integrating knowledge graphs with LLMs for knowledge graph question answering. We propose RAPL, a novel framework for efficient and effective graph retrieval in KGQA.
arXiv Detail & Related papers (2025-06-11T12:03:52Z) - Simplifying DINO via Coding Rate Regularization [74.88963795406733]
DINO and DINOv2 are two model families being widely used to learn representations from unlabeled imagery data at large scales.
This work highlights the potential of using simplifying design principles to improve the empirical practice of deep learning.
arXiv Detail & Related papers (2025-02-14T18:58:04Z) - HG-Adapter: Improving Pre-Trained Heterogeneous Graph Neural Networks with Dual Adapters [53.97380482341493]
The "pre-train, prompt-tuning" paradigm has demonstrated impressive performance for tuning pre-trained heterogeneous graph neural networks (HGNNs).
We propose a unified framework that combines two new adapters with potential labeled data extension to improve the generalization of pre-trained HGNN models.
arXiv Detail & Related papers (2024-11-02T06:43:54Z) - InstructG2I: Synthesizing Images from Multimodal Attributed Graphs [50.852150521561676]
We propose a graph context-conditioned diffusion model called InstructG2I.
InstructG2I first exploits the graph structure and multimodal information to conduct informative neighbor sampling.
A Graph-QFormer encoder adaptively encodes the graph nodes into an auxiliary set of graph prompts to guide the denoising process.
arXiv Detail & Related papers (2024-10-09T17:56:15Z) - Boosting Graph Neural Network Expressivity with Learnable Lanczos Constraints [7.605749412696919]
Graph Neural Networks (GNNs) excel in handling graph-structured data but often underperform in link prediction tasks.
We present a novel method to enhance the expressivity of GNNs by embedding induced subgraphs into the graph Laplacian matrix's eigenbasis.
We demonstrate the ability to distinguish graphs that are indistinguishable by 2-WL, while maintaining efficient time complexity.
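The blurb above describes embedding subgraphs into the graph Laplacian's eigenbasis. As a hedged illustration of the general idea (not the paper's actual method; the toy graph and indicator vector are assumptions), one can project a subgraph's node-indicator vector onto the Laplacian eigenbasis:

```python
import numpy as np

# A 4-cycle graph: 0-1-2-3-0.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A         # combinatorial Laplacian L = D - A
eigvals, eigvecs = np.linalg.eigh(L)   # orthonormal eigenbasis (ascending eigenvalues)

# Indicator of an induced subgraph on nodes {0, 1}.
indicator = np.array([1.0, 1.0, 0.0, 0.0])
coords = eigvecs.T @ indicator         # coordinates in the Laplacian eigenbasis
assert np.allclose(eigvecs @ coords, indicator)  # lossless round trip
```

Since the eigenbasis is orthonormal, the projection is invertible; spectral methods exploit these coordinates as structure-aware features rather than the raw indicator.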
arXiv Detail & Related papers (2024-08-22T12:22:00Z) - Bigraph Matching Weighted with Learnt Incentive Function for Multi-Robot Task Allocation [5.248564173595024]
This paper develops a Graph Reinforcement Learning framework to learn the robustness or incentives for a bipartite graph matching approach to Multi-Robot Task Allocation.
The performance of this new bigraph matching approach augmented with a GRL-derived incentive is found to be at par with the original bigraph matching approach.
arXiv Detail & Related papers (2024-03-11T19:55:08Z) - Co-guiding for Multi-intent Spoken Language Understanding [53.30511968323911]
We propose a novel model termed Co-guiding Net, which implements a two-stage framework achieving the mutual guidances between the two tasks.
For the first stage, we propose single-task supervised contrastive learning, and for the second stage, we propose co-guiding supervised contrastive learning.
Experiment results on multi-intent SLU show that our model outperforms existing models by a large margin.
arXiv Detail & Related papers (2023-11-22T08:06:22Z) - Efficient Link Prediction via GNN Layers Induced by Negative Sampling [92.05291395292537]
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories.
First, node-wise architectures pre-compute individual embeddings for each node that are later combined by a simple decoder to make predictions.
Second, edge-wise methods rely on the formation of edge-specific subgraph embeddings to enrich the representation of pair-wise relationships.
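The node-wise vs edge-wise distinction above can be sketched in a few lines. This is a hedged toy illustration (the embeddings, names, and the crude bilinear stand-in for an edge-wise model are all assumptions, not any paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8
# Node-wise style: one embedding per node, computed once and reused.
Z = rng.normal(size=(n, d))

def node_wise_score(u, v):
    # Decoder combines two independently precomputed embeddings
    # (here a plain inner product).
    return float(Z[u] @ Z[v])

def edge_wise_score(u, v, W):
    # Edge-wise style builds a representation specific to the pair;
    # here a crude stand-in: a linear form over concatenated features.
    pair = np.concatenate([Z[u], Z[v]])
    return float(pair @ W)

W = rng.normal(size=2 * d)
s_node = node_wise_score(0, 1)
s_edge = edge_wise_score(0, 1, W)
```

The trade-off the taxonomy implies: node-wise scoring is cheap at inference (embeddings are amortized across all candidate pairs), while edge-wise scoring recomputes per pair but can capture pair-specific structure the factorized form misses.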
arXiv Detail & Related papers (2023-10-14T07:02:54Z) - Graph Representation Learning Beyond Node and Homophily [2.8417100723094357]
This paper proposes PairE, a novel unsupervised graph embedding method using two paired nodes.
A multi-self-supervised autoencoder is designed to fulfill two pretext tasks: one retains the high-frequency signal better, and another enhances the representation of commonality.
Our experiments show that PairE outperforms the unsupervised state-of-the-art baselines.
arXiv Detail & Related papers (2022-03-03T08:27:09Z) - Uniting Heterogeneity, Inductiveness, and Efficiency for Graph Representation Learning [68.97378785686723]
Graph neural networks (GNNs) have greatly advanced the performance of node representation learning on graphs.
A majority class of GNNs are only designed for homogeneous graphs, leading to inferior adaptivity to the more informative heterogeneous graphs.
We propose a novel inductive, meta path-free message passing scheme that packs up heterogeneous node features with their associated edges from both low- and high-order neighbor nodes.
arXiv Detail & Related papers (2021-04-04T23:31:39Z) - Pair-view Unsupervised Graph Representation Learning [2.8650714782703366]
Low-dimension graph embeddings have proved extremely useful in various downstream tasks in large graphs.
This paper proposes PairE, a solution that uses a "pair", a higher-level unit than a "node", as the core for graph embeddings.
Experiment results show that PairE consistently outperforms state-of-the-art baselines in all four downstream tasks.
arXiv Detail & Related papers (2020-12-11T04:09:47Z) - Inductive Link Prediction for Nodes Having Only Attribute Information [21.714834749122137]
In attributed graphs, both the structure and attribute information can be utilized for link prediction.
We propose a model called DEAL, which consists of three components: two node embedding encoders and one alignment mechanism.
Our proposed model significantly outperforms existing inductive link prediction methods, and also outperforms the state-of-the-art methods on transductive link prediction.
arXiv Detail & Related papers (2020-07-16T00:51:51Z) - Interpretable Deep Graph Generation with Node-Edge Co-Disentanglement [55.2456981313287]
We propose a new disentanglement enhancement framework for deep generative models for attributed graphs.
A novel variational objective is proposed to disentangle the above three types of latent factors, with novel architecture for node and edge deconvolutions.
Within each type, individual-factor-wise disentanglement is further enhanced, which is shown to be a generalization of the existing framework for images.
arXiv Detail & Related papers (2020-06-09T16:33:49Z) - Dual Graph Representation Learning [20.03747654879028]
Graph representation learning embeds nodes in large graphs as low-dimensional vectors.
We present a context-aware unsupervised dual encoding framework, CADE, to generate representations of nodes.
arXiv Detail & Related papers (2020-02-25T04:50:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.