Edgeformers: Graph-Empowered Transformers for Representation Learning on
Textual-Edge Networks
- URL: http://arxiv.org/abs/2302.11050v1
- Date: Tue, 21 Feb 2023 23:09:17 GMT
- Title: Edgeformers: Graph-Empowered Transformers for Representation Learning on
Textual-Edge Networks
- Authors: Bowen Jin, Yu Zhang, Yu Meng, Jiawei Han
- Abstract summary: Edgeformers is a framework built upon graph-enhanced Transformers to perform edge and node representation learning.
We show that Edgeformers consistently outperform state-of-the-art baselines in edge classification and link prediction.
- Score: 30.49672654211631
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Edges in many real-world social/information networks are associated with rich
text information (e.g., user-user communications or user-product reviews).
However, mainstream network representation learning models focus on propagating
and aggregating node attributes, lacking specific designs to utilize text
semantics on edges. While there exist edge-aware graph neural networks, they
directly initialize edge attributes as a feature vector, which cannot fully
capture the contextualized text semantics of edges. In this paper, we propose
Edgeformers, a framework built upon graph-enhanced Transformers, to perform
edge and node representation learning by modeling texts on edges in a
contextualized way. Specifically, in edge representation learning, we inject
network information into each Transformer layer when encoding edge texts; in
node representation learning, we aggregate edge representations through an
attention mechanism within each node's ego-graph. On five public datasets from
three different domains, Edgeformers consistently outperform state-of-the-art
baselines in edge classification and link prediction, demonstrating their
efficacy in learning edge and node representations, respectively.
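For intuition, here is a minimal PyTorch sketch of the two components described above. The module names, the virtual-token injection mechanism, and all dimensions are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class EdgeTextEncoderLayer(nn.Module):
    """One Transformer layer in which edge-text tokens can attend to the
    embeddings of the edge's two endpoint nodes -- a simplified reading of
    'injecting network information into each Transformer layer'."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, text_tokens, node_tokens):
        # text_tokens: (batch, seq_len, d_model); node_tokens: (batch, 2, d_model)
        # Prepend the endpoint-node embeddings as virtual tokens so that every
        # text token can attend to network information at this layer.
        x = torch.cat([node_tokens, text_tokens], dim=1)
        h, _ = self.attn(x, x, x)
        x = self.norm1(x + h)
        x = self.norm2(x + self.ffn(x))
        # Return only the text positions; node tokens are re-injected per layer.
        return x[:, node_tokens.size(1):]

class EgoGraphAttentionPool(nn.Module):
    """Attention pooling over the representations of a node's incident edges
    (its ego-graph), producing the node representation."""
    def __init__(self, d_model=256):
        super().__init__()
        self.query = nn.Parameter(torch.randn(d_model))

    def forward(self, edge_reps):
        # edge_reps: (num_incident_edges, d_model)
        scores = edge_reps @ self.query / edge_reps.size(-1) ** 0.5
        weights = torch.softmax(scores, dim=0)
        return (weights.unsqueeze(-1) * edge_reps).sum(dim=0)
```

Stacking several such layers over an edge's text yields the edge representation; attention-pooling a node's incident-edge representations yields the node representation used for tasks such as link prediction.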
Related papers
- Bridging Local Details and Global Context in Text-Attributed Graphs [62.522550655068336]
GraphBridge is a framework that bridges local and global perspectives by leveraging contextual textual information.
Our method achieves state-of-the-art performance, while our graph-aware token reduction module significantly enhances efficiency and alleviates scalability issues.
arXiv Detail & Related papers (2024-06-18T13:35:25Z) - GAugLLM: Improving Graph Contrastive Learning for Text-Attributed Graphs with Large Language Models [33.3678293782131]
This work studies self-supervised graph learning for text-attributed graphs (TAGs).
We aim to improve view generation through language supervision.
This is driven by the prevalence of textual attributes in real applications, which complement graph structures with rich semantic information.
arXiv Detail & Related papers (2024-06-17T17:49:19Z) - Refined Edge Usage of Graph Neural Networks for Edge Prediction [51.06557652109059]
We propose a novel edge prediction paradigm named Edge-aware Message PassIng neuRal nEtworks (EMPIRE).
We first introduce an edge splitting technique that assigns each edge a single role: it is used either as topology (for message passing) or as supervision.
To emphasize the difference between node pairs connected by supervision edges and unconnected pairs, we further weight the messages to highlight those that reflect this difference.
arXiv Detail & Related papers (2022-12-25T23:19:56Z) - TeKo: Text-Rich Graph Neural Networks with External Knowledge [75.91477450060808]
We propose a novel text-rich graph neural network with external knowledge (TeKo).
We first present a flexible heterogeneous semantic network that incorporates high-quality entities.
We then introduce two types of external knowledge: structured triplets and unstructured entity descriptions.
arXiv Detail & Related papers (2022-06-15T02:33:10Z) - Using virtual edges to extract keywords from texts modeled as complex
networks [0.1611401281366893]
We model texts as co-occurrence networks, where nodes are words and edges are established by contextual or semantic similarity.
We found that, in fact, the use of virtual edges can improve the discriminability of co-occurrence networks.
arXiv Detail & Related papers (2022-05-04T16:43:03Z) - ME-GCN: Multi-dimensional Edge-Embedded Graph Convolutional Networks for
Semi-supervised Text Classification [6.196387205547024]
This paper introduces ME-GCN (Multi-dimensional Edge-Embedded Graph Convolutional Networks) for semi-supervised text classification.
Our proposed model significantly outperforms state-of-the-art methods across eight benchmark datasets.
arXiv Detail & Related papers (2022-04-10T07:05:12Z) - GraphFormers: GNN-nested Transformers for Representation Learning on
Textual Graph [53.70520466556453]
We propose GraphFormers, where layerwise GNN components are nested alongside the transformer blocks of language models.
With the proposed architecture, the text encoding and the graph aggregation are fused into an iterative workflow.
In addition, a progressive learning strategy is introduced, where the model is successively trained on manipulated data and original data to reinforce its capability of integrating information on the graph.
arXiv Detail & Related papers (2021-05-06T12:20:41Z) - Edge-Featured Graph Attention Network [7.0629162428807115]
We present edge-featured graph attention networks (EGATs) to extend graph neural networks to tasks that learn on graphs with both node and edge features.
By reforming the model structure and the learning process, the new models accept node and edge features as inputs, incorporate the edge information into feature representations, and update node and edge features in parallel while letting each inform the other.
arXiv Detail & Related papers (2021-01-19T15:08:12Z) - Structure-Augmented Text Representation Learning for Efficient Knowledge
Graph Completion [53.31911669146451]
Human-curated knowledge graphs provide critical supportive information to various natural language processing tasks.
These graphs are usually incomplete, motivating their automatic completion.
Graph embedding approaches, e.g., TransE, learn structured knowledge by representing graph elements as dense embeddings.
Textual encoding approaches, e.g., KG-BERT, resort to the text of graph triples and triple-level contextualized representations.
arXiv Detail & Related papers (2020-04-30T13:50:34Z) - EdgeNets:Edge Varying Graph Neural Networks [179.99395949679547]
This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
This is a general linear and local operation that a node can perform, and it encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs); a minimal sketch of this edge-varying weighting follows at the end of this list.
arXiv Detail & Related papers (2020-01-21T15:51:17Z)
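As a companion to the EdgeNet description above, below is a minimal sketch of edge-varying weighting, assuming scalar per-edge parameters and a fixed edge set for simplicity; EdgeNets themselves allow richer per-edge parameters.

```python
import torch
import torch.nn as nn

class EdgeVaryingLayer(nn.Module):
    """Toy edge-varying graph layer: every directed edge carries its own
    learnable scalar weight, so each node weighs each neighbor differently.
    This keeps only the core idea of the EdgeNet formulation."""
    def __init__(self, num_edges, in_dim, out_dim):
        super().__init__()
        self.edge_weight = nn.Parameter(torch.ones(num_edges))  # one weight per edge
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index):
        # x: (num_nodes, in_dim); edge_index: (2, num_edges), rows = (src, dst)
        src, dst = edge_index
        msgs = self.edge_weight.unsqueeze(-1) * x[src]  # per-edge scaled messages
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, msgs)                    # sum incoming messages per node
        return torch.relu(self.lin(agg))
```

Setting every edge weight to the same constant recovers a plain graph convolution, while letting the weights vary per edge mimics attention-style neighbor weighting, which is how a single formulation can subsume both GCNNs and GATs.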
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.