Segmented Graph-Bert for Graph Instance Modeling
- URL: http://arxiv.org/abs/2002.03283v1
- Date: Sun, 9 Feb 2020 04:55:07 GMT
- Title: Segmented Graph-Bert for Graph Instance Modeling
- Authors: Jiawei Zhang
- Abstract summary: We examine the effectiveness of GRAPH-BERT on graph instance representation learning.
In this paper, we re-design it with a segmented architecture, which we name SEG-BERT.
We test the effectiveness of SEG-BERT with experiments on seven graph instance benchmark datasets.
- Score: 10.879701971582502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In graph instance representation learning, both the diverse graph instance
sizes and the graph node orderless property have been major obstacles that cause
existing representation learning models to fail. In this paper, we examine the
effectiveness of GRAPH-BERT, which was originally designed for node representation
learning tasks, on graph instance representation learning. To adapt GRAPH-BERT to
the new problem settings, we re-design it with a segmented architecture, which we
also name SEG-BERT (Segmented GRAPH-BERT) for simplicity of reference in this paper.
SEG-BERT involves no node-order-variant inputs or functional components, so it
handles the graph node orderless property naturally. Moreover, its segmented
architecture introduces three strategies to unify graph instance sizes: full-input,
padding/pruning, and segment shifting. SEG-BERT can be pre-trained in an unsupervised
manner and then transferred to new tasks either directly or with fine-tuning. We
have tested the effectiveness of SEG-BERT with experiments on seven graph instance
benchmark datasets, and SEG-BERT outperforms the comparison methods on six of them
with significant performance advantages.
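The three size-unification strategies named in the abstract (full-input, padding/pruning, and segment shifting) can be illustrated with a minimal, hypothetical sketch over a bare node-index list. The function names and the PAD placeholder below are illustrative assumptions; the actual SEG-BERT operates on node feature and embedding tensors, and its exact full-input and shifting rules may differ.

```python
# Hypothetical sketch of the three instance-size unification strategies the
# abstract names; real SEG-BERT operates on node feature/embedding tensors.
from typing import List, Optional

PAD: Optional[int] = None  # placeholder for padded node slots (assumption)

def full_input(nodes: List[int], max_size: int) -> List[List[Optional[int]]]:
    """Use the largest instance size in the dataset as the input size,
    so every graph fits in one segment after padding."""
    return [nodes + [PAD] * (max_size - len(nodes))]

def pad_or_prune(nodes: List[int], target: int) -> List[List[Optional[int]]]:
    """Pad small instances with dummy nodes, prune large ones to `target`."""
    if len(nodes) >= target:
        return [nodes[:target]]                      # prune surplus nodes
    return [nodes + [PAD] * (target - len(nodes))]   # pad missing slots

def segment_shift(nodes: List[int], segment: int) -> List[List[Optional[int]]]:
    """Shift a fixed-size window over the node list so every node is
    covered by some segment; each segment is fed to the shared model."""
    segments = []
    for start in range(0, len(nodes), segment):
        chunk = nodes[start:start + segment]
        chunk = chunk + [PAD] * (segment - len(chunk))  # pad the last segment
        segments.append(chunk)
    return segments

# toy usage: a 7-node graph instance with a model input size of 4
print(pad_or_prune(list(range(7)), 4))   # [[0, 1, 2, 3]]
print(segment_shift(list(range(7)), 4))  # [[0, 1, 2, 3], [4, 5, 6, None]]
```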
Related papers
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets (a toy sketch of the random-walk context sampling follows this entry).
arXiv Detail & Related papers (2024-06-19T22:30:08Z) - OpenGraph: Towards Open Graph Foundation Models [20.401374302429627]
- OpenGraph: Towards Open Graph Foundation Models [20.401374302429627]
Graph Neural Networks (GNNs) have emerged as promising techniques for encoding structural information.
A key challenge remains: generalizing to unseen graph data with different properties.
We propose a novel graph foundation model, called OpenGraph, to address this challenge.
arXiv Detail & Related papers (2024-03-02T08:05:03Z) - SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings from the last hidden states of the fine-tuned LM (a minimal sketch follows this entry).
arXiv Detail & Related papers (2023-08-03T07:00:04Z) - Subgraph Networks Based Contrastive Learning [5.736011243152416]
- Subgraph Networks Based Contrastive Learning [5.736011243152416]
Graph contrastive learning (GCL) can solve the problem of annotated data scarcity.
Most existing GCL methods focus on the design of graph augmentation strategies and mutual information estimation operations.
We propose a novel framework called subgraph network-based contrastive learning (SGNCL).
arXiv Detail & Related papers (2023-06-06T08:52:44Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
Despite its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - GRATIS: Deep Learning Graph Representation with Task-specific Topology
and Multi-dimensional Edge Features [27.84193444151138]
We propose the first general graph representation learning framework (called GRATIS).
It can generate a strong graph representation with a task-specific topology and task-specific multi-dimensional edge features from any arbitrary input.
Our framework is effective, robust and flexible, and is a plug-and-play module that can be combined with different backbones and Graph Neural Networks (GNNs).
arXiv Detail & Related papers (2022-11-19T18:42:55Z) - Similarity-aware Positive Instance Sampling for Graph Contrastive
Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements (a toy selection sketch follows this entry).
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - Two-view Graph Neural Networks for Knowledge Graph Completion [13.934907240846197]
We introduce a novel GNN-based knowledge graph embedding model, named WGE, to capture entity-focused graph structure and relation-focused graph structure.
WGE obtains state-of-the-art performance on three new and challenging CoDEx benchmark datasets for knowledge graph completion.
arXiv Detail & Related papers (2021-12-16T22:36:17Z) - Inverse Graph Identification: Can We Identify Node Labels Given Graph
Labels? [89.13567439679709]
Graph Identification (GI) has long been researched in graph learning and is essential in certain applications.
This paper defines a novel problem dubbed Inverse Graph Identification (IGI).
We propose a simple yet effective method that performs node-level message passing with a Graph Attention Network (GAT) under the GI protocol (a minimal GAT sketch follows this entry).
arXiv Detail & Related papers (2020-07-12T12:06:17Z) - Graph-Bert: Only Attention is Needed for Learning Graph Representations [22.031852733026]
- Graph-Bert: Only Attention is Needed for Learning Graph Representations [22.031852733026]
The dominant graph neural networks (GNNs) over-rely on the graph links, causing serious performance problems.
In this paper, we introduce a new graph neural network, namely GRAPH-BERT (Graph based BERT).
The experimental results demonstrate that GRAPH-BERT can outperform existing GNNs in both learning effectiveness and efficiency.
arXiv Detail & Related papers (2020-01-15T05:56:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.