Word Grounded Graph Convolutional Network
- URL: http://arxiv.org/abs/2305.06434v1
- Date: Wed, 10 May 2023 19:56:55 GMT
- Title: Word Grounded Graph Convolutional Network
- Authors: Zhibin Lu, Qianqian Xie, Benyou Wang, Jian-yun Nie
- Abstract summary: Graph Convolutional Networks (GCNs) have shown strong performance in learning text representations for various tasks such as text classification.
We propose to transform the document graph into a word graph, decoupling data samples from the GCN model by using a document-independent graph.
The proposed Word-level Graph (WGraph) can not only implicitly learn word representations from commonly used word co-occurrences in corpora, but also incorporate extra global semantic dependencies.
- Score: 24.6338889954789
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Convolutional Networks (GCNs) have shown strong performance in
learning text representations for various tasks such as text classification, due
to their expressive power in modeling graph-structured data (e.g., a literature
citation network). Most existing GCNs are limited to dealing with documents
included in a pre-defined graph, i.e., they cannot generalize to out-of-graph
documents. To address this issue, we propose to transform the document graph
into a word graph, decoupling data samples (i.e., documents in the training and
test sets) from the GCN model by using a document-independent graph. Such a
word-level GCN can therefore naturally perform inference on out-of-graph
documents in an inductive way. The proposed Word-level Graph (WGraph) can not
only implicitly learn word representations from commonly used word
co-occurrences in corpora, but also incorporate extra global semantic
dependencies derived from inter-document relationships (e.g., literature
citations). An inductive Word-grounded Graph Convolutional Network (WGCN) is
proposed to learn word and document representations based on WGraph in a
supervised manner. Experiments on text classification with and without citation
networks show that the proposed WGCN model outperforms existing methods in
terms of effectiveness and efficiency.
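The abstract does not spell out how WGraph is built, but its description (word co-occurrence edges plus an inductive document readout) closely mirrors the word-word portion of the TextGCN family. Below is a minimal, hypothetical sketch under that assumption: PMI-weighted co-occurrence edges over sliding windows, two GCN layers on the word graph, and documents entering only as normalized bag-of-words rows, so an unseen document never requires changing the graph. The function names, the PMI construction, and the readout are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a WGraph-style word co-occurrence graph and an
# inductive word-grounded GCN forward pass. PMI edge weights and the
# bag-of-words document projection are assumptions borrowed from the
# TextGCN family; the paper's exact construction may differ.
import numpy as np
from collections import Counter
from itertools import combinations

def build_wgraph(docs, window=5):
    """Symmetrically normalized PMI word-word adjacency from sliding windows."""
    vocab = sorted({w for doc in docs for w in doc})
    idx = {w: i for i, w in enumerate(vocab)}
    word_cnt, pair_cnt, n_win = Counter(), Counter(), 0
    for doc in docs:
        for s in range(max(1, len(doc) - window + 1)):
            win = set(doc[s:s + window])
            n_win += 1
            word_cnt.update(win)
            pair_cnt.update(frozenset(p) for p in combinations(sorted(win), 2))
    A = np.eye(len(vocab))  # self-loops
    for pair, c in pair_cnt.items():
        wi, wj = tuple(pair)
        pmi = np.log(c * n_win / (word_cnt[wi] * word_cnt[wj]))
        if pmi > 0:  # keep only positively associated word pairs
            A[idx[wi], idx[wj]] = A[idx[wj], idx[wi]] = pmi
    d = A.sum(axis=1) ** -0.5
    return d[:, None] * A * d[None, :], idx  # D^{-1/2} A D^{-1/2}

def wgcn_forward(A_hat, X_bow, W0, W1):
    """Two GCN layers over the word graph (word features = one-hot identity),
    then documents are read out as normalized bags of words -- inductive,
    since a new document never has to be added to the graph."""
    H = np.maximum(A_hat @ W0, 0.0)   # layer 1: ReLU(A_hat I W0)
    H = A_hat @ H @ W1                # layer 2: per-word class scores
    X = X_bow / np.maximum(X_bow.sum(axis=1, keepdims=True), 1e-9)
    return X @ H                      # document logits

# Tiny usage example with random weights (no training loop shown).
docs = [s.split() for s in ["graph networks model text",
                            "text classification with graph networks"]]
A_hat, idx = build_wgraph(docs, window=3)
rng = np.random.default_rng(0)
W0 = 0.1 * rng.normal(size=(len(idx), 16))
W1 = 0.1 * rng.normal(size=(16, 2))
X_bow = np.zeros((len(docs), len(idx)))
for i, doc in enumerate(docs):
    for w in doc:
        X_bow[i, idx[w]] += 1.0
print(wgcn_forward(A_hat, X_bow, W0, W1).shape)  # (2, 2)
```

The point of this design is that the graph size depends only on the vocabulary, not on the corpus: test-time documents are just new rows of X_bow, which is what makes the model inductive.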
Related papers
- Graph Neural Networks on Discriminative Graphs of Words [19.817473565906777]
In this work, we explore a new Discriminative Graph of Words Graph Neural Network (DGoW-GNN) approach to classifying text.
We propose a new model for the graph-based classification of text, which combines a GNN and a sequence model.
We evaluate our approach on seven benchmark datasets and find that it is outperformed by several state-of-the-art baseline models.
arXiv Detail & Related papers (2024-10-27T15:14:06Z)
- Scientific Paper Extractive Summarization Enhanced by Citation Graphs [50.19266650000948]
We focus on leveraging citation graphs to improve scientific paper extractive summarization under different settings.
Preliminary results demonstrate that citation graph is helpful even in a simple unsupervised framework.
Motivated by this, we propose a Graph-based Supervised Summarization model (GSS) to achieve more accurate results on the task when large-scale labeled data are available.
arXiv Detail & Related papers (2022-12-08T11:53:12Z)
- Semi-Supervised Hierarchical Graph Classification [54.25165160435073]
We study the node classification problem in the hierarchical graph where a 'node' is a graph instance.
We propose the Hierarchical Graph Mutual Information (HGMI) and present a way to compute HGMI with theoretical guarantee.
We demonstrate the effectiveness of this hierarchical graph modeling and the proposed SEAL-CI method on text and social network data.
arXiv Detail & Related papers (2022-06-11T04:05:29Z)
- FactGraph: Evaluating Factuality in Summarization with Semantic Graph Representations [114.94628499698096]
We propose FactGraph, a method that decomposes the document and the summary into structured meaning representations (MRs).
MRs describe core semantic concepts and their relations, aggregating the main content in both document and summary in a canonical form, and reducing data sparsity.
Experiments on different benchmarks for evaluating factuality show that FactGraph outperforms previous approaches by up to 15%.
arXiv Detail & Related papers (2022-04-13T16:45:33Z)
- ME-GCN: Multi-dimensional Edge-Embedded Graph Convolutional Networks for Semi-supervised Text Classification [6.196387205547024]
This paper introduces the ME-GCN (Multi-dimensional Edge-enhanced Graph Convolutional Networks) for semi-supervised text classification.
Our proposed model has significantly outperformed the state-of-the-art methods across eight benchmark datasets.
arXiv Detail & Related papers (2022-04-10T07:05:12Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Sparse Structure Learning via Graph Neural Networks for Inductive Document Classification [2.064612766965483]
We propose a novel GNN-based sparse structure learning model for inductive document classification.
Our model collects a set of trainable edges connecting disjoint words between sentences and employs structure learning to sparsely select edges with dynamic contextual dependencies.
Experiments on several real-world datasets demonstrate that the proposed model outperforms most state-of-the-art results.
arXiv Detail & Related papers (2021-12-13T02:36:04Z)
- Learning the Implicit Semantic Representation on Graph-Structured Data [57.670106959061634]
Existing representation learning methods in graph convolutional networks are mainly designed by describing the neighborhood of each node as a perceptual whole.
We propose a Semantic Graph Convolutional Networks (SGCN) that explores the implicit semantics by learning latent semantic-paths in graphs.
arXiv Detail & Related papers (2021-01-16T16:18:43Z)
- Neural Topic Modeling by Incorporating Document Relationship Graph [18.692100955163713]
Graph Topic Model (GTM) is a GNN based neural topic model that represents a corpus as a document relationship graph.
Documents and words in the corpus become nodes in the graph and are connected based on document-word co-occurrences.
arXiv Detail & Related papers (2020-09-29T12:45:55Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Every Document Owns Its Structure: Inductive Text Classification via Graph Neural Networks [22.91359631452695]
We propose TextING for inductive text classification via Graph Neural Networks (GNNs).
We first build individual graphs for each document and then use a GNN to learn fine-grained word representations based on their local structures (a minimal sketch of this per-document construction follows the list).
Our method outperforms state-of-the-art text classification methods.
arXiv Detail & Related papers (2020-04-22T07:23:47Z)
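As referenced in the TextING entry above, here is a minimal, hypothetical sketch of the per-document graph idea: each document gets its own small word graph from a sliding window, so classification never depends on a corpus-level graph. The function name and window scheme are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch of TextING-style per-document word graphs: every
# document owns its structure, so unseen documents are handled without
# modifying any global graph. Details are assumed, not taken from the paper.
import numpy as np

def doc_graph(tokens, window=3):
    """Symmetrically normalized co-occurrence graph for ONE document."""
    vocab = sorted(set(tokens))
    idx = {w: i for i, w in enumerate(vocab)}
    A = np.eye(len(vocab))  # self-loops
    for s, w in enumerate(tokens):
        for t in range(s + 1, min(s + window, len(tokens))):
            i, j = idx[w], idx[tokens[t]]
            A[i, j] = A[j, i] = 1.0
    d = A.sum(axis=1) ** -0.5
    return d[:, None] * A * d[None, :], vocab

A_hat, vocab = doc_graph("graph neural networks classify text with graphs".split())
# A GNN layer would now propagate pretrained word embeddings over A_hat and
# pool the node states (e.g., mean/max) into a single document vector.
print(A_hat.shape)  # (7, 7) -- one node per distinct word in this document
```

This is the complementary design choice to WGCN's single corpus-level word graph: local structure per document rather than one shared graph, with inductiveness achieved in both cases by keeping the graph independent of the test set.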
This list is automatically generated from the titles and abstracts of the papers on this site.