TENT: Text Classification Based on ENcoding Tree Learning
- URL: http://arxiv.org/abs/2110.02047v1
- Date: Tue, 5 Oct 2021 13:55:47 GMT
- Title: TENT: Text Classification Based on ENcoding Tree Learning
- Authors: Chong Zhang, Junran Wu, He Zhu, Ke Xu
- Abstract summary: We propose TENT to obtain better text classification performance and reduce the reliance on computing power.
Specifically, we first establish a dependency analysis graph for each text and then convert each graph into its corresponding encoding tree.
Experimental results show that our method outperforms other baselines on several datasets.
- Score: 9.927112304745542
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Text classification is a primary task in natural language processing (NLP).
Recently, graph neural networks (GNNs) have developed rapidly and been applied
to text classification tasks. Although more complex models tend to achieve
better performance, research highly depends on the computing power of the
device used. In this article, we propose TENT (https://github.com/Daisean/TENT)
to obtain better text classification performance and reduce the reliance on
computing power. Specifically, we first establish a dependency analysis graph
for each text and then convert each graph into its corresponding encoding tree.
The representation of the entire graph is obtained by updating the
representation of the non-leaf nodes in the encoding tree. Experimental results
show that our method outperforms other baselines on several datasets while
having a simple structure and few parameters.
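As a rough illustration of the pipeline described in the abstract, the sketch below builds a toy encoding tree and computes a graph representation by updating non-leaf nodes bottom-up. The class and function names, the mean aggregation, and the fixed two-level tree are illustrative assumptions, not the authors' released implementation.
```python
# A minimal sketch of the TENT pipeline, assuming a pre-computed
# dependency graph already partitioned into a two-level encoding tree.
# Names (EncodingTreeNode, update_non_leaf) are illustrative only.
import numpy as np

class EncodingTreeNode:
    def __init__(self, children=None, embedding=None):
        self.children = children or []   # non-leaf: child nodes
        self.embedding = embedding       # leaf: word embedding vector

def update_non_leaf(node):
    """Bottom-up pass: a non-leaf node's representation is an
    aggregation (here, the mean) of its children's representations."""
    if not node.children:                # leaf: keep the word embedding
        return node.embedding
    child_reprs = [update_non_leaf(c) for c in node.children]
    node.embedding = np.mean(child_reprs, axis=0)
    return node.embedding

# Toy example: a 4-word sentence whose dependency graph was partitioned
# into two communities, giving a depth-2 encoding tree.
dim = 8
words = [EncodingTreeNode(embedding=np.random.randn(dim)) for _ in range(4)]
tree = EncodingTreeNode(children=[
    EncodingTreeNode(children=words[:2]),   # community 1
    EncodingTreeNode(children=words[2:]),   # community 2
])
graph_repr = update_non_leaf(tree)          # representation of the whole text
print(graph_repr.shape)                     # (8,)
```
The root's representation then serves as the feature vector for the classifier, which is where the method's small parameter count comes from: only the aggregation step carries learnable weights.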
Related papers
- Text classification optimization algorithm based on graph neural network [0.36651088217486427]
This paper introduces a text classification optimization algorithm utilizing graph neural networks.
By introducing an adaptive graph construction strategy and an efficient graph convolution operation, the algorithm effectively improves both the accuracy and the efficiency of text classification.
arXiv Detail & Related papers (2024-08-09T23:25:37Z)
- Graph Neural Networks for Contextual ASR with the Tree-Constrained Pointer Generator [9.053645441056256]
This paper proposes an innovative method for achieving end-to-end contextual ASR using graph neural network (GNN) encodings.
GNN encodings facilitate lookahead for future word pieces in the process of ASR decoding at each tree node.
The systems were evaluated on the LibriSpeech and AMI corpora, following the visually grounded contextual ASR pipeline.
arXiv Detail & Related papers (2023-05-30T08:20:58Z)
- A semantic hierarchical graph neural network for text classification [1.439766998338892]
We propose a new hierarchical graph neural network (HieGNN) that extracts information at the word, sentence, and document levels, respectively.
Experiments on several benchmark datasets show better or comparable results relative to several baseline methods.
arXiv Detail & Related papers (2022-09-15T03:59:31Z)
- Incorporating Constituent Syntax for Coreference Resolution [50.71868417008133]
We propose a graph-based method to incorporate constituent syntactic structures.
We also explore utilising higher-order neighbourhood information to encode rich structures in constituent trees.
Experiments on the English and Chinese portions of OntoNotes 5.0 benchmark show that our proposed model either beats a strong baseline or achieves new state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T07:40:42Z)
- TextRGNN: Residual Graph Neural Networks for Text Classification [13.912147013558846]
TextRGNN is an improved GNN architecture that introduces residual connections to deepen the graph convolution network.
The structure widens the node receptive field and effectively suppresses over-smoothing of node features.
It significantly improves classification accuracy at both the corpus level and the text level, and achieves SOTA performance on a wide range of text classification datasets.
arXiv Detail & Related papers (2021-12-30T13:48:58Z)
- Hierarchical Heterogeneous Graph Representation Learning for Short Text Classification [60.233529926965836]
We propose a new method called SHINE, based on graph neural networks (GNNs), for short text classification.
First, we model the short text dataset as a hierarchical heterogeneous graph consisting of word-level component graphs.
Then, we dynamically learn a short document graph that facilitates effective label propagation among similar short texts.
arXiv Detail & Related papers (2021-10-30T05:33:05Z)
- Graph Neural Networks for Natural Language Processing: A Survey [64.36633422999905]
We present a comprehensive overview of Graph Neural Networks (GNNs) for Natural Language Processing.
We propose a new taxonomy of GNNs for NLP, which organizes existing research along three axes: graph construction, graph representation learning, and graph-based encoder-decoder models.
arXiv Detail & Related papers (2021-06-10T23:59:26Z)
- GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph [53.70520466556453]
We propose GraphFormers, where layerwise GNN components are nested alongside the transformer blocks of language models.
With the proposed architecture, the text encoding and the graph aggregation are fused into an iterative workflow.
In addition, a progressive learning strategy is introduced, in which the model is successively trained on manipulated data and original data to reinforce its capability to integrate information from the graph.
arXiv Detail & Related papers (2021-05-06T12:20:41Z)
- Be More with Less: Hypergraph Attention Networks for Inductive Text Classification [56.98218530073927]
Graph neural networks (GNNs) have received increasing attention in the research community and demonstrated promising results on this canonical task.
Despite this success, their performance can be largely jeopardized in practice because they are unable to capture high-order interactions between words.
We propose a principled model, hypergraph attention networks (HyperGAT), which achieves greater expressive power with less computational cost for text representation learning.
arXiv Detail & Related papers (2020-11-01T00:21:59Z)
- Every Document Owns Its Structure: Inductive Text Classification via Graph Neural Networks [22.91359631452695]
We propose TextING for inductive text classification via Graph Neural Networks (GNNs).
We first build individual graphs for each document and then use a GNN to learn fine-grained word representations based on their local structures; a generic sketch of this per-document graph pattern follows this list.
Our method outperforms state-of-the-art text classification methods.
arXiv Detail & Related papers (2020-04-22T07:23:47Z)
- Improved Code Summarization via a Graph Neural Network [96.03715569092523]
In general, source code summarization techniques take source code as input and output a natural language description.
We present an approach that uses a graph-based neural architecture that better matches the default structure of the AST to generate these summaries.
arXiv Detail & Related papers (2020-04-06T17:36:42Z)
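Several of the papers above (TextING, and TENT itself with an encoding tree in place of message passing) share a per-document pattern: build a graph for each text, propagate word features along its edges, and pool them into a single document vector. Below is a minimal, generic sketch of that pattern; the co-occurrence window, hop count, tanh update, and mean-pool readout are simplifying assumptions rather than any single paper's method.
```python
# A generic, simplified sketch of per-document graph classification:
# word nodes linked by sliding-window co-occurrence, a few hops of
# (ungated) message passing, then a graph-level readout.
import numpy as np

def build_cooccurrence_adj(num_words, window=3):
    """Adjacency matrix linking words that co-occur within a sliding window."""
    adj = np.zeros((num_words, num_words))
    for i in range(num_words):
        for j in range(max(0, i - window + 1), min(num_words, i + window)):
            if i != j:
                adj[i, j] = 1.0
    # Row-normalize so each propagation step averages over neighbours.
    deg = adj.sum(axis=1, keepdims=True)
    return adj / np.maximum(deg, 1.0)

def document_vector(word_embeddings, hops=2, window=3):
    """Propagate features for a few hops, then mean-pool the word nodes."""
    h = word_embeddings
    adj = build_cooccurrence_adj(len(h), window)
    for _ in range(hops):
        h = np.tanh(adj @ h)          # simplified message passing
    return h.mean(axis=0)             # graph-level readout

# Toy usage: a 6-word document with 8-dimensional embeddings.
doc = np.random.randn(6, 8)
print(document_vector(doc).shape)     # (8,)
```
The resulting vector feeds a standard classifier; inductive methods such as TextING can handle unseen documents precisely because each graph is built independently at inference time.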
This list is automatically generated from the titles and abstracts of the papers on this site.