ChatGraph: Interpretable Text Classification by Converting ChatGPT
Knowledge to Graphs
- URL: http://arxiv.org/abs/2305.03513v2
- Date: Tue, 19 Sep 2023 14:26:17 GMT
- Title: ChatGraph: Interpretable Text Classification by Converting ChatGPT
Knowledge to Graphs
- Authors: Yucheng Shi, Hehuan Ma, Wenliang Zhong, Qiaoyu Tan, Gengchen Mai,
Xiang Li, Tianming Liu, Junzhou Huang
- Abstract summary: ChatGPT has shown superior performance in various natural language processing (NLP) tasks.
We propose a novel framework that leverages the power of ChatGPT for specific tasks, such as text classification.
Our method provides a more transparent decision-making process compared with previous text classification methods.
- Score: 54.48467003509595
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: ChatGPT, as a recently launched large language model (LLM), has shown
superior performance in various natural language processing (NLP) tasks.
However, two major limitations hinder its potential applications: (1) the
inflexibility of finetuning on downstream tasks and (2) the lack of
interpretability in the decision-making process. To tackle these limitations,
we propose a novel framework that leverages the power of ChatGPT for specific
tasks, such as text classification, while improving its interpretability. The
proposed framework conducts a knowledge graph extraction task to extract
refined and structural knowledge from the raw data using ChatGPT. The rich
knowledge is then converted into a graph, which is further used to train an
interpretable linear classifier to make predictions. To evaluate the
effectiveness of our proposed method, we conduct experiments on four datasets.
The results show that our method significantly improves performance compared to
directly utilizing ChatGPT for text classification. Our method also provides a
more transparent decision-making process than previous text classification
methods.
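
The abstract describes a three-step pipeline (LLM-based triple extraction, graph construction, linear classification) but no code accompanies it. Below is a minimal, hypothetical Python sketch of that pipeline: `extract_triples` is a stand-in for the actual ChatGPT prompt, and the edge-bag features with logistic regression stand in for the interpretable linear classifier.

```python
# Hypothetical sketch of the ChatGraph-style pipeline; not the authors' code.
from collections import Counter

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression


def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Stand-in for the ChatGPT extraction step. The real pipeline would
    prompt the LLM for (head, relation, tail) triples; here adjacent word
    pairs fake that output so the sketch runs end to end."""
    words = text.lower().split()
    return [(words[i], "next", words[i + 1]) for i in range(len(words) - 1)]


def graph_features(triples: list[tuple[str, str, str]]) -> Counter:
    # Represent a document's extracted graph as a bag of named edges, so each
    # weight of the linear model maps back to one human-readable edge.
    return Counter(f"{h} -{r}-> {t}" for h, r, t in triples)


def train(texts: list[str], labels: list[int]):
    vec = DictVectorizer()
    X = vec.fit_transform(graph_features(extract_triples(t)) for t in texts)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    return vec, clf  # clf.coef_ is inspectable per edge, hence interpretable
```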
Related papers
- Text classification optimization algorithm based on graph neural network [0.36651088217486427]
This paper introduces a text classification optimization algorithm utilizing graph neural networks.
By introducing an adaptive graph construction strategy and an efficient graph convolution operation, it effectively improves the accuracy and efficiency of text classification (a generic graph-convolution sketch follows this entry).
arXiv Detail & Related papers (2024-08-09T23:25:37Z)
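
The summary names an adaptive graph construction strategy and an efficient graph convolution but gives no formulas, so the following is only a generic sketch: a word co-occurrence graph plus one symmetrically normalized graph-convolution layer. The window size and normalization are assumptions, not the paper's algorithm.

```python
# Generic GNN-for-text sketch, not the paper's adaptive construction.
import numpy as np


def cooccurrence_adjacency(tokens: list[str], vocab: dict[str, int],
                           window: int = 3) -> np.ndarray:
    # Connect words that appear within `window` positions of each other.
    A = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(tokens):
        for u in tokens[i + 1:i + window]:
            A[vocab[w], vocab[u]] = A[vocab[u], vocab[w]] = 1.0
    return A


def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    # One layer of D^-1/2 (A + I) D^-1/2 H W followed by ReLU.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)
```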
- Pre-Training and Prompting for Few-Shot Node Classification on Text-Attributed Graphs [35.44563283531432]
A text-attributed graph (TAG) is an important kind of real-world graph-structured data in which each node is associated with raw text.
For TAGs, traditional few-shot node classification methods train directly on the pre-processed node features.
We propose P2TAG, a framework designed for few-shot node classification on TAGs with graph pre-training and prompting.
arXiv Detail & Related papers (2024-07-22T07:24:21Z)
- FPT: Feature Prompt Tuning for Few-shot Readability Assessment [14.058586527866426]
We propose a novel prompt-based tuning framework that incorporates rich linguistic knowledge.
Specifically, we extract linguistic features from the text and embed them into trainable soft prompts.
The proposed architecture for prompt tuning shows how linguistic features can be adapted to language-related tasks (a minimal sketch follows this entry).
arXiv Detail & Related papers (2024-04-03T14:39:47Z)
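
To make the feature-to-prompt idea concrete, here is a hypothetical PyTorch sketch that projects a vector of linguistic features into trainable soft prompts prepended to the token embeddings. The projection and the feature choice are assumptions, not FPT's published architecture.

```python
# Hypothetical feature-conditioned soft prompts; not FPT's exact design.
import torch
import torch.nn as nn


class FeaturePrompt(nn.Module):
    def __init__(self, n_features: int, prompt_len: int, hidden: int):
        super().__init__()
        self.prompt_len, self.hidden = prompt_len, hidden
        # Map linguistic features (e.g. sentence length, type-token ratio)
        # to `prompt_len` prompt vectors; the projection is trainable.
        self.proj = nn.Linear(n_features, prompt_len * hidden)

    def forward(self, feats: torch.Tensor, token_emb: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_features); token_emb: (batch, seq_len, hidden)
        prompts = self.proj(feats).view(-1, self.prompt_len, self.hidden)
        # Prepend feature-derived prompts so the frozen LM attends to them.
        return torch.cat([prompts, token_emb], dim=1)
```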
- Pushing the Limits of ChatGPT on NLP Tasks [79.17291002710517]
Despite the success of ChatGPT, its performance on most NLP tasks is still well below the supervised baselines.
In this work, we look into the causes and find that its subpar performance stems from several factors.
We propose a collection of general modules to address these issues, in an attempt to push the limits of ChatGPT on NLP tasks.
arXiv Detail & Related papers (2023-06-16T09:40:05Z)
- Turning a CLIP Model into a Scene Text Detector [56.86413150091367]
Recently, pretraining approaches based on vision-language models have made effective progress in the field of text detection.
This paper proposes a new method, termed TCM, focusing on Turning the CLIP Model directly into a text detector without a pretraining process.
arXiv Detail & Related papers (2023-02-28T06:06:12Z)
- Story Point Effort Estimation by Text Level Graph Neural Network [2.652428960991066]
The graph neural network is a new approach that has been applied in natural language processing for text classification.
We show the potential and the possible challenges of graph neural network text classification in story point estimation.
arXiv Detail & Related papers (2022-03-06T22:15:03Z)
- DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting [91.56988987393483]
We present a new framework for dense prediction by implicitly and explicitly leveraging the pre-trained knowledge from CLIP.
Specifically, we convert the original image-text matching problem in CLIP to a pixel-text matching problem and use the pixel-text score maps to guide the learning of dense prediction models.
Our method is model-agnostic and can be applied to arbitrary dense prediction systems and various pre-trained visual backbones (a shape-level sketch of the score maps follows this entry).
arXiv Detail & Related papers (2021-12-02T18:59:32Z)
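
The pixel-text matching step can be pinned down at the shape level: score every spatial feature against every class-name embedding to get per-class score maps. The sketch below follows common CLIP practice (cosine similarity of normalized features); context-aware prompting and the dense-prediction head are omitted.

```python
# Shape-level sketch of pixel-text score maps; not DenseCLIP's full method.
import torch
import torch.nn.functional as F


def pixel_text_score_maps(pix: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
    # pix: (batch, C, H, W) dense image features; txt: (K, C) class embeddings.
    pix = F.normalize(pix, dim=1)   # unit-norm along the channel dimension
    txt = F.normalize(txt, dim=1)
    # Cosine similarity per pixel and class -> (batch, K, H, W) score maps
    # that can then guide or supervise a dense-prediction model.
    return torch.einsum("bchw,kc->bkhw", pix, txt)
```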
- Be More with Less: Hypergraph Attention Networks for Inductive Text Classification [56.98218530073927]
Graph neural networks (GNNs) have received increasing attention in the research community and demonstrated promising results on this canonical task.
Despite the success, their performance could be largely jeopardized in practice since they are unable to capture high-order interaction between words.
We propose a principled model, hypergraph attention networks (HyperGAT), which obtains more expressive power with less computational cost for text representation learning (a minimal sketch follows this entry).
arXiv Detail & Related papers (2020-11-01T00:21:59Z)
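
The core mechanism is two-stage attention over a hypergraph: words aggregate into hyperedges (e.g. sentences), then hyperedges aggregate back into words. Below is a simplified sketch of that idea; the attention form is an assumption, not the paper's exact layer.

```python
# Simplified two-stage hypergraph attention; not HyperGAT's exact layer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HypergraphAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.node_att = nn.Linear(dim, 1)
        self.edge_att = nn.Linear(dim, 1)

    def forward(self, X: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        # X: (N, d) word features; H: (N, E) incidence matrix with
        # H[i, j] = 1 when word i belongs to hyperedge j (e.g. a sentence).
        mask = -1e9 * (1.0 - H)  # exclude non-members from each softmax
        # Stage 1: attention over member words, aggregated per hyperedge.
        alpha = torch.softmax(self.node_att(X) * H + mask, dim=0)  # (N, E)
        edges = alpha.t() @ X                                      # (E, d)
        # Stage 2: attention over incident hyperedges, aggregated per word.
        beta = torch.softmax(self.edge_att(edges).t() * H + mask, dim=1)
        return F.elu(beta @ edges)                                 # (N, d)
```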
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training (a sketch of entity-span masking follows this entry).
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
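
The entity masking scheme is easy to illustrate: instead of masking random subwords, whole entity mentions (found by linking the text against a knowledge graph) are masked as single units. The linking step is a placeholder here; the paper's scheme has more machinery around it.

```python
# Illustrative entity-span masking; spans would come from KG entity linking.
import random


def entity_mask(tokens: list[str], entity_spans: list[tuple[int, int]],
                mask_token: str = "[MASK]", p: float = 0.15) -> list[str]:
    out = list(tokens)
    for start, end in entity_spans:
        if random.random() < p:
            # Mask the whole mention so the model must recover the entity
            # from context rather than from its remaining subwords.
            out[start:end] = [mask_token] * (end - start)
    return out
```

For example, with tokens ["Marie", "Curie", "won", "the", "prize"] and entity_spans=[(0, 2)], the full mention "Marie Curie" is masked as one unit.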