Exploring the Limits of ChatGPT for Query or Aspect-based Text
Summarization
- URL: http://arxiv.org/abs/2302.08081v1
- Date: Thu, 16 Feb 2023 04:41:30 GMT
- Title: Exploring the Limits of ChatGPT for Query or Aspect-based Text
Summarization
- Authors: Xianjun Yang, Yan Li, Xinlu Zhang, Haifeng Chen, Wei Cheng
- Abstract summary: Large language models (LLMs) like GPT3 and ChatGPT have recently created significant interest in using these models for text summarization tasks.
Recent studies [goyal2022news, zhang2023benchmarking] have shown that LLM-generated news summaries are already on par with those written by humans.
Our experiments reveal that ChatGPT's performance is comparable to traditional fine-tuning methods in terms of ROUGE scores.
- Score: 28.104696513516117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text summarization has been a crucial problem in natural language processing
(NLP) for several decades. It aims to condense lengthy documents into shorter
versions while retaining the most critical information. Various methods have
been proposed for text summarization, including extractive and abstractive
summarization. The emergence of large language models (LLMs) like GPT3 and
ChatGPT has recently created significant interest in using these models for
text summarization tasks. Recent studies \cite{goyal2022news,
zhang2023benchmarking} have shown that LLMs-generated news summaries are
already on par with humans. However, the performance of LLMs for more practical
applications like aspect or query-based summaries is underexplored. To fill
this gap, we conducted an evaluation of ChatGPT's performance on four widely
used benchmark datasets, encompassing diverse summaries from Reddit posts, news
articles, dialogue meetings, and stories. Our experiments reveal that ChatGPT's
performance is comparable to traditional fine-tuning methods in terms of ROUGE
scores. Moreover, we highlight some unique differences between
ChatGPT-generated summaries and human references, providing valuable insights
into the strengths of ChatGPT for diverse text summarization tasks. Our
findings call for new directions in this area, and we plan to conduct further
research to systematically examine the characteristics of ChatGPT-generated
summaries through extensive human evaluation.
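The evaluation described above prompts ChatGPT for aspect- or query-conditioned summaries and compares them against references using ROUGE. The snippet below is a minimal sketch, not the authors' code, of what such a scoring step might look like: the aspect-conditioned prompt template and the example texts are illustrative assumptions, and scoring uses the open-source rouge-score package.

```python
# Minimal sketch (not the paper's implementation) of ROUGE-based scoring for
# an aspect-conditioned summary. Prompt template and texts are illustrative.
from rouge_score import rouge_scorer


def build_aspect_prompt(document: str, aspect: str) -> str:
    """Illustrative prompt asking for a summary focused on one aspect."""
    return (
        f"Summarize the following document, focusing only on '{aspect}':\n\n"
        f"{document}\n\nSummary:"
    )


# In practice, `candidate` would be ChatGPT's response to build_aspect_prompt(...).
reference = "The council approved the new transit budget after a lengthy debate."
candidate = "After a long debate, the council passed the new transit budget."

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
for name, score in scorer.score(reference, candidate).items():
    print(f"{name}: F1={score.fmeasure:.3f}")
```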
Related papers
- Chatbots Are Not Reliable Text Annotators [0.0]
ChatGPT is a closed-source product which has major drawbacks with regard to transparency, cost, and data protection.
Recent advances in open-source (OS) large language models (LLMs) offer alternatives which remedy these challenges.
arXiv Detail & Related papers (2023-11-09T22:28:14Z)
- Is ChatGPT Involved in Texts? Measure the Polish Ratio to Detect ChatGPT-Generated Text [48.36706154871577]
We introduce a novel dataset termed HPPT (ChatGPT-polished academic abstracts).
It diverges from extant corpora by comprising pairs of human-written and ChatGPT-polished abstracts instead of purely ChatGPT-generated texts.
We also propose the "Polish Ratio" method, an innovative measure of the degree of modification made by ChatGPT compared to the original human-written text.
arXiv Detail & Related papers (2023-07-21T06:38:37Z)
- Hybrid Long Document Summarization using C2F-FAR and ChatGPT: A Practical Study [1.933681537640272]
ChatGPT is the latest breakthrough in the field of large language models (LLMs).
We propose a hybrid extraction and summarization pipeline for long documents such as business articles and books.
Our results show that the use of ChatGPT is a very promising but not yet mature approach for summarizing long documents.
arXiv Detail & Related papers (2023-06-01T21:58:33Z)
- A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets [19.521390684403293]
We present a thorough evaluation of ChatGPT's performance on diverse academic datasets.
Specifically, we evaluate ChatGPT across 140 tasks and analyze 255K responses it generates on these datasets.
arXiv Detail & Related papers (2023-05-29T12:37:21Z)
- ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large Language Models in Multilingual Learning [70.57126720079971]
Large language models (LLMs) have emerged as one of the most important breakthroughs in natural language processing (NLP).
This paper evaluates ChatGPT on 7 different tasks, covering 37 diverse languages with high, medium, low, and extremely low resources.
Our extensive experimental results demonstrate that ChatGPT performs worse than previous models across different NLP tasks and languages.
arXiv Detail & Related papers (2023-04-12T05:08:52Z)
- Extractive Summarization via ChatGPT for Faithful Summary Generation [12.966825834765814]
This paper presents a thorough evaluation of ChatGPT's performance on extractive summarization.
We find that ChatGPT exhibits inferior extractive summarization performance in terms of ROUGE scores compared to existing supervised systems.
Applying an extract-then-generate pipeline with ChatGPT yields significant performance improvements over abstractive baselines in terms of summary faithfulness.
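As a rough, hypothetical illustration of the extract-then-generate idea mentioned above (not the cited paper's actual pipeline), one might first select salient sentences with a simple frequency heuristic and then prompt an LLM to rewrite only those sentences; the scoring heuristic and prompt wording below are assumptions for illustration.

```python
# Hypothetical extract-then-generate sketch (not the cited paper's pipeline):
# 1) pick salient sentences via a word-frequency heuristic,
# 2) build a prompt asking an LLM to rewrite them without adding new facts.
import re
from collections import Counter


def extract_salient(document: str, k: int = 3) -> list[str]:
    """Return the k sentences with the highest summed word-frequency score."""
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    freq = Counter(re.findall(r"\w+", document.lower()))
    ranked = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"\w+", s.lower())),
        reverse=True,
    )
    return ranked[:k]


def build_generate_prompt(extracted: list[str]) -> str:
    """Compose the generation-step prompt from the extracted sentences."""
    return (
        "Rewrite the following extracted sentences into a concise, faithful "
        "summary without adding any new facts:\n" + " ".join(extracted)
    )

# The resulting prompt would then be sent to an LLM endpoint (left abstract here).
```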
arXiv Detail & Related papers (2023-04-09T08:26:04Z)
- Comparing Abstractive Summaries Generated by ChatGPT to Real Summaries Through Blinded Reviewers and Text Classification Algorithms [0.8339831319589133]
ChatGPT, developed by OpenAI, is a recent addition to the family of language models.
We evaluate the performance of ChatGPT on abstractive summarization by means of automated metrics and blinded human reviewers.
arXiv Detail & Related papers (2023-03-30T18:28:33Z)
- Is ChatGPT A Good Keyphrase Generator? A Preliminary Study [51.863368917344864]
ChatGPT has recently garnered significant attention from the computational linguistics community.
We evaluate its performance in various aspects, including keyphrase generation prompts, keyphrase generation diversity, and long document understanding.
We find that ChatGPT performs exceptionally well on all six candidate prompts, with minor performance differences observed across the datasets.
arXiv Detail & Related papers (2023-03-23T02:50:38Z)
- A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity [79.12003701981092]
We carry out an extensive technical evaluation of ChatGPT using 23 data sets covering 8 different common NLP application tasks.
We evaluate the multitask, multilingual and multi-modal aspects of ChatGPT based on these data sets and a newly designed multimodal dataset.
ChatGPT is 63.41% accurate on average in 10 different reasoning categories under logical reasoning, non-textual reasoning, and commonsense reasoning.
arXiv Detail & Related papers (2023-02-08T12:35:34Z)
- Is ChatGPT a General-Purpose Natural Language Processing Task Solver? [113.22611481694825]
Large language models (LLMs) have demonstrated the ability to perform a variety of natural language processing (NLP) tasks zero-shot.
Recently, the debut of ChatGPT has drawn a great deal of attention from the natural language processing (NLP) community.
It is not yet known whether ChatGPT can serve as a generalist model that can perform many NLP tasks zero-shot.
arXiv Detail & Related papers (2023-02-08T09:44:51Z)
- From Standard Summarization to New Tasks and Beyond: Summarization with Manifold Information [77.89755281215079]
Text summarization is the research area aiming at creating a short and condensed version of the original document.
In real-world applications, most of the data is not in a plain text format.
This paper surveys these new summarization tasks and approaches in real-world applications.
arXiv Detail & Related papers (2020-05-10T14:59:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.