Chinese Intermediate English Learners outdid ChatGPT in deep cohesion:
Evidence from English narrative writing
- URL: http://arxiv.org/abs/2303.11812v1
- Date: Tue, 21 Mar 2023 12:55:54 GMT
- Title: Chinese Intermediate English Learners outdid ChatGPT in deep cohesion:
Evidence from English narrative writing
- Authors: Tongquan Zhou, Siyi Cao, Siruo Zhou, Yao Zhang, Aijing He
- Abstract summary: This study compared the writing performance on a narrative topic by ChatGPT and Chinese intermediate English (CIE) learners.
Data were analyzed in terms of five discourse components using Coh-Metrix.
- Score: 5.747170211018015
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: ChatGPT is a publicly available chatbot that can quickly generate texts on
given topics, but it is unknown whether the chatbot is truly superior to human
writers in all aspects of writing and whether its writing quality can be
markedly improved by follow-up revision commands. Consequently, this
study compared the writing performance on a narrative topic by ChatGPT and
Chinese intermediate English (CIE) learners so as to reveal the chatbot's
advantages and disadvantages in writing. The data were analyzed in terms of five
discourse components using Coh-Metrix (a special instrument for analyzing
language discourses), and the results revealed that ChatGPT performed better
than human writers in narrativity, word concreteness, and referential cohesion,
but worse in syntactic simplicity and deep cohesion in its initial version.
After further revision commands were issued, the resulting version improved in
syntactic simplicity but still lagged far behind CIE learners' writing in deep
cohesion. In addition, correlation analysis of the discourse components
suggested that narrativity was correlated with referential cohesion in both
ChatGPT and human writers, but the correlations varied within each group.
Related papers
- A Linguistic Comparison between Human and ChatGPT-Generated Conversations [9.022590646680095]
The research employs Linguistic Inquiry and Word Count analysis, comparing ChatGPT-generated conversations with human conversations.
Results show greater variability and authenticity in human dialogues, but ChatGPT excels in categories such as social processes, analytical style, cognition, attentional focus, and positive emotional tone.
arXiv Detail & Related papers (2024-01-29T21:43:27Z)
- Emergent AI-Assisted Discourse: Case Study of a Second Language Writer Authoring with ChatGPT [5.8131604120288385]
This study investigates the role of ChatGPT in facilitating academic writing, especially among language learners.
Using a case study approach, this study examines the experiences of Kailing, a doctoral student, who integrates ChatGPT throughout their academic writing process.
arXiv Detail & Related papers (2023-10-17T00:22:10Z)
- Chatbot-supported Thesis Writing: An Autoethnographic Report [0.0]
ChatGPT might be applied to formats that require learners to generate text, such as bachelor theses or student research papers.
ChatGPT is to be valued as a beneficial tool in thesis writing.
However, writing a conclusive thesis still requires the learner's meaningful engagement.
arXiv Detail & Related papers (2023-10-14T09:09:26Z)
- Playing with Words: Comparing the Vocabulary and Lexical Richness of ChatGPT and Humans [3.0059120458540383]
Generative language models such as ChatGPT have triggered a revolution that can transform how text is generated.
Will the use of tools such as ChatGPT increase or reduce the vocabulary used or the lexical richness?
This has implications for vocabulary: words absent from AI-generated content may become progressively less common and could eventually fall out of use.
arXiv Detail & Related papers (2023-08-14T21:19:44Z)
- Is ChatGPT Involved in Texts? Measure the Polish Ratio to Detect ChatGPT-Generated Text [48.36706154871577]
We introduce a novel dataset termed HPPT (ChatGPT-polished academic abstracts)
It diverges from extant corpora by comprising pairs of human-written and ChatGPT-polished abstracts instead of purely ChatGPT-generated texts.
We also propose the "Polish Ratio" method, an innovative measure of the degree of modification made by ChatGPT compared to the original human-written text.
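The summary does not define how the "Polish Ratio" is computed, but the general idea of measuring the degree of modification between a human-written text and its ChatGPT-polished counterpart can be illustrated with a simple sketch. The edit-similarity approach below is an assumption for illustration only, not the paper's actual method:

```python
# Hypothetical sketch of a "polish ratio"-style measure.
# Assumption: degree of modification = 1 - string similarity between
# the original and the polished text. The paper's real definition may differ.
from difflib import SequenceMatcher

def polish_ratio(original: str, polished: str) -> float:
    """Rough degree of modification in [0, 1]; 0.0 means identical texts."""
    # SequenceMatcher.ratio() returns a similarity score in [0, 1],
    # so its complement serves as a crude modification score.
    similarity = SequenceMatcher(None, original, polished).ratio()
    return 1.0 - similarity

human = "The results was significant and confirm our hypothesis."
polished = "The results were significant and confirmed our hypothesis."
print(round(polish_ratio(human, polished), 3))
```

Lightly polished text yields a score near 0, while heavily rewritten text approaches 1, which is the intuition behind using such a measure to separate polished from fully machine-generated text.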
arXiv Detail & Related papers (2023-07-21T06:38:37Z)
- Uncovering the Potential of ChatGPT for Discourse Analysis in Dialogue: An Empirical Study [51.079100495163736]
This paper systematically inspects ChatGPT's performance in two discourse analysis tasks: topic segmentation and discourse parsing.
ChatGPT demonstrates proficiency in identifying topic structures in general-domain conversations yet struggles considerably in specific-domain conversations.
Our deeper investigation indicates that ChatGPT can give more reasonable topic structures than human annotations but only linearly parses the hierarchical rhetorical structures.
arXiv Detail & Related papers (2023-05-15T07:14:41Z) - ChatGPT Evaluation on Sentence Level Relations: A Focus on Temporal,
Causal, and Discourse Relations [52.26802326949116]
We quantitatively evaluate the performance of ChatGPT, an interactive large language model, on inter-sentential relations.
ChatGPT exhibits exceptional proficiency in detecting and reasoning about causal relations.
It is capable of identifying the majority of discourse relations with existing explicit discourse connectives, but the implicit discourse relation remains a formidable challenge.
arXiv Detail & Related papers (2023-04-28T13:14:36Z) - AI, write an essay for me: A large-scale comparison of human-written
versus ChatGPT-generated essays [66.36541161082856]
ChatGPT and similar generative AI models have attracted hundreds of millions of users.
This study compares human-written versus ChatGPT-generated argumentative student essays.
arXiv Detail & Related papers (2023-04-24T12:58:28Z) - ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large
Language Models in Multilingual Learning [70.57126720079971]
Large language models (LLMs) have emerged as among the most important breakthroughs in natural language processing (NLP).
This paper evaluates ChatGPT on 7 different tasks, covering 37 diverse languages with high, medium, low, and extremely low resources.
Compared to the performance of previous models, our extensive experimental results demonstrate that ChatGPT performs worse across different NLP tasks and languages.
arXiv Detail & Related papers (2023-04-12T05:08:52Z) - Unsupervised Summarization for Chat Logs with Topic-Oriented Ranking and
Context-Aware Auto-Encoders [59.038157066874255]
We propose a novel framework called RankAE to perform chat summarization without employing manually labeled data.
RankAE consists of a topic-oriented ranking strategy that selects topic utterances according to centrality and diversity simultaneously.
A denoising auto-encoder is designed to generate succinct but context-informative summaries based on the selected utterances.
arXiv Detail & Related papers (2020-12-14T07:31:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.