Modifying AI, Enhancing Essays: How Active Engagement with Generative AI Boosts Writing Quality
- URL: http://arxiv.org/abs/2412.07200v1
- Date: Tue, 10 Dec 2024 05:32:57 GMT
- Title: Modifying AI, Enhancing Essays: How Active Engagement with Generative AI Boosts Writing Quality
- Authors: Kaixun Yang, Mladen Raković, Zhiping Liang, Lixiang Yan, Zijie Zeng, Yizhou Fan, Dragan Gašević, Guanliang Chen
- Abstract summary: Students are increasingly relying on Generative AI (GAI) to support their writing.
This study aimed to help teachers better assess and support student learning in GAI-assisted writing.
- Score: 4.517077427559346
- Abstract: Students are increasingly relying on Generative AI (GAI) to support their writing-a key pedagogical practice in education. In GAI-assisted writing, students can delegate core cognitive tasks (e.g., generating ideas and turning them into sentences) to GAI while still producing high-quality essays. This creates new challenges for teachers in assessing and supporting student learning, as they often lack insight into whether students are engaging in meaningful cognitive processes during writing or how much of the essay's quality can be attributed to those processes. This study aimed to help teachers better assess and support student learning in GAI-assisted writing by examining how different writing behaviors, especially those indicative of meaningful learning versus those that are not, impact essay quality. Using a dataset of 1,445 GAI-assisted writing sessions, we applied the cutting-edge method, X-Learner, to quantify the causal impact of three GAI-assisted writing behavioral patterns (i.e., seeking suggestions but not accepting them, seeking suggestions and accepting them as they are, and seeking suggestions and accepting them with modification) on four measures of essay quality (i.e., lexical sophistication, syntactic complexity, text cohesion, and linguistic bias). Our analysis showed that writers who frequently modified GAI-generated text-suggesting active engagement in higher-order cognitive processes-consistently improved the quality of their essays in terms of lexical sophistication, syntactic complexity, and text cohesion. In contrast, those who often accepted GAI-generated text without changes, primarily engaging in lower-order processes, saw a decrease in essay quality. Additionally, while human writers tend to introduce linguistic bias when writing independently, incorporating GAI-generated text-even without modification-can help mitigate this bias.
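The X-Learner named in the abstract is a meta-learner for estimating conditional average treatment effects: it fits outcome models per group, imputes individual-level effects, regresses on those imputations, and blends the two effect models by propensity. The paper does not publish its implementation, so the following is only a minimal sketch of the generic X-Learner procedure (Künzel et al.), using plain least-squares base learners and synthetic data; the variable names (`X` for session features, `t` for a binary behavioral pattern such as frequent modification, `y` for an essay-quality score) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fit_linear(X, y):
    # Least-squares linear model with an intercept term.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda Xn: np.hstack([Xn, np.ones((len(Xn), 1))]) @ w

def x_learner(X, y, t):
    """Estimate conditional average treatment effects with the X-Learner."""
    X1, y1 = X[t == 1], y[t == 1]   # "treated" sessions (e.g. frequent modification)
    X0, y0 = X[t == 0], y[t == 0]   # "control" sessions
    # Stage 1: outcome models fit separately on each group.
    mu1 = fit_linear(X1, y1)
    mu0 = fit_linear(X0, y0)
    # Stage 2: imputed individual treatment effects, then regress on them.
    d1 = y1 - mu0(X1)               # treated outcome minus predicted control outcome
    d0 = mu1(X0) - y0               # predicted treated outcome minus control outcome
    tau1 = fit_linear(X1, d1)
    tau0 = fit_linear(X0, d0)
    # Stage 3: blend the two effect models; here the propensity is a constant
    # (the overall treated fraction) for simplicity.
    g = t.mean()
    return lambda Xn: g * tau0(Xn) + (1 - g) * tau1(Xn)

# Synthetic demo: outcome is linear in features, with a true effect of +2.0.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
t = rng.integers(0, 2, size=500)
y = X @ np.array([1.0, -0.5, 0.2]) + 2.0 * t + rng.normal(scale=0.1, size=500)
tau = x_learner(X, y, t)
print(round(float(tau(X).mean()), 1))
```

On this synthetic data the estimated average effect recovers the true value of 2.0; in the study, the analogous estimate is what quantifies how much a behavioral pattern causally shifts an essay-quality measure.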
Related papers
- "It was 80% me, 20% AI": Seeking Authenticity in Co-Writing with Large Language Models [97.22914355737676]
We examine whether and how writers want to preserve their authentic voice when co-writing with AI tools.
Our findings illuminate conceptions of authenticity in human-AI co-creation.
Readers' responses showed less concern about human-AI co-writing.
arXiv Detail & Related papers (2024-11-20T04:42:32Z) - How Does the Disclosure of AI Assistance Affect the Perceptions of Writing? [29.068596156140913]
We study whether and how the disclosure of the level and type of AI assistance in the writing process would affect people's perceptions of the writing.
Our results suggest that disclosing the AI assistance in the writing process, especially if AI has provided assistance in generating new content, decreases the average quality ratings.
arXiv Detail & Related papers (2024-10-06T16:45:33Z) - Inclusivity in Large Language Models: Personality Traits and Gender Bias in Scientific Abstracts [49.97673761305336]
We evaluate three large language models (LLMs) for their alignment with human narrative styles and potential gender biases.
Our findings indicate that, while these models generally produce text closely resembling human-authored content, variations in stylistic features suggest significant gender biases.
arXiv Detail & Related papers (2024-06-27T19:26:11Z) - A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality [12.187586364960758]
We present a German corpus of 1,320 essays from school students of two age groups.
Each essay has been manually annotated for argumentative structure and quality on multiple levels of granularity.
We propose baseline approaches to argument mining and essay scoring, and we analyze interactions between both tasks.
arXiv Detail & Related papers (2024-04-03T07:31:53Z) - Automatic and Human-AI Interactive Text Generation [27.05024520190722]
This tutorial aims to provide an overview of the state-of-the-art natural language generation research.
Text-to-text generation tasks are more constrained in terms of semantic consistency and targeted language styles.
arXiv Detail & Related papers (2023-10-05T20:26:15Z) - Automated Essay Scoring in Argumentative Writing: DeBERTeachingAssistant [0.0]
We present a transformer-based architecture capable of achieving above-human accuracy in annotating argumentative writing discourse elements.
We expand on planned future work investigating the explainability of our model so that actionable feedback can be offered to the student.
arXiv Detail & Related papers (2023-07-09T23:02:19Z) - To Revise or Not to Revise: Learning to Detect Improvable Claims for
Argumentative Writing Support [20.905660642919052]
We explore the main challenges to identifying argumentative claims in need of specific revisions.
We propose a new sampling strategy based on revision distance.
We provide evidence that using contextual information and domain knowledge can further improve prediction results.
arXiv Detail & Related papers (2023-05-26T10:19:54Z) - AI, write an essay for me: A large-scale comparison of human-written
versus ChatGPT-generated essays [66.36541161082856]
ChatGPT and similar generative AI models have attracted hundreds of millions of users.
This study compares human-written versus ChatGPT-generated argumentative student essays.
arXiv Detail & Related papers (2023-04-24T12:58:28Z) - A Survey on Retrieval-Augmented Text Generation [53.04991859796971]
Retrieval-augmented text generation has remarkable advantages and has achieved state-of-the-art performance in many NLP tasks.
It first highlights the generic paradigm of retrieval-augmented generation and then reviews notable approaches for different tasks.
arXiv Detail & Related papers (2022-02-02T16:18:41Z) - Compression, Transduction, and Creation: A Unified Framework for
Evaluating Natural Language Generation [85.32991360774447]
Natural language generation (NLG) spans a broad range of tasks, each of which serves for specific objectives.
We propose a unifying perspective based on the nature of information change in NLG tasks.
We develop a family of interpretable metrics that are suitable for evaluating key aspects of different NLG tasks.
arXiv Detail & Related papers (2021-09-14T01:00:42Z) - TextGAIL: Generative Adversarial Imitation Learning for Text Generation [68.3579946817937]
We propose a generative adversarial imitation learning framework for text generation that uses large pre-trained language models to provide more reliable reward guidance.
Our approach uses a contrastive discriminator and proximal policy optimization (PPO) to stabilize and improve text generation performance.
arXiv Detail & Related papers (2020-04-07T00:24:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.