Assessing prompting frameworks for enhancing literature reviews among university students using ChatGPT
- URL: http://arxiv.org/abs/2509.01128v2
- Date: Mon, 08 Sep 2025 03:39:46 GMT
- Title: Assessing prompting frameworks for enhancing literature reviews among university students using ChatGPT
- Authors: Aminul Islam, Mukta Bansal, Lena Felix Stephanie, Poernomo Gunawan, Pui Tze Sian, Sabrina Luk, Eunice Tan, Hortense Le Ferrand
- Abstract summary: Writing literature reviews is a common component of university curricula, yet it often poses challenges for students. Since generative artificial intelligence (GenAI) tools have been made publicly accessible, students have been employing them for academic writing tasks. This study explores how university students use one of the most popular GenAI tools, ChatGPT, to write literature reviews.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Writing literature reviews is a common component of university curricula, yet it often poses challenges for students. Since generative artificial intelligence (GenAI) tools have been made publicly accessible, students have been employing them for their academic writing tasks. However, there is limited evidence of structured training on how to effectively use these GenAI tools to support students in writing literature reviews. In this study, we explore how university students use one of the most popular GenAI tools, ChatGPT, to write literature reviews and how prompting frameworks can enhance their output. To this aim, prompts and literature reviews written by a group of university students were collected before and after they had been introduced to three prompting frameworks, namely CO-STAR, POSE, and Sandwich. The results indicate that after being exposed to these prompting frameworks, the students demonstrated improved prompting behaviour, resulting in more effective prompts and higher quality literature reviews. However, it was also found that the students did not fully utilise all the elements in the prompting frameworks, and aspects such as originality, critical analysis, and depth in their reviews remain areas for improvement. The study, therefore, raises important questions about the significance of utilising prompting frameworks in their entirety to maximise the quality of outcomes, as well as the extent of prior writing experience students should have before leveraging GenAI in the process of writing literature reviews. These findings are of interest for educators considering the integration of GenAI into academic writing tasks such as literature reviews or evaluating whether to permit students to use these tools.
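The abstract names three prompting frameworks (CO-STAR, POSE, and Sandwich) without spelling them out. As an illustration, CO-STAR is commonly expanded as Context, Objective, Style, Tone, Audience, and Response format; the sketch below assembles a literature-review prompt from those six fields. The field contents are illustrative examples, not material from the study.

```python
# Minimal sketch of a CO-STAR-structured prompt (Context, Objective, Style,
# Tone, Audience, Response format). The example values are hypothetical.

def costar_prompt(context, objective, style, tone, audience, response):
    """Assemble the six CO-STAR elements into a single prompt string."""
    parts = {
        "Context": context,
        "Objective": objective,
        "Style": style,
        "Tone": tone,
        "Audience": audience,
        "Response": response,
    }
    return "\n".join(f"# {name}\n{text}" for name, text in parts.items())

prompt = costar_prompt(
    context="I am an undergraduate writing a literature review on perovskite solar cells.",
    objective="Summarise the main synthesis routes reported since 2020 and contrast their trade-offs.",
    style="Formal academic prose with in-text citations.",
    tone="Neutral and critical.",
    audience="Materials-science lecturers assessing the review.",
    response="About 500 words in three paragraphs, followed by a reference list.",
)
print(prompt)
```

Structuring the prompt this way makes each element easy to audit, which matters given the study's finding that students tended not to use all elements of a framework.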
Related papers
- Exposía: Academic Writing Assessment of Exposés and Peer Feedback [56.428320613219306]
We present Exposía, the first public dataset that connects writing and feedback assessment in higher education. We use Exposía to benchmark state-of-the-art open-source large language models (LLMs) for two tasks: automated scoring of (1) the proposals and (2) the student reviews.
arXiv Detail & Related papers (2026-01-10T11:33:26Z) - Examining Student Interactions with a Pedagogical AI-Assistant for Essay Writing and their Impact on Students Writing Quality [4.112932467662682]
The dynamic nature of interactions between students and GenAI, as well as their relationship to writing quality, remains underexplored. We evaluated a GenAI-driven essay-writing assistant (EWA) designed to support higher education students in argumentative writing.
arXiv Detail & Related papers (2025-12-09T13:34:33Z) - Do Students Write Better Post-AI Support? Effects of Generative AI Literacy and Chatbot Interaction Strategies on Multimodal Academic Writing [0.0]
Academic writing increasingly involves multimodal tasks requiring students to integrate visual information and textual arguments. While generative AI (GenAI) tools, like ChatGPT, offer new pathways for supporting academic writing, little is known about how students' GenAI literacy influences their independent multimodal writing skills. This study examined 79 higher education students' multimodal academic writing performance using a comparative research design.
arXiv Detail & Related papers (2025-07-06T14:01:06Z) - XtraGPT: Context-Aware and Controllable Academic Paper Revision [43.263488839387584]
We propose a human-AI collaboration framework for academic paper revision centered on criteria-guided intent alignment and context-aware modeling. We instantiate the framework in XtraGPT, the first suite of open-source LLMs for context-aware, instruction-guided writing assistance.
arXiv Detail & Related papers (2025-05-16T15:02:19Z) - Modifying AI, Enhancing Essays: How Active Engagement with Generative AI Boosts Writing Quality [4.517077427559346]
Students are increasingly relying on Generative AI (GAI) to support their writing. This study aimed to help teachers better assess and support student learning in GAI-assisted writing.
arXiv Detail & Related papers (2024-12-10T05:32:57Z) - How Novice Programmers Use and Experience ChatGPT when Solving Programming Exercises in an Introductory Course [0.0]
This research paper contributes to the computing education research community's understanding of Generative AI (GenAI) in the context of introductory programming.
This study is guided by the following research questions:.
What do students report on their use pattern of ChatGPT in the context of introductory programming exercises?
How do students perceive ChatGPT in the context of introductory programming exercises?
arXiv Detail & Related papers (2024-07-30T12:55:42Z) - A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [55.33653554387953]
Pattern Analysis and Machine Intelligence (PAMI) has led to numerous literature reviews aimed at collecting fragmented information. This paper presents a thorough analysis of these literature reviews within the PAMI field. We try to address three core research questions: (1) What are the prevalent structural and statistical characteristics of PAMI literature reviews; (2) What strategies can researchers employ to efficiently navigate the growing corpus of reviews; and (3) What are the advantages and limitations of AI-generated reviews compared to human-authored ones.
arXiv Detail & Related papers (2024-02-20T11:28:50Z) - PaperCard for Reporting Machine Assistance in Academic Writing [48.33722012818687]
ChatGPT, a question-answering system released by OpenAI in November 2022, has demonstrated a range of capabilities that could be utilised in producing academic papers.
This raises critical questions surrounding the concept of authorship in academia.
We propose a framework we name "PaperCard", a documentation for human authors to transparently declare the use of AI in their writing process.
arXiv Detail & Related papers (2023-10-07T14:28:04Z) - PapagAI:Automated Feedback for Reflective Essays [48.4434976446053]
We present the first open-source automated feedback tool based on didactic theory and implemented as a hybrid AI system.
The main objective of our work is to enable better learning outcomes for students and to complement the teaching activities of lecturers.
arXiv Detail & Related papers (2023-07-10T11:05:51Z) - Exploring EFL students' prompt engineering in human-AI story writing: an Activity Theory perspective [4.0109641418513355]
This study applies Activity Theory to investigate how English as a foreign language (EFL) students prompt generative artificial intelligence (AI) tools during short story writing.
The study collected and analyzed the students' generative-AI tools, short stories, and written reflections on their conditions or purposes for prompting.
arXiv Detail & Related papers (2023-06-01T14:52:28Z) - Exploring User Perspectives on ChatGPT: Applications, Perceptions, and Implications for AI-Integrated Education [40.38809129759498]
ChatGPT is most commonly used in the domains of higher education, K-12 education, and practical skills training.
On one hand, some users view it as a transformative tool capable of amplifying student self-efficacy and learning motivation.
On the other hand, there is a degree of apprehension among concerned users.
arXiv Detail & Related papers (2023-05-22T15:13:14Z) - Perception, performance, and detectability of conversational artificial intelligence across 32 university courses [15.642614735026106]
We compare the performance of ChatGPT against students on 32 university-level courses.
We find that ChatGPT's performance is comparable, if not superior, to that of students in many courses.
We find an emerging consensus among students to use the tool, and among educators to treat this as plagiarism.
arXiv Detail & Related papers (2023-05-07T10:37:51Z) - GPTScore: Evaluate as You Desire [40.111346987131974]
This paper proposes a novel evaluation framework, GPTScore, which utilizes the emergent abilities (e.g., zero-shot instruction) from generative pre-trained models to score generated texts.
Experimental results on four text generation tasks, 22 evaluation aspects, and corresponding 37 datasets demonstrate that GPTScore can effectively allow us to achieve what one desires to evaluate for texts simply by natural language instructions.
arXiv Detail & Related papers (2023-02-08T16:17:29Z)
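GPTScore's core idea, per the abstract above, is to score a candidate text by how probable a generative model finds it when conditioned on a natural-language evaluation instruction. The sketch below illustrates that idea as a mean conditional log-probability; `token_logprob` is a toy stand-in for a real model's conditional probabilities, not an actual model API.

```python
import math

def token_logprob(token, instruction):
    """Toy stand-in for a generative model's conditional token probability:
    tokens that echo the instruction's vocabulary get higher probability."""
    return math.log(0.5) if token in instruction.split() else math.log(0.1)

def gptscore(text, instruction):
    """GPTScore-style estimate: mean log-probability of the text's tokens
    conditioned on the evaluation instruction (higher means better)."""
    tokens = text.split()
    return sum(token_logprob(t, instruction) for t in tokens) / len(tokens)

instruction = "Generate a fluent summary about solar cells for the text below:"
score_good = gptscore("a fluent summary about solar cells", instruction)
score_bad = gptscore("cells solar about summary gibberish", instruction)
```

Because the score is just a likelihood under an instruction, changing the instruction (e.g. asking about fluency versus factual consistency) changes which aspect of quality is measured, which is how the paper covers 22 evaluation aspects with one mechanism.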
This list is automatically generated from the titles and abstracts of the papers in this site.