Impact of Guidance and Interaction Strategies for LLM Use on Learner Performance and Perception
- URL: http://arxiv.org/abs/2310.13712v3
- Date: Tue, 20 Aug 2024 02:31:25 GMT
- Title: Impact of Guidance and Interaction Strategies for LLM Use on Learner Performance and Perception
- Authors: Harsh Kumar, Ilya Musabirov, Mohi Reza, Jiakai Shi, Xinyuan Wang, Joseph Jay Williams, Anastasia Kuzminykh, Michael Liut
- Abstract summary: Large language models (LLMs) offer a promising avenue, with increasing research exploring their educational utility.
Our work highlights the role that teachers can play in shaping LLM-supported learning environments.
- Score: 19.335003380399527
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalized chatbot-based teaching assistants can be crucial in addressing increasing classroom sizes, especially where direct teacher presence is limited. Large language models (LLMs) offer a promising avenue, with increasing research exploring their educational utility. However, the challenge lies not only in establishing the efficacy of LLMs but also in discerning the nuances of interaction between learners and these models, which impact learners' engagement and results. We conducted a formative study in an undergraduate computer science classroom (N=145) and a controlled experiment on Prolific (N=356) to explore the impact of four pedagogically informed guidance strategies on the learners' performance, confidence and trust in LLMs. Direct LLM answers marginally improved performance, while refining student solutions fostered trust. Structured guidance reduced random queries as well as instances of students copy-pasting assignment questions to the LLM. Our work highlights the role that teachers can play in shaping LLM-supported learning environments.
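As an illustration only: the guidance strategies named in the abstract (direct answers, refining student solutions, structured guidance) could plausibly be realized as teacher-configured system prompts for an LLM teaching assistant. The Python sketch below is a hypothetical reconstruction under that assumption, not the paper's implementation; the prompt wording, the `GUIDANCE_STRATEGIES` names, and the `build_messages` helper are all invented for illustration.

```python
# Hypothetical sketch: encoding teacher-chosen guidance strategies as
# strategy-specific system prompts for an LLM teaching assistant.
# The strategy wording below is illustrative, not taken from the paper.

GUIDANCE_STRATEGIES = {
    "direct_answer": (
        "Answer the student's question directly and concisely."
    ),
    "refine_solution": (
        "Do not give the final answer. Review the student's attempted "
        "solution, point out specific errors, and suggest how to improve it."
    ),
    "structured_guidance": (
        "Guide the student step by step with hints and leading questions; "
        "never reveal the complete solution."
    ),
}


def build_messages(strategy: str, student_query: str, attempt: str = "") -> list[dict]:
    """Assemble a chat transcript whose system prompt enforces the chosen strategy."""
    system_prompt = GUIDANCE_STRATEGIES[strategy]
    user_content = (
        student_query if not attempt else f"{student_query}\n\nMy attempt:\n{attempt}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_content},
    ]


if __name__ == "__main__":
    # Example: the "refine_solution" condition, where the LLM critiques
    # a student's attempt instead of handing over the answer.
    msgs = build_messages(
        "refine_solution",
        "Why does my loop never terminate?",
        attempt="i = 0\nwhile i < 10:\n    print(i)",
    )
    for m in msgs:
        print(m["role"], ":", m["content"])
```

Swapping the strategy key would correspond, loosely, to switching between the teacher-defined conditions compared in the study; the resulting message list would then be passed to whatever chat-completion API the deployment uses.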
Related papers
- From Selection to Generation: A Survey of LLM-based Active Learning [153.8110509961261]
Large Language Models (LLMs) have been employed for generating entirely new data instances and providing more cost-effective annotations.
This survey aims to serve as an up-to-date resource for researchers and practitioners seeking to gain an intuitive understanding of LLM-based AL techniques.
arXiv Detail & Related papers (2025-02-17T12:58:17Z)
- Position: LLMs Can be Good Tutors in Foreign Language Education [87.88557755407815]
We argue that large language models (LLMs) have the potential to serve as effective tutors in foreign language education (FLE).
Specifically, LLMs can play three critical roles: (1) as data enhancers, improving the creation of learning materials or serving as student simulations; (2) as task predictors, supporting learner assessment or optimizing learning pathways; and (3) as agents, enabling personalized and inclusive education.
arXiv Detail & Related papers (2025-02-08T06:48:49Z)
- Exploring Knowledge Tracing in Tutor-Student Dialogues using LLMs [49.18567856499736]
We investigate whether large language models (LLMs) can be supportive of open-ended dialogue tutoring.
We apply a range of knowledge tracing (KT) methods on the resulting labeled data to track student knowledge levels over an entire dialogue.
We conduct experiments on two tutoring dialogue datasets, and show that a novel yet simple LLM-based method, LLMKT, significantly outperforms existing KT methods in predicting student response correctness in dialogues.
arXiv Detail & Related papers (2024-09-24T22:31:39Z)
- AI Meets the Classroom: When Does ChatGPT Harm Learning? [0.0]
We study how generative AI and specifically large language models (LLMs) impact learning in coding classes.
We show across three studies that LLM usage can have positive and negative effects on learning outcomes.
arXiv Detail & Related papers (2024-08-29T17:07:46Z)
- I don't trust you (anymore)! -- The effect of students' LLM use on Lecturer-Student-Trust in Higher Education [0.0]
The availability of Large Language Models (LLMs) in platforms like OpenAI's ChatGPT has led to their rapid adoption among university students.
This study addresses the research question: How does the use of LLMs by students impact Informational and Procedural Justice, influencing Team Trust and Expected Team Performance?
Our findings indicate that lecturers are less concerned about the fairness of LLM use per se but are more focused on the transparency of student utilization.
arXiv Detail & Related papers (2024-06-21T05:35:57Z)
- Supporting Self-Reflection at Scale with Large Language Models: Insights from Randomized Field Experiments in Classrooms [7.550701021850185]
We investigate the potential of Large Language Models (LLMs) to help students engage in post-lesson reflection.
We conducted two randomized field experiments in undergraduate computer science courses.
arXiv Detail & Related papers (2024-06-01T02:41:59Z)
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing [56.75702900542643]
We introduce AlphaLLM for the self-improvement of Large Language Models.
It integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop.
Our experimental results show that AlphaLLM significantly enhances the performance of LLMs without additional annotations.
arXiv Detail & Related papers (2024-04-18T15:21:34Z)
- Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs).
This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
arXiv Detail & Related papers (2024-02-13T20:51:58Z)
- Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation [109.8527403904657]
We show that large language models (LLMs) possess unwavering confidence in their knowledge and cannot handle the conflict between internal and external knowledge well.
Retrieval augmentation proves to be an effective approach in enhancing LLMs' awareness of knowledge boundaries.
We propose a simple method to dynamically utilize supporting documents with our judgement strategy.
arXiv Detail & Related papers (2023-07-20T16:46:10Z)
- Learning from Mistakes via Cooperative Study Assistant for Large Language Models [17.318591492264023]
Large language models (LLMs) have demonstrated their potential to refine their generation based on their own feedback.
We propose Study Assistant for Large LAnguage Model (SALAM), a novel framework with an auxiliary agent to assist the main LLM in learning from mistakes.
arXiv Detail & Related papers (2023-05-23T08:51:08Z)