Impact of Guidance and Interaction Strategies for LLM Use on Learner Performance and Perception
- URL: http://arxiv.org/abs/2310.13712v3
- Date: Tue, 20 Aug 2024 02:31:25 GMT
- Title: Impact of Guidance and Interaction Strategies for LLM Use on Learner Performance and Perception
- Authors: Harsh Kumar, Ilya Musabirov, Mohi Reza, Jiakai Shi, Xinyuan Wang, Joseph Jay Williams, Anastasia Kuzminykh, Michael Liut
- Abstract summary: Large language models (LLMs) offer a promising avenue, with increasing research exploring their educational utility.
Our work highlights the role that teachers can play in shaping LLM-supported learning environments.
- Score: 19.335003380399527
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalized chatbot-based teaching assistants can be crucial in addressing increasing classroom sizes, especially where direct teacher presence is limited. Large language models (LLMs) offer a promising avenue, with increasing research exploring their educational utility. However, the challenge lies not only in establishing the efficacy of LLMs but also in discerning the nuances of interaction between learners and these models, which impact learners' engagement and results. We conducted a formative study in an undergraduate computer science classroom (N=145) and a controlled experiment on Prolific (N=356) to explore the impact of four pedagogically informed guidance strategies on the learners' performance, confidence and trust in LLMs. Direct LLM answers marginally improved performance, while refining student solutions fostered trust. Structured guidance reduced random queries as well as instances of students copy-pasting assignment questions to the LLM. Our work highlights the role that teachers can play in shaping LLM-supported learning environments.
Related papers
- Exploring Knowledge Tracing in Tutor-Student Dialogues [53.52699766206808]
We present a first attempt at performing knowledge tracing (KT) in tutor-student dialogues.
We propose methods to identify the knowledge components/skills involved in each dialogue turn.
We then apply a range of KT methods on the resulting labeled data to track student knowledge levels over an entire dialogue.
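Tracking student knowledge turn by turn, as described above, is commonly done with Bayesian Knowledge Tracing (BKT). As an illustrative sketch only (not the paper's actual method, and with hypothetical parameter values), a single BKT update for one skill can be written as:

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """One Bayesian Knowledge Tracing step for a single skill.

    p_know:  prior probability the student has mastered the skill
    correct: whether the student answered this turn correctly
    p_slip:  probability of a wrong answer despite mastery
    p_guess: probability of a right answer without mastery
    p_learn: probability of learning the skill after the turn
    """
    if correct:
        posterior = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        posterior = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Account for the chance of learning during this turn.
    return posterior + (1 - posterior) * p_learn

# Update the mastery estimate over a short sequence of dialogue turns.
p = 0.3
for outcome in [True, True, False, True]:
    p = bkt_update(p, outcome)
```

The same update can be run per knowledge component once each dialogue turn is labeled with the skills it involves.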
arXiv Detail & Related papers (2024-09-24T22:31:39Z) - AI Meets the Classroom: When Does ChatGPT Harm Learning? [0.0]
We study how generative AI and specifically large language models (LLMs) impact learning in coding classes.
We show across three studies that LLM usage can have positive and negative effects on learning outcomes.
arXiv Detail & Related papers (2024-08-29T17:07:46Z) - Dr.Academy: A Benchmark for Evaluating Questioning Capability in Education for Large Language Models [30.759154473275043]
This study introduces a benchmark to evaluate the questioning capability in education, as a teacher, of large language models (LLMs).
We apply four metrics, including relevance, coverage, representativeness, and consistency, to evaluate the educational quality of LLMs' outputs.
Our results indicate that GPT-4 demonstrates significant potential in teaching general, humanities, and science courses; Claude2 appears more apt as an interdisciplinary teacher.
arXiv Detail & Related papers (2024-08-20T15:36:30Z) - I don't trust you (anymore)! -- The effect of students' LLM use on Lecturer-Student-Trust in Higher Education [0.0]
The availability of Large Language Models (LLMs) in platforms like OpenAI's ChatGPT has led to their rapid adoption among university students.
This study addresses the research question: How does the use of LLMs by students impact Informational and Procedural Justice, influencing Team Trust and Expected Team Performance?
Our findings indicate that lecturers are less concerned about the fairness of LLM use per se but are more focused on the transparency of student utilization.
arXiv Detail & Related papers (2024-06-21T05:35:57Z) - Supporting Self-Reflection at Scale with Large Language Models: Insights from Randomized Field Experiments in Classrooms [7.550701021850185]
We investigate the potential of Large Language Models (LLMs) to help students engage in post-lesson reflection.
We conducted two randomized field experiments in undergraduate computer science courses.
arXiv Detail & Related papers (2024-06-01T02:41:59Z) - Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing [56.75702900542643]
We introduce AlphaLLM for the self-improvements of Large Language Models.
It integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop.
Our experimental results show that AlphaLLM significantly enhances the performance of LLMs without additional annotations.
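A search loop that repeatedly samples candidate outputs and uses a critic's scores to decide where to spend further evaluations is the core idea behind MCTS-style self-improvement. The following is a minimal bandit-style sketch of that selection step, not AlphaLLM's actual implementation; the candidate names and the stubbed `critic` scores are hypothetical stand-ins for LLM-generated drafts and an LLM critic:

```python
import math
import random

random.seed(0)

def ucb(value_sum, visits, parent_visits, c=1.4):
    """Upper Confidence Bound: balances exploiting high-scoring
    candidates against exploring rarely tried ones."""
    if visits == 0:
        return float("inf")  # force every candidate to be tried once
    return value_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)

def critic(candidate):
    # Stand-in for an LLM critic scoring a candidate answer in [0, 1].
    base = {"draft_a": 0.2, "draft_b": 0.9, "draft_c": 0.5}[candidate]
    return base + random.uniform(-0.05, 0.05)

# Per-candidate statistics: [sum of scores, number of visits].
stats = {c: [0.0, 0] for c in ["draft_a", "draft_b", "draft_c"]}
for t in range(1, 60):
    pick = max(stats, key=lambda c: ucb(stats[c][0], stats[c][1], t))
    stats[pick][0] += critic(pick)
    stats[pick][1] += 1

# Select the candidate with the highest mean score.
best = max(stats, key=lambda c: stats[c][0] / max(stats[c][1], 1))
```

In a full MCTS loop this selection rule is applied recursively down a tree of partial generations, and the winning trajectories are fed back as training signal, which is what closes the self-improvement loop.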
arXiv Detail & Related papers (2024-04-18T15:21:34Z) - Rethinking the Roles of Large Language Models in Chinese Grammatical Error Correction [62.409807640887834]
Chinese Grammatical Error Correction (CGEC) aims to correct all potential grammatical errors in the input sentences.
LLMs' performance as correctors on CGEC remains unsatisfactory due to the challenging nature of the task.
We rethink the roles of LLMs in the CGEC task so that they can be better utilized and explored in CGEC.
arXiv Detail & Related papers (2024-02-18T01:40:34Z) - Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs).
This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
arXiv Detail & Related papers (2024-02-13T20:51:58Z) - Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity [61.54815512469125]
This survey addresses the crucial issue of factuality in Large Language Models (LLMs).
As LLMs find applications across diverse domains, the reliability and accuracy of their outputs become vital.
arXiv Detail & Related papers (2023-10-11T14:18:03Z) - Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation [109.8527403904657]
We show that large language models (LLMs) possess unwavering confidence in their knowledge and cannot handle the conflict between internal and external knowledge well.
Retrieval augmentation proves to be an effective approach in enhancing LLMs' awareness of knowledge boundaries.
We propose a simple method to dynamically utilize supporting documents with our judgement strategy.
arXiv Detail & Related papers (2023-07-20T16:46:10Z) - Learning from Mistakes via Cooperative Study Assistant for Large Language Models [17.318591492264023]
Large language models (LLMs) have demonstrated their potential to refine their generation based on their own feedback.
We propose Study Assistant for Large LAnguage Model (SALAM), a novel framework with an auxiliary agent to assist the main LLM in learning from mistakes.
arXiv Detail & Related papers (2023-05-23T08:51:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.