Oversight in Action: Experiences with Instructor-Moderated LLM Responses in an Online Discussion Forum
- URL: http://arxiv.org/abs/2412.09048v1
- Date: Thu, 12 Dec 2024 08:17:33 GMT
- Title: Oversight in Action: Experiences with Instructor-Moderated LLM Responses in an Online Discussion Forum
- Authors: Shuying Qiao, Paul Denny, Nasser Giacaman
- Abstract summary: This paper presents the design, deployment, and evaluation of a `bot' module that is controlled by the instructor.
The bot generates draft responses to student questions, which are reviewed, modified, and approved before release.
We report our experiences using this tool in a 12-week second-year software engineering course on object-oriented programming.
- Score: 2.86800540498016
- Abstract: The integration of large language models (LLMs) into computing education offers many potential benefits to student learning, and several novel pedagogical approaches have been reported in the literature. However, LLMs also present challenges, one of the most commonly cited being that of student over-reliance. This challenge is compounded by the fact that LLMs are always available to provide instant help and solutions to students, which can undermine their ability to independently solve problems and diagnose and resolve errors. Providing instructor oversight of LLM-generated content can mitigate this problem; however, it is often not practical in real-time learning contexts. Online class discussion forums, which are widely used in computing education, present an opportunity for exploring instructor oversight because they operate asynchronously. Unlike real-time interactions, the discussion forum format aligns with the expectation that responses may take time, making oversight not only feasible but also pedagogically appropriate. In this practitioner paper, we present the design, deployment, and evaluation of a `bot' module that is controlled by the instructor, and integrated into an online discussion forum. The bot assists the instructor by generating draft responses to student questions, which are reviewed, modified, and approved before release. Key features include the ability to leverage course materials, access archived discussions, and publish responses anonymously to encourage open participation. We report our experiences using this tool in a 12-week second-year software engineering course on object-oriented programming. Instructor feedback confirmed the tool successfully alleviated workload but highlighted a need for improvement in handling complex, context-dependent queries. We report the features that were viewed as most beneficial, and suggest avenues for future exploration.
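The abstract describes a draft-review-approve pipeline: the bot drafts a reply grounded in course materials, the instructor reviews and edits it, and only approved responses are released (optionally anonymously). The Python sketch below illustrates one way such a pipeline could be wired together; all names here (`generate_draft`, `instructor_review`, `publish`, and the `llm` and `forum` objects) are hypothetical, since the paper does not publish its implementation.

```python
# A minimal sketch of the instructor-moderated workflow described above.
# Hypothetical names throughout; not the authors' actual implementation.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class StudentQuestion:
    question_id: int
    text: str
    course_context: str = ""  # e.g. relevant course materials or archived threads

@dataclass
class DraftResponse:
    question: StudentQuestion
    draft: str
    final_text: Optional[str] = None
    approved: bool = False

def generate_draft(question: StudentQuestion, llm: Callable[[str], str]) -> DraftResponse:
    """Ask the LLM for a draft answer grounded in course context."""
    prompt = (
        "You are assisting an instructor in a software engineering course.\n"
        f"Course context:\n{question.course_context}\n\n"
        f"Student question:\n{question.text}\n\n"
        "Draft a response that guides the student without handing over a complete solution."
    )
    return DraftResponse(question=question, draft=llm(prompt))

def instructor_review(draft: DraftResponse, edited_text: str, approve: bool) -> DraftResponse:
    """The instructor reviews, modifies, and explicitly approves the draft."""
    draft.final_text = edited_text
    draft.approved = approve
    return draft

def publish(draft: DraftResponse, forum, anonymous: bool = True) -> None:
    """Only instructor-approved responses are released to the forum."""
    if not (draft.approved and draft.final_text):
        raise ValueError("Response must be instructor-approved before release.")
    forum.post(draft.question.question_id, draft.final_text, anonymous=anonymous)
```

The key design point, under these assumptions, is that `publish` refuses to post anything the instructor has not explicitly approved, which is what makes the asynchronous forum setting a good fit for oversight.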
Related papers
- Exploring Knowledge Tracing in Tutor-Student Dialogues using LLMs [49.18567856499736]
We investigate whether large language models (LLMs) can be supportive of open-ended dialogue tutoring.
We apply a range of knowledge tracing (KT) methods on the resulting labeled data to track student knowledge levels over an entire dialogue.
We conduct experiments on two tutoring dialogue datasets, and show that a novel yet simple LLM-based method, LLMKT, significantly outperforms existing KT methods in predicting student response correctness in dialogues.
arXiv Detail & Related papers (2024-09-24T22:31:39Z) - "My Grade is Wrong!": A Contestable AI Framework for Interactive Feedback in Evaluating Student Essays [6.810086342993699]
This paper introduces CAELF, a Contestable AI Empowered LLM Framework for automating interactive feedback.
CAELF allows students to query, challenge, and clarify their feedback by integrating a multi-agent system with computational argumentation.
A case study on 500 critical thinking essays with user studies demonstrates that CAELF significantly improves interactive feedback.
arXiv Detail & Related papers (2024-09-11T17:59:01Z) - Learning to Ask: When LLM Agents Meet Unclear Instruction [55.65312637965779]
Large language models (LLMs) can leverage external tools for addressing a range of tasks unattainable through language skills alone.
We evaluate the performance of LLMs tool-use under imperfect instructions, analyze the error patterns, and build a challenging tool-use benchmark called Noisy ToolBench.
We propose a novel framework, Ask-when-Needed (AwN), which prompts LLMs to ask questions to users whenever they encounter obstacles due to unclear instructions.
arXiv Detail & Related papers (2024-08-31T23:06:12Z) - Generating Situated Reflection Triggers about Alternative Solution Paths: A Case Study of Generative AI for Computer-Supported Collaborative Learning [3.2721068185888127]
We present a proof-of-concept application to offer students dynamic and contextualized feedback.
Specifically, we augment an Online Programming Exercise bot for a college-level Cloud Computing course with ChatGPT.
We demonstrate that LLMs can be used to generate highly situated reflection triggers that incorporate details of the collaborative discussion happening in context.
arXiv Detail & Related papers (2024-04-28T17:56:14Z) - Next-Step Hint Generation for Introductory Programming Using Large Language Models [0.8002196839441036]
Large Language Models possess skills such as answering questions, writing essays or solving programming exercises.
This work explores how LLMs can contribute to programming education by supporting students with automated next-step hints.
arXiv Detail & Related papers (2023-12-03T17:51:07Z) - Patterns of Student Help-Seeking When Using a Large Language Model-Powered Programming Assistant [2.5949084781328744]
This study examines students' use of an innovative tool that provides on-demand programming assistance without revealing solutions directly.
We collected more than 2,500 queries submitted by students throughout the term.
We found that most queries requested immediate help with programming assignments, while fewer sought help with related concepts or deeper conceptual understanding.
arXiv Detail & Related papers (2023-10-25T20:36:05Z) - Democratizing Reasoning Ability: Tailored Learning from Large Language Model [97.4921006089966]
We propose a tailored learning approach to distill such reasoning ability to smaller LMs.
We exploit the potential of the LLM as a reasoning teacher by building an interactive multi-round learning paradigm.
To exploit the reasoning potential of the smaller LM, we propose self-reflection learning to motivate the student to learn from self-made mistakes.
arXiv Detail & Related papers (2023-10-20T07:50:10Z) - Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies [104.32199881187607]
Large language models (LLMs) have demonstrated remarkable performance across a wide array of NLP tasks.
A promising approach to rectifying flaws in their outputs is self-correction, where the LLM itself is prompted or guided to fix problems in its own output.
This paper presents a comprehensive review of this emerging class of techniques.
arXiv Detail & Related papers (2023-08-06T18:38:52Z) - A large language model-assisted education tool to provide feedback on open-ended responses [2.624902795082451]
We present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions.
Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement.
arXiv Detail & Related papers (2023-07-25T19:49:55Z) - ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z) - Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey [53.73359052511171]
Reinforcement learning (RL) is a popular paradigm for addressing sequential decision tasks in which the agent has only limited environmental feedback.
We present a framework for curriculum learning (CL) in RL, and use it to survey and classify existing CL methods in terms of their assumptions, capabilities, and goals.
arXiv Detail & Related papers (2020-03-10T20:41:24Z)