Integrating AI Tutors in a Programming Course
- URL: http://arxiv.org/abs/2407.15718v1
- Date: Sun, 14 Jul 2024 00:42:39 GMT
- Title: Integrating AI Tutors in a Programming Course
- Authors: Iris Ma, Alberto Krone-Martins, Cristina Videira Lopes
- Abstract summary: RAGMan is an LLM-powered tutoring system that can support a variety of course-specific and homework-specific AI tutors.
This paper describes the interactions the students had with the AI tutors, the students' feedback, and a comparative grade analysis.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: RAGMan is an LLM-powered tutoring system that can support a variety of course-specific and homework-specific AI tutors. RAGMan leverages Retrieval Augmented Generation (RAG), as well as strict instructions, to ensure the alignment of the AI tutors' responses. By using RAGMan's AI tutors, students receive assistance with their specific homework assignments without directly obtaining solutions, while also having the ability to ask general programming-related questions. RAGMan was deployed as an optional resource in an introductory programming course with an enrollment of 455 students. It was configured as a set of five homework-specific AI tutors. This paper describes the interactions the students had with the AI tutors, the students' feedback, and a comparative grade analysis. Overall, about half of the students engaged with the AI tutors, and the vast majority of the interactions were legitimate homework questions. When students posed questions within the intended scope, the AI tutors delivered accurate responses 98% of the time. Among the students who used the AI tutors, 78% reported that the tutors helped their learning. Beyond the AI tutors' ability to provide valuable suggestions, students reported appreciating them for fostering a safe learning environment free from judgment.
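The abstract describes the pattern (retrieval-augmented generation combined with strict instructions that forbid handing out solutions) but this listing does not include RAGMan's implementation. The following is a minimal Python sketch of that general pattern, not the authors' code: `embed`, `complete`, and the prompt wording are placeholder assumptions standing in for whatever embedding model and LLM a real deployment would use.

```python
import numpy as np

# Placeholder hooks (assumptions, not RAGMan's API): swap in a real embedding
# model and LLM client.
def embed(text: str) -> np.ndarray:
    """Return an embedding vector for `text` (model-specific)."""
    raise NotImplementedError

def complete(system_prompt: str, user_prompt: str) -> str:
    """Call the underlying LLM (provider-specific)."""
    raise NotImplementedError

def build_index(chunks: list[str]) -> list[tuple[str, np.ndarray]]:
    """Index course- and homework-specific documents ahead of time."""
    return [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(index: list[tuple[str, np.ndarray]], question: str, k: int = 3) -> list[str]:
    """Return the k chunks whose embeddings score highest against the question."""
    q = embed(question)
    scored = sorted(index, key=lambda item: float(item[1] @ q), reverse=True)
    return [chunk for chunk, _ in scored[:k]]

# "Strict instructions": the tutor may explain concepts and suggest next steps,
# but must not hand out the assignment's solution.
SYSTEM_PROMPT = (
    "You are a tutor for Homework {hw}. Use only the provided context. "
    "Explain concepts and suggest debugging steps, but never provide the "
    "full solution or complete code for the assignment."
)

def answer(index: list[tuple[str, np.ndarray]], hw: str, question: str) -> str:
    """Ground the LLM's reply in retrieved context plus the strict instructions."""
    context = "\n---\n".join(retrieve(index, question))
    return complete(
        SYSTEM_PROMPT.format(hw=hw),
        f"Context:\n{context}\n\nStudent question:\n{question}",
    )
```

In this sketch the scope restriction lives entirely in the system prompt and the retrieved context; a production system such as the one described would presumably add per-homework document sets and response checks on top of this skeleton.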
Related papers
- A Benchmark for Math Misconceptions: Bridging Gaps in Middle School Algebra with AI-Supported Instruction [0.0]
This study introduces an evaluation benchmark for middle school algebra to be used in artificial intelligence-based educational platforms.
The data set comprises 55 misconceptions about algebra, common errors, and 220 diagnostic examples.
Four out of five educators expressed interest in using the data set with AI to diagnose student misconceptions or train teachers.
arXiv Detail & Related papers (2024-12-04T23:10:29Z) - A Tutorial on Teaching Data Analytics with Generative AI [0.0]
This tutorial addresses the challenge of incorporating large language models (LLMs) in a data analytics class.
It details several new in-class and out-of-class teaching techniques enabled by AI.
arXiv Detail & Related papers (2024-10-25T05:27:48Z) - GPT-4 as a Homework Tutor can Improve Student Engagement and Learning Outcomes [80.60912258178045]
We developed a prompting strategy that enables GPT-4 to conduct interactive homework sessions for high-school students learning English as a second language.
We carried out a Randomized Controlled Trial (RCT) in four high-school classes, replacing traditional homework with GPT-4 homework sessions for the treatment group.
We observed significant improvements in learning outcomes, most notably a greater gain in grammar, as well as in student engagement.
arXiv Detail & Related papers (2024-09-24T11:22:55Z) - A Multi-Year Grey Literature Review on AI-assisted Test Automation [46.97326049485643]
Test Automation (TA) techniques are crucial for quality assurance in software engineering but face limitations.
Given the prevalent use of AI in industry, much of the relevant knowledge is held in grey literature and in the minds of professionals.
This study surveys grey literature to explore how AI is adopted in TA, focusing on the problems it solves, its solutions, and the available tools.
arXiv Detail & Related papers (2024-08-12T15:26:36Z) - Could ChatGPT get an Engineering Degree? Evaluating Higher Education Vulnerability to AI Assistants [176.39275404745098]
We evaluate whether two AI assistants, GPT-3.5 and GPT-4, can adequately answer assessment questions.
GPT-4 answers an average of 65.8% of questions correctly and produces the correct answer under at least one prompting strategy for 85.1% of questions.
Our results call for revising program-level assessment design in higher education in light of advances in generative AI.
arXiv Detail & Related papers (2024-08-07T12:11:49Z) - CourseAssist: Pedagogically Appropriate AI Tutor for Computer Science Education [1.052788652996288]
This poster introduces CourseAssist, a novel LLM-based tutoring system tailored for computer science education.
Unlike generic LLM systems, CourseAssist uses retrieval-augmented generation, user intent classification, and question decomposition to align AI responses with specific course materials and learning objectives.
arXiv Detail & Related papers (2024-05-01T20:43:06Z) - Assigning AI: Seven Approaches for Students, with Prompts [0.0]
This paper examines the transformative role of Large Language Models (LLMs) in education and their potential as learning tools.
The authors propose seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student.
arXiv Detail & Related papers (2023-06-13T03:36:36Z) - UKP-SQuARE: An Interactive Tool for Teaching Question Answering [61.93372227117229]
The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course.
We introduce UKP-SQuARE as a platform for QA education.
Students can run, compare, and analyze various QA models from different perspectives.
arXiv Detail & Related papers (2023-05-31T11:29:04Z) - MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems [74.73881579517055]
We propose a framework to generate such dialogues by pairing human teachers with a Large Language Model prompted to represent common student errors.
We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues.
arXiv Detail & Related papers (2023-05-23T21:44:56Z) - Smart tutor to provide feedback in programming courses [0.0]
We present an AI-based intelligent tutor that answers students' programming questions.
The tool was tested by university students at URJC throughout an entire course.
arXiv Detail & Related papers (2023-01-24T11:00:06Z) - ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
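The ProtoTransformer entry above frames feedback as few-shot classification from a handful of instructor-labeled examples. As a loose, prototypical-network-style illustration of that framing (not the paper's actual architecture), the Python sketch below averages the labeled examples into per-label prototypes and assigns a new submission the label of the nearest prototype; `encode` is a hypothetical stand-in for a learned code encoder.

```python
import numpy as np

def encode(code: str) -> np.ndarray:
    """Placeholder for a learned code encoder (e.g., a transformer); returns an embedding."""
    raise NotImplementedError

def build_prototypes(support: dict[str, list[str]]) -> dict[str, np.ndarray]:
    """Average the embeddings of the few labeled examples for each feedback label."""
    return {
        label: np.mean([encode(example) for example in examples], axis=0)
        for label, examples in support.items()
    }

def predict_feedback(prototypes: dict[str, np.ndarray], submission: str) -> str:
    """Label a new submission by its nearest prototype in embedding space."""
    z = encode(submission)
    return min(prototypes, key=lambda label: float(np.linalg.norm(z - prototypes[label])))
```

The appeal of this setup is that adapting to a new programming question only requires a few labeled examples to form new prototypes, with no retraining of the encoder.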