CourseAssist: Pedagogically Appropriate AI Tutor for Computer Science Education
- URL: http://arxiv.org/abs/2407.10246v3
- Date: Mon, 29 Jul 2024 23:01:18 GMT
- Title: CourseAssist: Pedagogically Appropriate AI Tutor for Computer Science Education
- Authors: Ty Feng, Sa Liu, Dipak Ghosal
- Abstract summary: This poster introduces CourseAssist, a novel LLM-based tutoring system tailored for computer science education.
Unlike generic LLM systems, CourseAssist uses retrieval-augmented generation, user intent classification, and question decomposition to align AI responses with specific course materials and learning objectives.
- Score: 1.052788652996288
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The growing enrollments in computer science courses and increase in class sizes necessitate scalable, automated tutoring solutions to adequately support student learning. While Large Language Models (LLMs) like GPT-4 have demonstrated potential in assisting students through question-answering, educators express concerns over student overreliance, miscomprehension of generated code, and the risk of inaccurate answers. Rather than banning these tools outright, we advocate for a constructive approach that harnesses the capabilities of AI while mitigating potential risks. This poster introduces CourseAssist, a novel LLM-based tutoring system tailored for computer science education. Unlike generic LLM systems, CourseAssist uses retrieval-augmented generation, user intent classification, and question decomposition to align AI responses with specific course materials and learning objectives, thereby ensuring the pedagogical appropriateness of LLMs in educational settings. We evaluated CourseAssist against a GPT-4 baseline using a dataset of 50 question-answer pairs from a programming languages course, focusing on the criteria of usefulness, accuracy, and pedagogical appropriateness. Evaluation results show that CourseAssist significantly outperforms the baseline, demonstrating its potential to serve as an effective learning assistant. We have also deployed CourseAssist in 6 computer science courses at a large public R1 research university, reaching over 500 students. Interviews with 20 student users show that CourseAssist improves computer science instruction by increasing the accessibility of course-specific tutoring help and shortening the feedback loop on their programming assignments. Future work will include extensive pilot testing at more universities and exploring better collaborative relationships between students, educators, and AI that improve computer science learning experiences.
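The poster does not publish CourseAssist's implementation, but the abstract names a three-stage pipeline: user intent classification, question decomposition, and retrieval-augmented generation over course materials. The following is a minimal sketch of that pipeline under stated assumptions; every function name, prompt, and the `llm` callable are illustrative stand-ins, not the authors' code or API.

```python
"""Illustrative sketch of a CourseAssist-style pipeline (all names are assumptions)."""
from typing import Callable, List

def classify_intent(llm: Callable[[str], str], question: str) -> str:
    # Route the question: e.g., concept help vs. a request for a full solution,
    # which a pedagogically appropriate tutor should redirect rather than answer.
    prompt = (
        "Classify the student's intent as one of "
        "[concept_question, debugging_help, solution_request].\n"
        f"Question: {question}\nIntent:"
    )
    return llm(prompt).strip()

def decompose(llm: Callable[[str], str], question: str) -> List[str]:
    # Break a compound question into simpler sub-questions, one per line.
    prompt = f"Split into minimal sub-questions, one per line:\n{question}"
    return [s for s in llm(prompt).splitlines() if s.strip()]

def retrieve(corpus: List[str], query: str, k: int = 2) -> List[str]:
    # Toy lexical retriever over course materials; a real system
    # would likely use embedding-based similarity search instead.
    scored = sorted(
        corpus,
        key=lambda doc: len(set(doc.lower().split()) & set(query.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(llm: Callable[[str], str], corpus: List[str], question: str) -> str:
    # Gate on intent, then answer each sub-question grounded in retrieved material.
    if classify_intent(llm, question) == "solution_request":
        return "I can't give the full solution, but let's work through it together."
    parts = []
    for sub in decompose(llm, question):
        context = "\n".join(retrieve(corpus, sub))
        parts.append(llm(f"Course material:\n{context}\n\nAnswer: {sub}"))
    return "\n".join(parts)
```

Injecting `llm` as a plain callable keeps the sketch independent of any particular chat-completion API.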
Related papers
- Do Tutors Learn from Equity Training and Can Generative AI Assess It? [2.116573423199236]
We evaluate tutor performance within an online lesson designed to enhance tutors' skills in responding to students in potentially inequitable situations.
We find marginally significant learning gains with increases in tutors' self-reported confidence in their knowledge.
This work makes available a dataset of lesson log data, tutor responses, rubrics for human annotation, and generative AI prompts.
arXiv Detail & Related papers (2024-12-15T17:36:40Z)
- How Good is ChatGPT in Giving Adaptive Guidance Using Knowledge Graphs in E-Learning Environments? [0.8999666725996978]
This study introduces an approach that integrates dynamic knowledge graphs with large language models (LLMs) to offer nuanced student assistance.
Central to this method is the knowledge graph's role in assessing a student's comprehension of topic prerequisites.
Preliminary findings suggest students could benefit from this tiered support, achieving enhanced comprehension and improved task outcomes.
arXiv Detail & Related papers (2024-12-05T04:05:43Z)
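As a rough illustration of the prerequisite-checking idea in the summary above: a graph maps each topic to its prerequisites, and the tier of guidance depends on how many prerequisites the student has already mastered. The topic names, tiers, and data structure below are assumptions, not the paper's implementation.

```python
# Hypothetical prerequisite graph: topic -> prerequisite topics.
PREREQS = {
    "recursion": ["functions", "call_stack"],
    "dynamic_programming": ["recursion", "memoization"],
}

def guidance_tier(topic: str, mastered: set) -> str:
    # Choose how much scaffolding to give based on missing prerequisites.
    missing = [p for p in PREREQS.get(topic, []) if p not in mastered]
    if not missing:
        return "hint-only"          # student is ready; nudge, don't explain
    if len(missing) < len(PREREQS[topic]):
        return "targeted-review"    # review only the missing prerequisites
    return "full-scaffold"          # build up from the prerequisites first

print(guidance_tier("dynamic_programming", {"recursion"}))  # -> targeted-review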
- Could ChatGPT get an Engineering Degree? Evaluating Higher Education Vulnerability to AI Assistants [176.39275404745098]
We evaluate whether two AI assistants, GPT-3.5 and GPT-4, can adequately answer assessment questions.
GPT-4 answers an average of 65.8% of questions correctly and can even produce the correct answer under at least one prompting strategy for 85.1% of questions.
Our results call for revising program-level assessment design in higher education in light of advances in generative AI.
arXiv Detail & Related papers (2024-08-07T12:11:49Z)
- Integrating AI Tutors in a Programming Course [0.0]
RAGMan is an LLM-powered tutoring system that can support a variety of course-specific and homework-specific AI tutors.
This paper describes the interactions the students had with the AI tutors, the students' feedback, and a comparative grade analysis.
arXiv Detail & Related papers (2024-07-14T00:42:39Z)
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
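A hedged sketch of the generate-and-judge loop described above: one model drafts candidate instructional materials and another model's score acts as the reward used for selection. The function names and prompt wording are assumptions.

```python
from typing import Callable

def best_instruction(generator: Callable[[str], str],
                     judge: Callable[[str], float],
                     topic: str, n_candidates: int = 4) -> str:
    # One LM proposes several drafts of an instruction on the topic.
    candidates = [generator(f"Write a worksheet instruction about {topic}.")
                  for _ in range(n_candidates)]
    # The judge LM's estimate of the learning outcome serves as the reward;
    # selection keeps the highest-reward draft.
    return max(candidates, key=judge)
```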
- The Robots are Here: Navigating the Generative AI Revolution in Computing Education [4.877774347152004]
Recent advancements in artificial intelligence (AI) are fundamentally reshaping computing.
Large language models (LLMs) are now effectively able to generate and interpret both source code and natural language instructions.
These capabilities have sparked urgent questions about how educators should adapt their pedagogy to address the resulting challenges.
arXiv Detail & Related papers (2023-10-01T12:54:37Z)
- MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems [74.73881579517055]
We propose a framework to generate such tutoring dialogues by pairing human teachers with a Large Language Model prompted to represent common student errors.
We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues.
arXiv Detail & Related papers (2023-05-23T21:44:56Z)
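Purely as an illustration of how the pairing above could be set up: an LLM is prompted to roleplay a student who holds a specific, common misconception, while a human teacher supplies the tutoring turns and each exchange is logged. The prompt wording and `llm` callable are assumptions.

```python
from typing import Callable, List

def student_turn(llm: Callable[[str], str], problem: str,
                 misconception: str, history: List[str]) -> str:
    # The LLM plays the student side of the dialogue, answering in a way
    # consistent with the injected misconception.
    prompt = (
        "You are a student solving this math problem. You genuinely hold the "
        f"following misconception and should answer accordingly: {misconception}\n"
        f"Problem: {problem}\nConversation so far:\n" + "\n".join(history)
    )
    return llm(prompt)
```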
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
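The few-shot feedback idea above can be sketched in prototypical-network style: embed the handful of instructor-labeled submissions, average each label's embeddings into a prototype, and assign a new submission to the nearest prototype. The `embed` function is an assumption; the actual system meta-trains a transformer encoder across many programming questions.

```python
from typing import Callable, Dict, List
import numpy as np

def prototypes(embed: Callable[[str], np.ndarray],
               labeled: Dict[str, List[str]]) -> Dict[str, np.ndarray]:
    # One prototype per feedback label: the mean embedding of its examples.
    return {label: np.mean([embed(code) for code in codes], axis=0)
            for label, codes in labeled.items()}

def feedback_for(embed: Callable[[str], np.ndarray],
                 protos: Dict[str, np.ndarray], submission: str) -> str:
    # Classify a new submission by distance to the nearest prototype.
    vec = embed(submission)
    return min(protos, key=lambda label: np.linalg.norm(vec - protos[label]))
```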
- Peer-inspired Student Performance Prediction in Interactive Online Question Pools with Graph Neural Network [56.62345811216183]
We propose a novel approach using Graph Neural Networks (GNNs) to achieve better student performance prediction in interactive online question pools.
Specifically, we model the relationship between students and questions using student interactions to construct the student-interaction-question network.
We evaluate the effectiveness of our approach on a real-world dataset consisting of 104,113 mouse trajectories generated in the problem-solving process of over 4000 students on 1631 questions.
arXiv Detail & Related papers (2020-08-04T14:55:32Z)
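To make the graph construction above concrete: students and questions become nodes in one shared index space, and each interaction (e.g., mouse-trajectory features) becomes an edge. The sketch below uses a single round of mean-neighbor aggregation as a stand-in for a GNN layer; all names and shapes are assumptions rather than the paper's architecture.

```python
from typing import List, Tuple
import numpy as np

def aggregate(node_feats: np.ndarray,
              edges: List[Tuple[int, int]]) -> np.ndarray:
    # node_feats: one feature row per node (students first, then questions).
    # edges: undirected student-question interaction pairs.
    out = node_feats.copy()
    for node in range(len(node_feats)):
        neighbors = ([v for u, v in edges if u == node] +
                     [u for u, v in edges if v == node])
        if neighbors:
            # Replace each node's features with the mean of its neighbors',
            # a simplified single GNN message-passing step.
            out[node] = node_feats[neighbors].mean(axis=0)
    return out  # a prediction head over student-question pairs would follow
```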
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.