An Analysis of Programming Course Evaluations Before and After the
Introduction of an Autograder
- URL: http://arxiv.org/abs/2110.15134v2
- Date: Mon, 24 Jul 2023 20:05:38 GMT
- Title: An Analysis of Programming Course Evaluations Before and After the
Introduction of an Autograder
- Authors: Gerhard Johann Hagerer, Laura Lahesoo, Miriam Anschütz, Stephan
Krusche, Georg Groh
- Abstract summary: This paper studies the answers to the standardized university evaluation questionnaires of foundational computer science courses which recently introduced autograding.
We hypothesize how the autograder might have contributed to the significant changes in the data, such as improved interactions between tutors and students, improved overall course quality, improved learning success, increased time spent, and reduced difficulty.
The autograder technology can be validated as a teaching method to improve student satisfaction with programming courses.
- Score: 1.329950749508442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Commonly, introductory programming courses in higher education institutions
have hundreds of participating students eager to learn to program. The manual
effort of reviewing the submitted source code and providing feedback is no
longer manageable. Manually reviewing the submitted homework can also be
subjective and unfair, particularly if many tutors are responsible for grading.
Different autograders can help in this situation; however, there is a lack of
knowledge about how autograders can impact students' overall perception of
programming classes and teaching. This is relevant for course organizers and
institutions seeking to keep their programming courses attractive while coping
with growing student numbers.
This paper studies the answers to the standardized university evaluation
questionnaires of multiple large-scale foundational computer science courses
which recently introduced autograding. The differences before and after this
intervention are analyzed. By incorporating additional observations, we
hypothesize how the autograder might have contributed to the significant
changes in the data, such as improved interactions between tutors and
students, improved overall course quality, improved learning success, increased
time spent, and reduced difficulty. This qualitative study aims to provide
hypotheses for future research to define and conduct quantitative surveys and
data analysis. The autograder technology can be validated as a teaching method
to improve student satisfaction with programming courses.
Related papers
- Could ChatGPT get an Engineering Degree? Evaluating Higher Education Vulnerability to AI Assistants [175.9723801486487]
We evaluate whether two AI assistants, GPT-3.5 and GPT-4, can adequately answer assessment questions.
GPT-4 answers an average of 65.8% of questions correctly and, for 85.1% of questions, can produce the correct answer under at least one prompting strategy.
Our results call for revising program-level assessment design in higher education in light of advances in generative AI.
arXiv Detail & Related papers (2024-08-07T12:11:49Z) - CourseAssist: Pedagogically Appropriate AI Tutor for Computer Science Education [1.052788652996288]
This poster introduces CourseAssist, a novel LLM-based tutoring system tailored for computer science education.
Unlike generic LLM systems, CourseAssist uses retrieval-augmented generation, user intent classification, and question decomposition to align AI responses with specific course materials and learning objectives.
arXiv Detail & Related papers (2024-05-01T20:43:06Z) - YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves SFT with significant performance gains.
arXiv Detail & Related papers (2024-01-28T14:32:15Z) - Intelligent Tutoring System: Experience of Linking Software Engineering
and Programming Teaching [11.732008724228798]
Existing systems that handle automated grading primarily focus on the automation of test case executions.
We have built an intelligent tutoring system that has the capability of providing automated feedback and grading.
arXiv Detail & Related papers (2023-10-09T07:28:41Z) - Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z) - Investigating Fairness Disparities in Peer Review: A Language Model
Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z) - Building an Effective Automated Assessment System for C/C++ Introductory
Programming Courses in ODL Environment [0.0]
Traditional ways of assessing students' work are becoming insufficient in terms of both time and effort.
In a distance-education environment, such assessments become even more challenging because of the hefty remuneration required to hire a large number of tutors.
We identify different components that we believe are necessary in building an effective automated assessment system.
arXiv Detail & Related papers (2022-05-24T09:20:43Z) - ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z) - Personalized Education in the AI Era: What to Expect Next? [76.37000521334585]
The objective of personalized learning is to design an effective knowledge acquisition track that matches the learner's strengths and bypasses her weaknesses to meet her desired goal.
In recent years, the boost of artificial intelligence (AI) and machine learning (ML) has unfolded novel perspectives to enhance personalized education.
arXiv Detail & Related papers (2021-01-19T12:23:32Z) - Effects of Human vs. Automatic Feedback on Students' Understanding of AI
Concepts and Programming Style [0.0]
The use of automatic grading tools has become nearly ubiquitous in large undergraduate programming courses.
There is a relative lack of data directly comparing student outcomes when receiving computer-generated feedback and human-written feedback.
This paper addresses this gap by splitting one 90-student class into two feedback groups and analyzing differences in the two cohorts' performance.
arXiv Detail & Related papers (2020-11-20T21:40:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.