Leveraging Large Language Models for Actionable Course Evaluation Student Feedback to Lecturers
- URL: http://arxiv.org/abs/2407.01274v2
- Date: Tue, 2 Jul 2024 08:47:45 GMT
- Title: Leveraging Large Language Models for Actionable Course Evaluation Student Feedback to Lecturers
- Authors: Mike Zhang, Euan D Lindsay, Frederik Bode Thorbensen, Danny Bøgsted Poulsen, Johannes Bjerva
- Abstract summary: We have 742 student responses spanning 75 courses in a Computer Science department.
For each course, we synthesise a summary of the course evaluations and actionable items for the instructor.
Our work highlights the possibility of using generative AI to produce factual, actionable, and appropriate feedback for teachers in the classroom setting.
- Score: 6.161370712594005
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: End-of-semester student evaluations of teaching are the dominant mechanism for providing feedback to academics on their teaching practice. For large classes, however, the volume of feedback makes these tools impractical for this purpose. This paper explores the use of open-source generative AI to synthesise factual, actionable, and appropriate summaries of student feedback from these survey responses. In our setup, we have 742 student responses spanning 75 courses in a Computer Science department. For each course, we synthesise a summary of the course evaluations and actionable items for the instructor. Our results reveal a promising avenue for enhancing teaching practices in the classroom setting. Our contribution lies in demonstrating the feasibility of using generative AI to produce insightful feedback for teachers, thus providing a cost-effective means to support educators' development. Overall, our work highlights the possibility of using generative AI to produce factual, actionable, and appropriate feedback for teachers in the classroom setting.
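The per-course synthesis step described above can be sketched as a prompt-construction routine. This is an illustrative assumption, not the authors' actual prompt: the `build_prompt` helper, its wording, and the example comments are all hypothetical.

```python
# Hypothetical sketch: assemble a summarisation prompt from raw survey
# responses for one course. The prompt text is an illustrative assumption,
# not the prompt used in the paper.

def build_prompt(course_name, responses):
    """Build an instruction prompt asking an LLM for a factual summary
    plus actionable items, with one numbered student comment per line."""
    numbered = "\n".join(f"{i + 1}. {r.strip()}" for i, r in enumerate(responses))
    return (
        f"You are assisting the lecturer of the course '{course_name}'.\n"
        "Below are anonymous student evaluation comments.\n"
        "1) Summarise the main themes factually.\n"
        "2) List concrete, actionable suggestions for the instructor.\n\n"
        f"Comments:\n{numbered}\n"
    )

prompt = build_prompt(
    "Algorithms and Data Structures",
    ["Lectures were clear but too fast.", "More worked examples, please."],
)
print(prompt)
```

The prompt string would then be passed to an open-source LLM; keeping prompt assembly separate from the model call makes it easy to rerun the same evaluation data against different models.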
Related papers
- Representational Alignment Supports Effective Machine Teaching [81.19197059407121]
GRADE is a new controlled experimental setting to study pedagogy and representational alignment.
We find that improved representational alignment with a student improves student learning outcomes.
However, this effect is moderated by the size and representational diversity of the class being taught.
arXiv Detail & Related papers (2024-06-06T17:48:24Z)
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
- Measuring Five Accountable Talk Moves to Improve Instruction at Scale [1.4549461207028445]
We fine-tune models to identify five instructional talk moves inspired by accountable talk theory.
We correlate the instructors' use of each talk move with indicators of student engagement and satisfaction.
These results corroborate previous research on the effectiveness of accountable talk moves.
arXiv Detail & Related papers (2023-11-02T03:04:50Z)
- PapagAI: Automated Feedback for Reflective Essays [48.4434976446053]
We present the first open-source automated feedback tool based on didactic theory and implemented as a hybrid AI system.
The main objective of our work is to enable better learning outcomes for students and to complement the teaching activities of lecturers.
arXiv Detail & Related papers (2023-07-10T11:05:51Z)
- Using Large Language Models to Provide Explanatory Feedback to Human Tutors [3.2507682694499582]
We present two approaches for supplying tutors with real-time feedback within an online lesson on how to give students effective praise.
This work-in-progress demonstrates considerable accuracy in the binary classification of corrective feedback as effective or effort-based.
More notably, we introduce progress towards an enhanced approach of providing explanatory feedback using large language model-facilitated named entity recognition.
arXiv Detail & Related papers (2023-06-27T14:19:12Z)
- Is ChatGPT a Good Teacher Coach? Measuring Zero-Shot Performance For Scoring and Providing Actionable Insights on Classroom Instruction [5.948322127194399]
We investigate whether generative AI could become a cost-effective complement to expert feedback by serving as an automated teacher coach.
We propose three teacher coaching tasks for generative AI: (A) scoring transcript segments based on classroom observation instruments, (B) identifying highlights and missed opportunities for good instructional strategies, and (C) providing actionable suggestions for eliciting more student reasoning.
We recruit expert math teachers to evaluate the zero-shot performance of ChatGPT on each of these tasks for elementary classroom math transcripts.
arXiv Detail & Related papers (2023-06-05T17:59:21Z)
- Iterative Teacher-Aware Learning [136.05341445369265]
In human pedagogy, teachers and students can interact adaptively to maximize communication efficiency.
We propose a gradient-optimization-based teacher-aware learner that can incorporate the teacher's cooperative intention into its likelihood function.
arXiv Detail & Related papers (2021-10-01T00:27:47Z)
- A literature survey on student feedback assessment tools and their usage in sentiment analysis [0.0]
We evaluate the effectiveness of various in-class feedback assessment methods such as Kahoot!, Mentimeter, Padlet, and polling.
We propose a sentiment analysis model for extracting explicit suggestions from students' qualitative feedback comments.
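The extraction step this survey describes can be approximated with a minimal rule-based sketch. The cue phrases below are invented for illustration; the survey's actual model is built on sentiment analysis, not this keyword heuristic.

```python
import re

# Minimal rule-based sketch of pulling explicit suggestions out of free-text
# feedback. The cue-phrase list is an illustrative assumption, not the
# survey's actual sentiment-analysis model.

SUGGESTION_CUES = re.compile(
    r"\b(should|could|please|i suggest|it would help)\b", re.IGNORECASE
)

def extract_suggestions(comments):
    """Return the comments that contain an explicit suggestion cue."""
    return [c for c in comments if SUGGESTION_CUES.search(c)]

feedback = [
    "The course was great overall.",
    "The slides should be posted before class.",
    "Please add more practice problems.",
]
suggestions = extract_suggestions(feedback)
```

A learned classifier would replace the regex, but the interface (comments in, suggestion candidates out) stays the same.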
arXiv Detail & Related papers (2021-09-09T06:56:30Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
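The few-shot setup above can be sketched with a prototype classifier: each feedback class is represented by the mean of a few example embeddings, and a new submission takes the label of the nearest prototype. The toy 2-D embeddings and class names below are invented for illustration; the paper's model learns embeddings with a transformer.

```python
import math

# Hedged sketch of prototype-based few-shot classification: average a few
# labelled example embeddings per feedback class, then label a new student
# submission by the nearest prototype. All vectors here are toy values.

def mean_vec(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def nearest_prototype(x, prototypes):
    """Return the label whose prototype is closest in Euclidean distance."""
    return min(prototypes, key=lambda label: math.dist(x, prototypes[label]))

# A few labelled example embeddings per feedback class (the "support set").
support = {
    "off_by_one_error": [[0.9, 0.1], [1.1, -0.1]],
    "missing_base_case": [[-1.0, 0.8], [-0.8, 1.2]],
}
prototypes = {label: mean_vec(vecs) for label, vecs in support.items()}

query = [1.0, 0.0]  # embedding of a new student submission
label = nearest_prototype(query, prototypes)
```

Because only the per-class means change when instructors add examples, adapting to a new programming question needs no retraining of the classifier head.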
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Distribution Matching for Machine Teaching [64.39292542263286]
Machine teaching is an inverse problem of machine learning that aims at steering the student learner toward a target hypothesis.
Previous studies on machine teaching focused on balancing teaching risk and cost to find the best teaching examples.
This paper presents a distribution matching-based machine teaching strategy.
arXiv Detail & Related papers (2021-05-06T09:32:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.