Automated grading workflows for providing personalized feedback to
open-ended data science assignments
- URL: http://arxiv.org/abs/2309.12924v2
- Date: Thu, 29 Feb 2024 18:22:15 GMT
- Title: Automated grading workflows for providing personalized feedback to
open-ended data science assignments
- Authors: Federica Zoe Ricci and Catalina Mari Medina and Mine Dogucu
- Abstract summary: In this paper, we discuss the steps of a typical grading workflow and highlight which steps can be automated in an approach that we call automated grading workflow.
We illustrate how gradetools, a new R package, implements this approach within RStudio to facilitate efficient and consistent grading while providing individualized feedback.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Open-ended assignments - such as lab reports and semester-long projects -
provide data science and statistics students with opportunities for developing
communication, critical thinking, and creativity skills. However, providing
grades and formative feedback to open-ended assignments can be very time
consuming and difficult to do consistently across students. In this paper, we
discuss the steps of a typical grading workflow and highlight which steps can
be automated in an approach that we call automated grading workflow. We
illustrate how gradetools, a new R package, implements this approach within
RStudio to facilitate efficient and consistent grading while providing
individualized feedback. By outlining the motivations behind the development of
this package and the considerations underlying its design, we hope this article
will provide data science and statistics educators with ideas for improving
their grading workflows, possibly by developing new grading tools or by adopting
gradetools as their grading workflow assistant.
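To make the idea concrete, here is a minimal sketch of an automated grading workflow: rubric items carry reusable feedback messages, and checking items off for each student yields both a score and a personalized feedback file. Note that gradetools itself is an R package; the Python below, including the file layout, column names, and function names, is an illustrative assumption and not the gradetools API.

```python
import csv
from pathlib import Path

def grade_submission(rubric, checked_item_ids):
    """Sum points and collect feedback messages for the rubric items
    an instructor checked off for one student."""
    selected = [item for item in rubric if item["id"] in checked_item_ids]
    score = sum(float(item["points"]) for item in selected)
    feedback = "\n".join(f"- {item['feedback']}" for item in selected)
    return score, feedback

# Hypothetical rubric file with columns: id, points, feedback
with open("rubric.csv", newline="") as f:
    rubric = list(csv.DictReader(f))

# Hypothetical record of which rubric items applied to each student
graded = {"student_01": {"q1_correct", "q2_partial"},
          "student_02": {"q1_correct", "q2_correct"}}

Path("feedback").mkdir(exist_ok=True)
for student, items in graded.items():
    score, feedback = grade_submission(rubric, items)
    # One personalized feedback file per student, graded consistently
    # from the same rubric.
    Path(f"feedback/{student}.md").write_text(
        f"# Feedback for {student}\n\nScore: {score}\n\n{feedback}\n")
```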
Related papers
- DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
We introduce DataEnvGym, a testbed of teacher environments for data generation agents.
DataEnvGym frames data generation as a sequential decision-making task.
The agent's goal is to improve student performance.
We support 3 diverse tasks (math, code, and VQA) and test multiple students and teachers.
arXiv Detail & Related papers (2024-10-08T17:20:37Z)
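The sequential loop the abstract describes can be sketched as follows; the generate/train/evaluate functions below are placeholders standing in for DataEnvGym's actual agents and student models.

```python
import random

def generate_examples(error_report, n=50):
    """Placeholder teacher policy: oversample skills the student
    currently gets wrong (DataEnvGym's real agents are far richer)."""
    weak = [s for s, acc in error_report.items() if acc < 0.8]
    return [random.choice(weak or list(error_report)) for _ in range(n)]

def train_and_evaluate(student, examples):
    """Placeholder student update: practicing a skill nudges its accuracy up."""
    for skill in examples:
        student[skill] = min(1.0, student[skill] + 0.01)
    return sum(student.values()) / len(student)

skills = ["math", "code", "vqa"]  # the three task types in the testbed
student = {s: random.uniform(0.4, 0.7) for s in skills}  # per-skill accuracy

for step in range(10):                          # sequential decision-making loop
    data = generate_examples(student)           # teacher acts
    reward = train_and_evaluate(student, data)  # student retrains
    print(f"step {step}: mean accuracy (reward) = {reward:.3f}")
```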
- "I understand why I got this grade": Automatic Short Answer Grading with Feedback
We present a dataset of 5.8k student answers accompanied by reference answers and questions for the Automatic Short Answer Grading (ASAG) task.
The EngSAF dataset is meticulously curated to cover a diverse range of subjects, questions, and answer patterns from multiple engineering domains.
arXiv Detail & Related papers (2024-06-30T15:42:18Z)
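Grading against reference answers is often approximated by scoring the similarity between a student answer and the reference. A toy sketch using token-overlap F1; the similarity measure, threshold, and grade bands here are assumptions, and EngSAF itself targets model-based grading with feedback.

```python
def token_f1(student_answer: str, reference: str) -> float:
    """Crude lexical similarity: F1 over shared tokens."""
    s, r = set(student_answer.lower().split()), set(reference.lower().split())
    if not s or not r:
        return 0.0
    overlap = len(s & r)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(s), overlap / len(r)
    return 2 * precision * recall / (precision + recall)

reference = "entropy measures the average uncertainty of a random variable"
answer = "entropy is the average uncertainty in a random variable"
score = token_f1(answer, reference)
# Map similarity to a grade band plus a feedback stub
grade = "correct" if score > 0.7 else "partial" if score > 0.4 else "incorrect"
print(f"similarity={score:.2f} -> {grade}")
```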
- IntCoOp: Interpretability-Aware Vision-Language Prompt Tuning
IntCoOp learns to jointly align attribute-level inductive biases and class embeddings during prompt tuning.
IntCoOp improves CoOp by 7.35% in average performance across 10 diverse datasets.
arXiv Detail & Related papers (2024-06-19T16:37:31Z)
- Grade Like a Human: Rethinking Automated Assessment with Large Language Models
Large language models (LLMs) have been used for automated grading, but they have not yet achieved the same level of performance as humans.
We propose an LLM-based grading system that addresses the entire grading procedure.
arXiv Detail & Related papers (2024-05-30T05:08:15Z)
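The core step of such a system is prompting an LLM with the rubric and a student answer and parsing a structured reply. A minimal sketch: `call_llm` is a placeholder for any chat-completion client, and the prompt wording and JSON schema are assumptions rather than the paper's actual components.

```python
import json

RUBRIC = [
    {"id": "R1", "points": 4, "criterion": "states the null hypothesis correctly"},
    {"id": "R2", "points": 6, "criterion": "interprets the p-value in context"},
]

def build_grading_prompt(question: str, answer: str) -> str:
    rubric_text = "\n".join(f"{r['id']} ({r['points']} pts): {r['criterion']}"
                            for r in RUBRIC)
    return (
        "You are a grader. Score the answer against each rubric item.\n"
        f"Question: {question}\nRubric:\n{rubric_text}\n"
        f"Student answer: {answer}\n"
        'Reply with JSON: {"scores": {"R1": int, "R2": int}, "feedback": str}'
    )

def call_llm(prompt: str) -> str:
    # Placeholder: swap in whatever model endpoint is available.
    return '{"scores": {"R1": 4, "R2": 3}, "feedback": "Interpret the p-value."}'

result = json.loads(call_llm(build_grading_prompt(
    "Test whether the mean differs from 0.", "H0: mu = 0; p = 0.03 so reject.")))
total = sum(result["scores"].values())
print(f"total={total}/10, feedback: {result['feedback']}")
```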
- WIP: A Unit Testing Framework for Self-Guided Personalized Online Robotics Learning
This paper focuses on creating a unit testing system and integrating it into the course workflow.
In line with the framework's personalized, student-centered approach, this method makes it easier for students to revise and debug their programming work.
Updating the course workflow to include unit tests will strengthen the learning environment and make it more interactive, so that students can learn to program robots in a self-guided fashion.
arXiv Detail & Related papers (2024-05-18T00:56:46Z)
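A small sketch of the kind of test suite such a framework could expose to students, using Python's unittest; the differential-drive function under test is invented for illustration and is not from the paper.

```python
import unittest

def wheel_speeds(linear, angular, track_width=0.2):
    """Student-implemented differential-drive kinematics (example solution)."""
    left = linear - angular * track_width / 2
    right = linear + angular * track_width / 2
    return left, right

class TestWheelSpeeds(unittest.TestCase):
    def test_straight_line(self):
        # Driving straight: both wheels at the commanded linear speed.
        self.assertEqual(wheel_speeds(1.0, 0.0), (1.0, 1.0))

    def test_turn_in_place(self):
        # Pure rotation: wheels spin in opposite directions.
        left, right = wheel_speeds(0.0, 1.0)
        self.assertAlmostEqual(left, -right)

if __name__ == "__main__":
    # Students rerun this after each revision for immediate feedback.
    unittest.main()
```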
- Enhancing the Performance of Automated Grade Prediction in MOOC using Graph Representation Learning
Massive Open Online Courses (MOOCs) have gained significant traction as a rapidly growing phenomenon in online learning.
Current automated assessment approaches overlook the structural links between different entities involved in the downstream tasks.
We construct a unique knowledge graph for a large MOOC dataset, which will be publicly available to the research community.
arXiv Detail & Related papers (2023-10-18T19:27:39Z)
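A knowledge graph links entities such as students, courses, and assignments so that structural information can feed grade prediction. A minimal sketch with networkx; the node and edge types below are a plausible guess at such a schema, not the paper's published graph.

```python
import networkx as nx

G = nx.Graph()
# Hypothetical entity types: students, courses, assignments
G.add_node("alice", kind="student")
G.add_node("bob", kind="student")
G.add_node("stats101", kind="course")
G.add_node("hw1", kind="assignment")

G.add_edge("alice", "stats101", relation="enrolled_in")
G.add_edge("bob", "stats101", relation="enrolled_in")
G.add_edge("alice", "hw1", relation="submitted", grade=0.9)
G.add_edge("hw1", "stats101", relation="part_of")

# Structural links a flat feature table would miss, e.g. classmates
# of a student reachable through their shared course:
classmates = {n
              for course in G.neighbors("alice")
              if G.nodes[course]["kind"] == "course"
              for n in G.neighbors(course)
              if G.nodes[n]["kind"] == "student"} - {"alice"}
print(classmates)  # {'bob'}
```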
- Unsupervised Task Graph Generation from Instructional Video Transcripts
We consider a setting where text transcripts of instructional videos of a real-world activity are provided.
The goal is to identify the key steps relevant to the task as well as the dependency relationship between these key steps.
We propose a novel task graph generation approach that combines the reasoning capabilities of instruction-tuned language models along with clustering and ranking components.
arXiv Detail & Related papers (2023-02-17T22:50:08Z)
- Nemo: Guiding and Contextualizing Weak Supervision for Interactive Data Programming
We present Nemo, an end-to-end interactive weak supervision (WS) system that improves the overall productivity of the WS learning pipeline by 20% on average (and up to 47% on one task) compared to the prevailing WS approach.
arXiv Detail & Related papers (2022-03-02T19:57:32Z)
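Nemo builds on data programming, in which noisy labeling functions vote on unlabeled examples. A bare-bones sketch of that underlying step with majority voting; the labeling functions are invented for illustration, and Nemo's interactive guidance and label model are far richer.

```python
ABSTAIN = None

# Hypothetical labeling functions for spam detection
def lf_has_link(text):    return 1 if "http" in text else ABSTAIN
def lf_has_offer(text):   return 1 if "free" in text.lower() else ABSTAIN
def lf_short_reply(text): return 0 if len(text.split()) < 4 else ABSTAIN

def weak_label(text, lfs):
    """Majority vote over the non-abstaining labeling functions."""
    votes = [v for v in (lf(text) for lf in lfs) if v is not ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

lfs = [lf_has_link, lf_has_offer, lf_short_reply]
for text in ["Free trial at http://spam.example",
             "ok thanks",
             "see you tomorrow at the meeting"]:
    print(text, "->", weak_label(text, lfs))
```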
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier-1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
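In the prototypical style of few-shot classification, the handful of instructor-labeled examples per feedback class are embedded and averaged into one prototype each, and new student work is assigned to the nearest prototype. The sketch below substitutes a deterministic random projection for the learned code encoder; the feedback classes and snippets are invented for illustration.

```python
import zlib
import numpy as np

DIM = 16

def embed(code: str) -> np.ndarray:
    """Stand-in for a learned code encoder: a deterministic random vector."""
    seed = zlib.crc32(code.encode())
    return np.random.default_rng(seed).normal(size=DIM)

# A few instructor-labeled examples per feedback class (the support set)
support = {
    "off_by_one":   ["for i in range(n+1):", "while i <= n:"],
    "wrong_return": ["return none", "print(result)"],
}

# One prototype per class: the mean of its support embeddings
prototypes = {label: np.mean([embed(x) for x in examples], axis=0)
              for label, examples in support.items()}

def feedback_class(student_code: str) -> str:
    """Assign new student code to the nearest prototype's feedback class."""
    z = embed(student_code)
    return min(prototypes, key=lambda c: np.linalg.norm(z - prototypes[c]))

print(feedback_class("while i <= n: total += i"))
```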
- How Useful is Self-Supervised Pretraining for Visual Tasks?
We evaluate various self-supervised algorithms across a comprehensive array of synthetic datasets and downstream tasks.
Our experiments offer insights into how the utility of self-supervision changes as the number of available labels grows.
arXiv Detail & Related papers (2020-03-31T16:03:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.