Perspective on Code Submission and Automated Evaluation Platforms for
University Teaching
- URL: http://arxiv.org/abs/2201.13222v1
- Date: Tue, 25 Jan 2022 10:06:45 GMT
- Title: Perspective on Code Submission and Automated Evaluation Platforms for
University Teaching
- Authors: Florian Auer, Johann Frei, Dominik Müller and Frank Kramer
- Abstract summary: We present a perspective on platforms for code submission and automated evaluation in the context of university teaching.
We identify relevant technical and non-technical requirements for such platforms in terms of practical applicability and secure code submission environments.
We conclude that submission and automated evaluation involve continuous maintenance yet lower the required workload for teachers and provide better evaluation transparency for students.
- Score: 1.6172800007896284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a perspective on platforms for code submission and automated
evaluation in the context of university teaching. Due to the COVID-19 pandemic,
such platforms have become an essential asset for remote courses and a
reasonable standard for structured code submission given the increasing
numbers of students in computer science. Utilizing automated code evaluation
techniques exhibits notable positive impacts for both students and teachers in
terms of quality and scalability. We identified relevant technical and
non-technical requirements for such platforms in terms of practical
applicability and secure code submission environments. Furthermore, a survey
among students was conducted to obtain empirical data on general perception. We
conclude that submission and automated evaluation involve continuous
maintenance yet lower the required workload for teachers and provide better
evaluation transparency for students.
Related papers
- A Benchmark for Fairness-Aware Graph Learning [58.515305543487386]
We present an extensive benchmark on ten representative fairness-aware graph learning methods.
Our in-depth analysis reveals key insights into the strengths and limitations of existing methods.
arXiv Detail & Related papers (2024-07-16T18:43:43Z)
- Towards a low-cost universal access cloud framework to assess STEM students [0.0]
Government-imposed lockdowns have challenged academic institutions to transition from traditional face-to-face education into hybrid or fully remote learning models.
This paper tailored and implemented a cloud deployment to provide universal access to online assessment of university students in a computer programming course.
arXiv Detail & Related papers (2024-01-31T09:45:41Z)
- Identifying Student Profiles Within Online Judge Systems Using Explainable Artificial Intelligence [6.638206014723678]
Online Judge (OJ) systems are typically considered within programming-related courses as they yield fast and objective assessments of the code developed by the students.
This work aims to tackle this limitation by considering the further exploitation of the information gathered by the OJ and automatically inferring feedback for both the student and the instructor.
arXiv Detail & Related papers (2024-01-29T12:11:30Z)
- A Design and Development of Rubrics System for Android Applications [0.0]
This application aims to provide a user-friendly interface for viewing students' performance.
Our application promises to make the grading system easier and to enhance the effectiveness in terms of time and resources.
arXiv Detail & Related papers (2023-09-23T16:14:27Z)
- UKP-SQuARE: An Interactive Tool for Teaching Question Answering [61.93372227117229]
The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course.
We introduce UKP-SQuARE as a platform for QA education.
Students can run, compare, and analyze various QA models from different perspectives.
arXiv Detail & Related papers (2023-05-31T11:29:04Z)
- Modelling Assessment Rubrics through Bayesian Networks: a Pragmatic Approach [59.77710485234197]
This paper presents an approach to deriving a learner model directly from an assessment rubric.
We illustrate how the approach can be applied to automate the human assessment of an activity developed for testing computational thinking skills.
arXiv Detail & Related papers (2022-09-07T10:09:12Z)
- Building an Effective Automated Assessment System for C/C++ Introductory Programming Courses in ODL Environment [0.0]
Traditional ways of assessing students' work are becoming insufficient in terms of both time and effort.
In a distance education environment, such assessments become even more challenging given the cost of hiring a large number of tutors.
We identify different components that we believe are necessary in building an effective automated assessment system.
arXiv Detail & Related papers (2022-05-24T09:20:43Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation [81.55533657694016]
We propose a Hierarchical bi-directional self-attention Network framework (HabNet) for paper review rating prediction and recommendation.
Specifically, we leverage the hierarchical structure of the paper reviews with three levels of encoders: a sentence encoder (level one), an intra-review encoder (level two), and an inter-review encoder (level three).
We are able to identify useful predictors to make the final acceptance decision, as well as to help discover the inconsistency between numerical review ratings and text sentiment conveyed by reviewers.
arXiv Detail & Related papers (2020-11-02T08:07:50Z)
- Value Cards: An Educational Toolkit for Teaching Social Impacts of Machine Learning through Deliberation [32.74513588794863]
Value Cards is an educational toolkit to inform students and practitioners of the social impacts of different machine learning models via deliberation.
Our results suggest that the use of the Value Cards toolkit can improve students' understanding of both the technical definitions and trade-offs of performance metrics.
arXiv Detail & Related papers (2020-10-22T03:27:19Z)
- SelfAugment: Automatic Augmentation Policies for Self-Supervised Learning [98.2036247050674]
We show that evaluating the learned representations with a self-supervised image rotation task is highly correlated with a standard set of supervised evaluations.
We provide an algorithm (SelfAugment) to automatically and efficiently select augmentation policies without using supervised evaluations.
arXiv Detail & Related papers (2020-09-16T14:49:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.