Ungraded Assignments in Introductory Computing: A Report
- URL: http://arxiv.org/abs/2512.23004v1
- Date: Sun, 28 Dec 2025 17:09:45 GMT
- Title: Ungraded Assignments in Introductory Computing: A Report
- Authors: Yehya Sleiman Tellawi, Abhishek K. Umrawal
- Abstract summary: This experience report explores the effects of ungraded assignments on the learning experience of students in an introductory computing course. Our study examines the impact of ungraded assignments on student engagement, understanding, and overall academic performance.
- Score: 0.8379286663107844
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This experience report explores the effects of ungraded assignments on the learning experience of students in an introductory computing course. Our study examines the impact of ungraded assignments on student engagement, understanding, and overall academic performance. We developed and administered new ungraded assignments for a required course in the first year of the Computer Engineering curriculum called ECE 120 Introduction to Computing. To assess the effectiveness of our ungraded assignments, we employed a mixed-methods approach, including surveys, interviews, and performance analysis. Our analysis shows a positive relationship between participation in ungraded assignments and overall course performance, suggesting these assignments may appeal to high-achieving students and/or support better outcomes.
Related papers
- Exposía: Academic Writing Assessment of Exposés and Peer Feedback [56.428320613219306]
We present Exposía, the first public dataset that connects writing and feedback assessment in higher education. We use Exposía to benchmark state-of-the-art open-source large language models (LLMs) for two tasks: automated scoring of (1) the proposals and (2) the student reviews.
arXiv Detail & Related papers (2026-01-10T11:33:26Z)
- A Survey on Feedback Types in Automated Programming Assessment Systems [3.9845307287664973]
This study investigates how different feedback mechanisms in APASs are perceived by students and how effective they are in supporting problem-solving. Results indicate that while students rate unit-test feedback as the most helpful, AI-generated feedback leads to significantly better performance.
arXiv Detail & Related papers (2025-10-21T09:08:22Z)
- Assessing Engineering Student Perceptions of Introductory CS Courses in an Indian Context [6.237405036268818]
This study explores engineering students' perceptions of assessment practices in an introductory computer science/programming course. Students largely perceive lab assignments as effective learning activities and view exams and projects as authentic and skill-enhancing. Students appreciated the role of instructors in shaping course content and found teaching assistants to be approachable and helpful.
arXiv Detail & Related papers (2025-08-06T19:04:19Z)
- Monocle: Hybrid Local-Global In-Context Evaluation for Long-Text Generation with Uncertainty-Based Active Learning [63.531262595858]
A divide-and-conquer approach breaks the comprehensive evaluation task into localized scoring tasks, followed by a final global assessment. We introduce a hybrid in-context learning approach that leverages human annotations to enhance the performance of both local and global evaluations. Finally, we develop an uncertainty-based active learning algorithm that efficiently selects data samples for human annotation.
arXiv Detail & Related papers (2025-05-26T16:39:41Z)
- The Potential of Answer Classes in Large-scale Written Computer-Science Exams -- Vol. 2 [0.0]
In teacher training for secondary education, assessment guidelines are mandatory for every exam. We apply this concept to a university exam with 462 students and 41 tasks. For each task, instructors developed answer classes -- classes of expected responses.
arXiv Detail & Related papers (2024-12-12T10:20:39Z)
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
- Improving Students With Rubric-Based Self-Assessment and Oral Feedback [2.808134646037882]
Rubrics and oral feedback are approaches to help students improve performance and meet learning outcomes.
This paper evaluates the effect of rubrics and oral feedback on student learning outcomes.
arXiv Detail & Related papers (2023-07-24T14:48:28Z)
- Distantly-Supervised Named Entity Recognition with Adaptive Teacher Learning and Fine-grained Student Ensemble [56.705249154629264]
Self-training teacher-student frameworks are proposed to improve the robustness of NER models.
In this paper, we propose an adaptive teacher learning method composed of two teacher-student networks.
A fine-grained student ensemble updates each fragment of the teacher model with a temporal moving average of the corresponding fragment of the student, which enhances consistent predictions on each model fragment against noise.
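The temporal moving-average update described above can be sketched as follows; the decay rate and parameter values are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of a temporal (exponential) moving-average update:
# each teacher parameter fragment is pulled toward the corresponding
# student fragment. The decay value 0.99 is an assumed illustration.
def ema_update(teacher_params, student_params, decay=0.99):
    """Blend each teacher fragment with its student counterpart."""
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_params, student_params)]

teacher = [1.0, 2.0]   # hypothetical teacher fragments
student = [0.0, 0.0]   # hypothetical student fragments
teacher = ema_update(teacher, student)
```

A high decay keeps the teacher stable across noisy student updates, which is what makes its predictions more consistent under label noise.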
arXiv Detail & Related papers (2022-12-13T12:14:09Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier-1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Unsupervised Reference-Free Summary Quality Evaluation via Contrastive Learning [66.30909748400023]
We propose to evaluate summary quality without reference summaries via unsupervised contrastive learning.
Specifically, we design a new metric, based on BERT, that covers both linguistic quality and semantic informativeness.
Experiments on Newsroom and CNN/Daily Mail demonstrate that our new evaluation method outperforms other metrics even without reference summaries.
arXiv Detail & Related papers (2020-10-05T05:04:14Z)
- Computational Models for Academic Performance Estimation [21.31653695065347]
This paper presents an in-depth analysis of deep learning and machine learning approaches for the formulation of an automated students' performance estimation system.
Our main contributions are (a) a large dataset with fifteen courses (shared publicly for academic research) and (b) statistical analyses and ablations on the estimation problem for this dataset.
Unlike previous approaches that rely on feature engineering or logical function deduction, our approach is fully data-driven and thus highly generic with better performance across different prediction tasks.
arXiv Detail & Related papers (2020-09-06T07:31:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.