The impact of students behaviour, their approach, emotions and problem
difficulty level on the performance prediction, evaluation and overall
learning process during online coding activities
- URL: http://arxiv.org/abs/2112.14407v1
- Date: Wed, 29 Dec 2021 06:11:01 GMT
- Title: The impact of students behaviour, their approach, emotions and problem
difficulty level on the performance prediction, evaluation and overall
learning process during online coding activities
- Authors: Dr. Hardik Patel, Dr. Purvi Koringa
- Abstract summary: Two online coding assignments or competitions were conducted with a 1-hour time limit.
A survey was conducted at the end of each coding test, and answers to different questions were collected.
The two coding competitions are analysed through in-depth research on 229 (first coding competition dataset) and 325 (second coding competition dataset) data points.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The learning process while solving coding problems is quite complex to
understand. It is extremely important to understand the skills that are required
and gained while learning to code. As a first step towards understanding students'
behaviour and approach while learning to code, two online coding assignments or
competitions were conducted with a 1-hour time limit. A survey was conducted at
the end of each coding test, and answers to different questions were collected.
An in-depth statistical analysis was performed to understand the learning process
while solving the coding problems, covering many parameters, including students'
behaviour, their approach, and the difficulty level of the coding problems.
Including mood- and emotion-related questions can improve overall prediction
performance, but the difficulty level matters for submission-status prediction.
The two coding assignments or competitions are analysed through in-depth research
on 229 (first coding competition dataset) and 325 (second coding competition
dataset) data points. The primary results are promising and give in-depth insights
into how learning to solve coding problems is affected by students' behaviour,
their approach, emotions, and problem difficulty level.
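The abstract does not name a specific predictive model, so the sketch below is only an illustration of the kind of submission-status prediction it describes: a logistic-regression pipeline over hypothetical survey features (approach, mood, problem difficulty, time spent). All column names and values are made up, not the paper's data.

```python
# Minimal sketch (not the authors' pipeline): predicting submission status from
# hypothetical survey features such as approach, self-reported mood, problem
# difficulty, and time spent. Feature names and rows are illustrative only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical survey dataset: one row per student per coding test.
df = pd.DataFrame({
    "approach":   ["plan_first", "trial_error", "plan_first", "trial_error", "plan_first", "trial_error"],
    "mood":       ["calm", "stressed", "calm", "anxious", "stressed", "calm"],
    "difficulty": [1, 3, 2, 3, 1, 2],     # problem difficulty level
    "time_spent": [25, 55, 40, 58, 30, 50],  # minutes (1-hour limit)
    "submitted":  [1, 0, 1, 0, 1, 0],     # submission status (target)
})

features = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["approach", "mood"]),
    ("num", StandardScaler(), ["difficulty", "time_spent"]),
])
model = Pipeline([("features", features), ("clf", LogisticRegression())])

# Comparing cross-validated accuracy with and without the mood column would
# mirror the paper's question of whether emotion-related answers help.
scores = cross_val_score(model, df.drop(columns="submitted"), df["submitted"], cv=3)
print(scores.mean())
```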
Related papers
- Probing the Unknown: Exploring Student Interactions with Probeable Problems at Scale in Introductory Programming [4.1153199495993364]
This study explores the use of 'Probeable Problems', automatically gradable tasks that have deliberately vague or incomplete specifications.
Such problems require students to submit test inputs, or 'probes', to clarify requirements before implementation.
Systematic strategies, such as thoroughly exploring expected behavior before coding, resulted in fewer incorrect code submissions and correlated with course success.
arXiv Detail & Related papers (2025-04-16T02:50:00Z)
- Knowledge Tracing in Programming Education Integrating Students' Questions [0.0]
This paper introduces SQKT (Students' Question-based Knowledge Tracing), a knowledge tracing model that leverages students' questions and automatically extracted skill information.
Experimental results demonstrate SQKT's superior performance in predicting student completion across various Python programming courses of differing difficulty levels.
SQKT can be used to tailor educational content to individual learning needs and design adaptive learning systems in computer science education.
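As a rough illustration of what a knowledge-tracing backbone looks like (SQKT's question and skill embeddings are not reproduced here), a minimal recurrent tracer over a student's exercise history might be sketched as follows; all sizes and names are placeholders.

```python
# Generic knowledge-tracing sketch: a recurrent model over a student's sequence
# of exercise interactions that predicts completion of the next step. This is
# only the common backbone, not SQKT's question-based model.
import torch
import torch.nn as nn

class SimpleKnowledgeTracer(nn.Module):
    def __init__(self, num_exercises: int, dim: int = 64):
        super().__init__()
        # Each (exercise, correct/incorrect) pair gets its own embedding.
        self.embed = nn.Embedding(num_exercises * 2, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, 1)

    def forward(self, exercise_ids, outcomes):
        x = self.embed(exercise_ids * 2 + outcomes)   # fold the outcome into the index
        h, _ = self.rnn(x)
        return torch.sigmoid(self.out(h))             # completion probability at each position

model = SimpleKnowledgeTracer(num_exercises=50)
probs = model(torch.randint(0, 50, (4, 10)), torch.randint(0, 2, (4, 10)))
```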
arXiv Detail & Related papers (2025-01-22T14:13:40Z)
- Integrating Natural Language Prompting Tasks in Introductory Programming Courses [3.907735250728617]
This report explores the inclusion of two prompt-focused activities in an introductory programming course.
The first requires students to solve computational problems by writing natural language prompts, emphasizing problem-solving over syntax.
The second involves students crafting prompts to generate code equivalent to provided fragments, to foster an understanding of the relationship between prompts and code.
arXiv Detail & Related papers (2024-10-04T01:03:25Z)
- Estimating Difficulty Levels of Programming Problems with Pre-trained Model [18.92661958433282]
The difficulty level of each programming problem serves as an essential reference for guiding students' adaptive learning.
We formulate the problem of automatic difficulty level estimation of each programming problem, given its textual description and a solution example of code.
To tackle this problem, we propose coupling two pre-trained models, one for the text modality and the other for the code modality, into a unified model.
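The summary only states that a text-modality and a code-modality encoder are coupled; the sketch below shows one generic way to fuse two pre-trained encoders into a difficulty classifier. The checkpoint names are publicly available models used as placeholders, not the paper's actual architecture.

```python
# Hedged sketch of coupling two pre-trained encoders (one for the problem text,
# one for the example solution code) into a single difficulty-level classifier.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class DifficultyEstimator(nn.Module):
    def __init__(self, num_levels: int = 5):
        super().__init__()
        # Placeholder checkpoints; the paper's exact models may differ.
        self.text_encoder = AutoModel.from_pretrained("bert-base-uncased")
        self.code_encoder = AutoModel.from_pretrained("microsoft/codebert-base")
        hidden = self.text_encoder.config.hidden_size + self.code_encoder.config.hidden_size
        self.classifier = nn.Linear(hidden, num_levels)

    def forward(self, text_inputs, code_inputs):
        # Use the first-token embedding from each modality and concatenate.
        t = self.text_encoder(**text_inputs).last_hidden_state[:, 0]
        c = self.code_encoder(**code_inputs).last_hidden_state[:, 0]
        return self.classifier(torch.cat([t, c], dim=-1))

text_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
code_tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = DifficultyEstimator()
text = text_tok(["Given an array, return the two indices that sum to a target."],
                return_tensors="pt", truncation=True)
code = code_tok(["def two_sum(nums, t):\n    ..."], return_tensors="pt", truncation=True)
logits = model(text, code)  # one logit per difficulty level
```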
arXiv Detail & Related papers (2024-06-13T05:38:20Z)
- Comparison of Three Programming Error Measures for Explaining Variability in CS1 Grades [11.799817851619757]
This study examined the relationships between students' rate of programming errors and their grades on two exams.
Data were collected from 280 students in a Java programming course.
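A minimal sketch of the kind of analysis described, regressing exam grades on an error measure to see how much grade variability it explains. The numbers below are made up; the study's actual data and error measures are not reproduced here.

```python
# Illustrative only: how much variance in exam grades a single error measure
# explains, via a simple linear regression on fabricated example values.
import numpy as np
from sklearn.linear_model import LinearRegression

error_rate = np.array([[0.10], [0.25], [0.05], [0.40], [0.15]])  # hypothetical errors per attempt
exam_grade = np.array([88, 70, 93, 55, 81])

model = LinearRegression().fit(error_rate, exam_grade)
print("R^2 (share of grade variance explained):", model.score(error_rate, exam_grade))
```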
arXiv Detail & Related papers (2024-04-09T03:45:15Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves SFT with a significant performance gain.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training [65.10741459705739]
We propose a novel contrastive pre-training approach for mathematical question representations, namely QuesCo.
We first design two-level question augmentations, including content-level and structure-level, which generate literally diverse question pairs with similar purposes.
Then, to fully exploit hierarchical information of knowledge concepts, we propose a knowledge hierarchy-aware rank strategy.
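A generic InfoNCE-style contrastive loss over augmented question pairs, sketched below, illustrates the kind of objective such pre-training builds on; QuesCo's knowledge-hierarchy-aware ranking strategy is not reproduced.

```python
# Generic InfoNCE-style contrastive loss: z1[i] and z2[i] are embeddings of two
# augmented views of question i; the matching view is the positive, every other
# question in the batch is a negative. Not QuesCo's full objective.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature      # pairwise similarities within the batch
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Usage with placeholder encoder outputs (batch of 8 questions, 128-dim embeddings).
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce(z1, z2)
```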
arXiv Detail & Related papers (2023-01-18T14:23:29Z)
- JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding [74.12405417718054]
This paper aims to advance the mathematical intelligence of machines by presenting the first Chinese mathematical pre-trained language model (PLM).
Unlike other standard NLP tasks, mathematical texts are difficult to understand, since they involve mathematical terminology, symbols and formulas in the problem statement.
We design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.
arXiv Detail & Related papers (2022-06-13T17:03:52Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
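Framing feedback as few-shot classification is commonly done with prototype-based classifiers; the sketch below shows that general idea (class prototypes from a few instructor-labelled examples, nearest-prototype prediction), not the paper's full ProtoTransformer model. The encoder producing the embeddings is left abstract.

```python
# Prototype-based few-shot classification sketch: label new student submissions
# by their nearest class prototype in embedding space, where prototypes are the
# mean embeddings of a few instructor-labelled examples.
import torch

def classify_by_prototype(support_emb, support_labels, query_emb):
    """support_emb: [n_support, d]; support_labels: [n_support]; query_emb: [n_query, d]."""
    classes = support_labels.unique()
    prototypes = torch.stack([support_emb[support_labels == c].mean(dim=0) for c in classes])
    dists = torch.cdist(query_emb, prototypes)   # [n_query, n_classes]
    return classes[dists.argmin(dim=1)]          # nearest prototype wins

# Placeholder embeddings standing in for encoded student code.
support = torch.randn(6, 64)
labels = torch.tensor([0, 0, 1, 1, 2, 2])        # three feedback classes, two examples each
queries = torch.randn(4, 64)
predicted_feedback = classify_by_prototype(support, labels, queries)
```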
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Variable-Shot Adaptation for Online Meta-Learning [123.47725004094472]
We study the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.
We find that meta-learning solves the full task set with fewer overall labels and greater cumulative performance, compared to standard supervised methods.
These results suggest that meta-learning is an important ingredient for building learning systems that continuously learn and improve over a sequence of problems.
arXiv Detail & Related papers (2020-12-14T18:05:24Z)
- When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning? [53.523017945443115]
We describe natural prediction problems in which every sufficiently accurate training algorithm must encode, in the prediction model, essentially all the information about a large subset of its training examples.
Our results do not depend on the training algorithm or the class of models used for learning.
arXiv Detail & Related papers (2020-12-11T15:25:14Z)
- Adversarial Training for Code Retrieval with Question-Description Relevance Regularization [34.29822107097347]
We adapt a simple adversarial learning technique to generate difficult code snippets given the input question.
We propose to leverage question-description relevance to regularize adversarial learning.
Our adversarial learning method is able to improve the performance of state-of-the-art models.
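As a rough sketch of training against difficult examples, the snippet below mines the hardest in-batch negative for each question under a margin ranking loss; the paper's adversarial snippet generation and its question-description relevance regularizer are not reproduced.

```python
# In-batch hard-negative mining for code retrieval: for each question, the most
# similar non-matching code snippet in the batch serves as the "difficult"
# negative in a margin ranking loss. Illustrative only, not the paper's method.
import torch
import torch.nn.functional as F

def hard_negative_ranking_loss(q_emb, code_emb, margin: float = 0.2) -> torch.Tensor:
    q, c = F.normalize(q_emb, dim=-1), F.normalize(code_emb, dim=-1)
    sim = q @ c.t()                                   # [batch, batch] similarity matrix
    pos = sim.diag()                                  # matching question/code pairs
    sim_neg = sim - torch.eye(sim.size(0)) * 1e9      # mask out the positives
    hard_neg = sim_neg.max(dim=1).values              # hardest in-batch negative per question
    return F.relu(margin + hard_neg - pos).mean()

# Placeholder embeddings standing in for encoded questions and code snippets.
loss = hard_negative_ranking_loss(torch.randn(8, 128), torch.randn(8, 128))
```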
arXiv Detail & Related papers (2020-10-19T19:32:03Z)