Understanding the Effects of Using Parsons Problems to Scaffold Code Writing for Students with Varying CS Self-Efficacy Levels
- URL: http://arxiv.org/abs/2311.18115v1
- Date: Wed, 29 Nov 2023 22:02:46 GMT
- Title: Understanding the Effects of Using Parsons Problems to Scaffold Code Writing for Students with Varying CS Self-Efficacy Levels
- Authors: Xinying Hou, Barbara J. Ericson, Xu Wang
- Abstract summary: We investigated the impact of using Parsons problems as a code-writing scaffold for students with varying levels of CS self-efficacy.
For students with low CS self-efficacy levels, those who received scaffolding achieved significantly higher practice performance and in-practice problem-solving efficiency.
Students with higher pre-practice knowledge on the topic were more likely to effectively use the Parsons scaffolding.
- Score: 7.277912553209182
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Introductory programming courses aim to teach students to write code
independently. However, transitioning from studying worked examples to
generating their own code is often difficult and frustrating for students,
especially those with lower CS self-efficacy in general. Therefore, we
investigated the impact of using Parsons problems as a code-writing scaffold
for students with varying levels of CS self-efficacy. Parsons problems are
programming tasks where students arrange mixed-up code blocks in the correct
order. We conducted a between-subjects study with undergraduate students (N=89)
on a topic where students have limited code-writing expertise. Students were
randomly assigned to one of two conditions. Students in one condition practiced
writing code without any scaffolding, while students in the other condition
were provided with scaffolding in the form of an equivalent Parsons problem. We
found that, for students with low CS self-efficacy levels, those who received
scaffolding achieved significantly higher practice performance and in-practice
problem-solving efficiency compared to those without any scaffolding.
Furthermore, when given Parsons problems as scaffolding during practice,
students with lower CS self-efficacy were more likely to solve them. In
addition, students with higher pre-practice knowledge on the topic were more
likely to effectively use the Parsons scaffolding. This study provides evidence
for the benefits of using Parsons problems to scaffold students' code-writing
activities. It also has implications for optimizing the Parsons scaffolding
experience for students, including providing personalized and adaptive Parsons
problems based on the student's current problem-solving status.
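For readers unfamiliar with the format, the snippet below is a minimal, hypothetical Parsons problem; the task and function name are illustrative and not taken from the study materials. In the scaffolded condition, an equivalent Parsons problem of this kind accompanied each code-writing exercise.

```python
# A minimal, hypothetical Parsons problem: the lines of a correct solution are
# shown to the student in shuffled order, and the student must rearrange them.

import random

# Correct solution, one block per line (illustrative task: sum the even numbers in a list).
solution_blocks = [
    "def sum_evens(numbers):",
    "    total = 0",
    "    for n in numbers:",
    "        if n % 2 == 0:",
    "            total += n",
    "    return total",
]

def make_parsons_problem(blocks, seed=0):
    """Return the blocks in a shuffled order, as the student would first see them."""
    shuffled = blocks[:]
    random.Random(seed).shuffle(shuffled)
    return shuffled

def is_solved(student_order, blocks):
    """An attempt is correct when the blocks are back in the original order."""
    return student_order == blocks

if __name__ == "__main__":
    puzzle = make_parsons_problem(solution_blocks)
    print("Arrange these blocks into a working function:")
    for block in puzzle:
        print(repr(block))
    # A student's submission is graded by comparing their ordering to the original.
    print("Correct ordering submitted:", is_solved(solution_blocks, solution_blocks))
```

Because the blocks already contain the correct statements and indentation, the student reasons about algorithm structure rather than syntax, which is what makes the format attractive as a scaffold for full code writing.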
Related papers
- Automating Personalized Parsons Problems with Customized Contexts and Concepts [2.185263087861945]
Large language models (LLMs) may offer a solution by allowing students to produce on-demand Parsons problems.
In this paper, we introduce PuzzleMakerPy, an educational tool that uses an LLM to generate unlimited contextualized drag-and-drop programming exercises.
We evaluated PuzzleMakerPy by deploying it in a large introductory programming course, and found that the ability to personalize the contextual framing was highly engaging for students.
arXiv Detail & Related papers (2024-04-17T02:01:50Z)
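PuzzleMakerPy's actual pipeline is not described in the summary above; the sketch below only illustrates the general recipe such a tool could follow, with a placeholder `generate_solution_with_llm` standing in for whatever LLM call produces a contextualized solution (all names here are hypothetical).

```python
# Hedged sketch of generating a drag-and-drop (Parsons-style) exercise from an
# LLM-produced solution. The LLM call is abstracted behind a placeholder so the
# sketch stays runnable and makes no claims about PuzzleMakerPy's implementation.

import random
from typing import List

def generate_solution_with_llm(topic: str, context: str) -> List[str]:
    """Placeholder for an LLM call returning a correct solution, one block per line.

    A real tool would prompt a model with the topic and the student's chosen
    context; here we return a canned solution instead.
    """
    return [
        f"def count_items(items):  # context: {context}",
        "    count = 0",
        "    for item in items:",
        "        count += 1",
        "    return count",
    ]

def to_drag_and_drop_exercise(solution: List[str], seed: int = 42) -> dict:
    """Turn a solution into an exercise: shuffled blocks plus an answer key."""
    blocks = solution[:]
    random.Random(seed).shuffle(blocks)
    return {"blocks": blocks, "answer": solution}

if __name__ == "__main__":
    solution = generate_solution_with_llm(topic="loops", context="counting coins")
    for block in to_drag_and_drop_exercise(solution)["blocks"]:
        print(block)
```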
- CodeTailor: LLM-Powered Personalized Parsons Puzzles for Engaging Support While Learning Programming [6.43344619836303]
Generative AI can create a solution for most intro-level programming problems.
Students might use these tools to just generate code for them, resulting in reduced engagement and limited learning.
We present CodeTailor, a system that leverages a large language model (LLM) to provide personalized help to students.
arXiv Detail & Related papers (2024-01-22T17:08:54Z)
- Integrating Personalized Parsons Problems with Multi-Level Textual Explanations to Scaffold Code Writing [7.277912553209182]
Novice programmers need to write basic code as part of the learning process, but they often face difficulties.
To assist struggling students, we recently implemented personalized Parsons problems as pop-up scaffolding, in which students arrange mixed-up blocks of code into the correct order.
Students found them more engaging and preferred them for learning over simply receiving the correct answer.
arXiv Detail & Related papers (2024-01-06T07:27:46Z)
- MOON: Assisting Students in Completing Educational Notebook Scenarios [0.0]
Notebooks come with many attractive features, such as the ability to combine textual explanations, multimedia content, and executable code.
This execution model can quickly become an issue when students do not follow the execution order intended by the teacher.
We present a novel approach, MOON, designed to remedy this problem.
arXiv Detail & Related papers (2023-09-28T06:49:30Z)
- MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems [74.73881579517055]
We propose a framework to generate such dialogues by pairing human teachers with a Large Language Model prompted to represent common student errors.
We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues.
arXiv Detail & Related papers (2023-05-23T21:44:56Z)
- Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training [65.10741459705739]
We propose a novel contrastive pre-training approach for mathematical question representations, namely QuesCo.
We first design two-level question augmentations, including content-level and structure-level, which generate literally diverse question pairs with similar purposes.
Then, to fully exploit hierarchical information of knowledge concepts, we propose a knowledge hierarchy-aware rank strategy.
arXiv Detail & Related papers (2023-01-18T14:23:29Z)
- Fair and skill-diverse student group formation via constrained k-way graph partitioning [65.29889537564455]
This work introduces an unsupervised algorithm for fair and skill-diverse student group formation.
The skill sets of students are determined using unsupervised dimensionality reduction of course mark data via the Laplacian eigenmap.
The problem is formulated as a constrained graph partitioning problem, whereby the diversity of skill sets in each group is maximised.
arXiv Detail & Related papers (2023-01-12T14:02:49Z)
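The summary above names two ingredients, a Laplacian-eigenmap embedding of course marks and a constrained k-way partition; the sketch below shows only the embedding step plus a naive balanced grouping heuristic, not the paper's constrained graph-partitioning formulation (the mark data and group count are synthetic).

```python
# Hedged sketch: embed students via a Laplacian eigenmap of their course marks,
# then form equal-sized groups with a simple heuristic. This is NOT the paper's
# constrained k-way graph partitioning; it only illustrates the embedding idea.

import numpy as np
from sklearn.manifold import SpectralEmbedding  # Laplacian eigenmap

rng = np.random.default_rng(0)
marks = rng.uniform(40, 100, size=(30, 6))  # 30 students x 6 assessment marks (synthetic)

# 2-D Laplacian-eigenmap embedding: nearby students have similar skill profiles.
embedding = SpectralEmbedding(n_components=2, random_state=0).fit_transform(marks)

# Naive "skill-diverse" grouping: sort students along the first embedding axis and
# deal them out round-robin, so each group spans the skill spectrum.
n_groups = 5
order = np.argsort(embedding[:, 0])
groups = {g: [] for g in range(n_groups)}
for rank, student in enumerate(order):
    groups[rank % n_groups].append(int(student))

for g, members in groups.items():
    print(f"group {g}: students {members}")
```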
- Identifying Different Student Clusters in Functional Programming Assignments: From Quick Learners to Struggling Students [2.0386745041807033]
We analyze student assignment submission data collected from a functional programming course taught at McGill University.
This allows us to identify four clusters of students: "Quick-learning", "Hardworking", "Satisficing", and "Struggling".
We then analyze how work habits, working duration, the range of errors, and the ability to fix errors impact different clusters of students.
arXiv Detail & Related papers (2023-01-06T17:15:58Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
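ProtoTransformer's architecture is not detailed in the entry above; the sketch below only illustrates the prototypical-network style of few-shot classification it alludes to, computing a prototype per feedback class from a handful of instructor-labeled examples and assigning labels by nearest prototype (the embeddings are random stand-ins for a real code encoder, and the feedback labels are hypothetical).

```python
# Hedged sketch of few-shot feedback as nearest-prototype classification.
# A real system would embed student code with a trained encoder; here the
# embeddings are random vectors so the sketch stays self-contained.

import numpy as np

rng = np.random.default_rng(1)
DIM = 16  # embedding dimension (arbitrary)

# A few instructor-labeled support examples per feedback class (hypothetical labels).
support = {
    "off_by_one_error": rng.normal(size=(3, DIM)),
    "missing_base_case": rng.normal(size=(3, DIM)),
    "correct_solution": rng.normal(size=(3, DIM)),
}

# Prototype = mean embedding of the support examples for that feedback class.
prototypes = {label: vecs.mean(axis=0) for label, vecs in support.items()}

def feedback_for(embedding: np.ndarray) -> str:
    """Assign the feedback label whose prototype is closest to the student embedding."""
    return min(prototypes, key=lambda label: np.linalg.norm(embedding - prototypes[label]))

student_submission_embedding = rng.normal(size=DIM)
print("Suggested feedback:", feedback_for(student_submission_embedding))
```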
- The Influence of Domain-Based Preprocessing on Subject-Specific Clustering [55.41644538483948]
The sudden shift to teaching mostly online at universities has increased the workload for academics.
One way to deal with this problem is to cluster students' questions by topic.
In this paper, we explore the realms of tagging data sets, focusing on identifying code excerpts and providing empirical results.
arXiv Detail & Related papers (2020-11-16T17:47:19Z)
- Differentially Private Deep Learning with Smooth Sensitivity [144.31324628007403]
We study privacy concerns through the lens of differential privacy.
In this framework, privacy guarantees are generally obtained by perturbing models in such a way that specifics of data used to train the model are made ambiguous.
One of the most important techniques used in previous works involves an ensemble of teacher models, which return information to a student based on a noisy voting procedure.
In this work, we propose a novel voting mechanism with smooth sensitivity, which we call Immutable Noisy ArgMax, that, under certain conditions, can bear very large random noising from the teacher without affecting the useful information transferred to the student.
arXiv Detail & Related papers (2020-03-01T15:38:00Z)
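The entry above describes teacher-ensemble voting with noise; the snippet below sketches only the generic noisy-argmax vote such schemes build on, not the paper's Immutable Noisy ArgMax mechanism or its smooth-sensitivity analysis (the teacher predictions are simulated).

```python
# Hedged sketch of teacher-ensemble noisy voting (PATE-style): each teacher model
# votes for a label, noise is added to the vote counts, and the student model is
# trained on the noisy argmax. This is the generic baseline, not the paper's
# Immutable Noisy ArgMax mechanism.

import numpy as np

rng = np.random.default_rng(0)
NUM_TEACHERS, NUM_CLASSES = 25, 10

# Simulated teacher predictions for one unlabeled example (most teachers agree on class 3).
teacher_votes = rng.choice(NUM_CLASSES, size=NUM_TEACHERS, p=[0.05] * 3 + [0.55] + [0.05] * 6)

def noisy_argmax(votes: np.ndarray, num_classes: int, noise_scale: float = 1.0) -> int:
    """Return the label with the highest noise-perturbed vote count."""
    counts = np.bincount(votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=noise_scale, size=num_classes)  # perturb the tally
    return int(np.argmax(counts))

print("Label released to the student:", noisy_argmax(teacher_votes, NUM_CLASSES))
```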