Integrating Personalized Parsons Problems with Multi-Level Textual
Explanations to Scaffold Code Writing
- URL: http://arxiv.org/abs/2401.03144v2
- Date: Thu, 11 Jan 2024 09:29:03 GMT
- Title: Integrating Personalized Parsons Problems with Multi-Level Textual
Explanations to Scaffold Code Writing
- Authors: Xinying Hou, Barbara J. Ericson, Xu Wang
- Abstract summary: Novice programmers need to write basic code as part of the learning process, but they often face difficulties.
To assist struggling students, we recently implemented personalized Parsons problems, code puzzles in which students arrange blocks of code into the correct order, delivered as pop-up scaffolding.
Students found them more engaging and preferred them for learning over simply receiving the correct answer.
- Score: 7.277912553209182
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Novice programmers need to write basic code as part of the learning process,
but they often face difficulties. To assist struggling students, we recently
implemented personalized Parsons problems, which are code puzzles in which
students arrange blocks of code into the correct order, delivered as pop-up
scaffolding. Students found them more engaging and preferred them for learning
over simply receiving the correct answer, such as the response they might get from
generative AI tools like ChatGPT. However, a drawback of using Parsons problems
as scaffolding is that students may be able to put the code blocks in the
correct order without fully understanding the rationale of the correct
solution. As a result, the learning benefits of scaffolding are compromised.
Can we improve students' understanding of personalized Parsons scaffolding by
providing textual code explanations? In this poster, we propose a design that
incorporates multiple levels of textual explanations for the Parsons problems.
This design will be used in future technical evaluations and classroom
experiments that explore whether adding textual explanations to Parsons
problems improves their instructional benefits.
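To make the proposed design concrete, the following is a minimal, illustrative sketch rather than the authors' implementation: each block of a Parsons problem carries multiple levels of textual explanation. The class names, the "what"/"why" levels, and the example problem are assumptions introduced purely to show the structure; Python is used since introductory Parsons problems are commonly written in it.

```python
import random
from dataclasses import dataclass, field

@dataclass
class CodeBlock:
    """One draggable block of a Parsons problem, plus layered explanations."""
    code: str
    explanations: dict = field(default_factory=dict)  # explanation level -> text

@dataclass
class ParsonsProblem:
    """A Parsons problem: solution blocks are shown shuffled and must be reordered."""
    prompt: str
    solution: list  # CodeBlock objects in correct order

    def shuffled_blocks(self, seed=None):
        """Return the solution blocks in scrambled order for the student to arrange."""
        blocks = list(self.solution)
        random.Random(seed).shuffle(blocks)
        return blocks

    def check(self, attempt):
        """True if the attempted ordering matches the correct solution."""
        return [b.code for b in attempt] == [b.code for b in self.solution]

    def explain(self, level):
        """Collect one level of explanation for every block, in solution order."""
        return [b.explanations.get(level, "") for b in self.solution]

# Hypothetical example with a brief "what" level and a deeper "why" level.
problem = ParsonsProblem(
    prompt="Return the sum of the even numbers in a list.",
    solution=[
        CodeBlock("def sum_even(nums):",
                  {"what": "Define the function.",
                   "why": "The task asks for a reusable function over any list."}),
        CodeBlock("    total = 0",
                  {"what": "Start an accumulator at zero.",
                   "why": "A running total is needed before the loop adds to it."}),
        CodeBlock("    for n in nums:",
                  {"what": "Loop over every number.",
                   "why": "Each element must be inspected to decide if it is even."}),
        CodeBlock("        if n % 2 == 0:",
                  {"what": "Keep only even values.",
                   "why": "n % 2 == 0 is the standard evenness test."}),
        CodeBlock("            total += n",
                  {"what": "Add the even value to the total.",
                   "why": "Only numbers passing the filter should contribute."}),
        CodeBlock("    return total",
                  {"what": "Return the result.",
                   "why": "The function's value is the accumulated sum."}),
    ],
)

print(problem.check(problem.solution))  # True
print(problem.explain("why")[0])        # deeper rationale for the first block
```

A tutoring interface could, for example, surface the brief level by default and reveal the deeper level on request; this is only one possible way the multi-level design might be presented.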
Related papers
- Automating Personalized Parsons Problems with Customized Contexts and Concepts [2.185263087861945]
Large language models (LLMs) may offer a solution by allowing students to produce on-demand Parsons problems.
In this paper, we introduce PuzzleMakerPy, an educational tool that uses an LLM to generate unlimited contextualized drag-and-drop programming exercises.
We evaluated PuzzleMakerPy by deploying it in a large introductory programming course, and found that the ability to personalize the contextual framing was highly engaging for students (a rough sketch of this kind of generation pipeline appears after this list).
arXiv Detail & Related papers (2024-04-17T02:01:50Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves supervised fine-tuning (SFT) with a significant performance gain.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- CodeTailor: LLM-Powered Personalized Parsons Puzzles for Engaging Support While Learning Programming [6.43344619836303]
Generative AI can create a solution for most intro-level programming problems.
Students might use these tools to just generate code for them, resulting in reduced engagement and limited learning.
We present CodeTailor, a system that leverages a large language model (LLM) to provide personalized help to students.
arXiv Detail & Related papers (2024-01-22T17:08:54Z)
- Understanding the Effects of Using Parsons Problems to Scaffold Code Writing for Students with Varying CS Self-Efficacy Levels [7.277912553209182]
We investigated the impact of using Parsons problems as a code-writing scaffold for students with varying levels of CS self-efficacy.
Among students with low CS self-efficacy, those who received scaffolding achieved significantly higher practice performance and in-practice problem-solving efficiency.
Students with higher pre-practice knowledge on the topic were more likely to effectively use the Parsons scaffolding.
arXiv Detail & Related papers (2023-11-29T22:02:46Z)
- More Robots are Coming: Large Multimodal Models (ChatGPT) can Solve Visually Diverse Images of Parsons Problems [0.4660328753262075]
We evaluate the performance of two large multimodal models on visual assignments.
GPT-4V solved 96.7% of these visual problems, struggling only minimally with a single Parsons problem.
Bard performed poorly, solving only 69.2% of the problems and struggling with common issues such as hallucinations and refusals.
arXiv Detail & Related papers (2023-11-03T14:47:17Z)
- MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems [74.73881579517055]
We propose a framework to generate such dialogues by pairing human teachers with a Large Language Model prompted to represent common student errors.
We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues.
arXiv Detail & Related papers (2023-05-23T21:44:56Z)
- Adaptive Scaffolding in Block-Based Programming via Synthesizing New Tasks as Pop Quizzes [30.127552292093384]
We introduce a scaffolding framework based on pop quizzes presented as multi-choice programming tasks.
To automatically generate these pop quizzes, we propose a novel algorithm, PQuizSyn.
Our algorithm synthesizes new tasks for pop quizzes with the following features: (a) Adaptive (i.e., individualized to the student's current attempt), (b) Comprehensible (i.e., easy to comprehend and solve), and (c) Concealing (i.e., does not reveal the solution code).
arXiv Detail & Related papers (2023-03-28T23:52:15Z)
- Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training [65.10741459705739]
We propose a novel contrastive pre-training approach for mathematical question representations, namely QuesCo.
We first design two-level question augmentations, including content-level and structure-level, which generate textually diverse question pairs with similar purposes.
Then, to fully exploit hierarchical information of knowledge concepts, we propose a knowledge hierarchy-aware rank strategy.
arXiv Detail & Related papers (2023-01-18T14:23:29Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Dive into Deep Learning [119.30375933463156]
The book is drafted in Jupyter notebooks, seamlessly integrating exposition figures, math, and interactive examples with self-contained code.
Our goal is to offer a resource that could (i) be freely available for everyone; (ii) offer sufficient technical depth to provide a starting point on the path to becoming an applied machine learning scientist; (iii) include runnable code, showing readers how to solve problems in practice; (iv) allow for rapid updates, both by us and also by the community at large.
arXiv Detail & Related papers (2021-06-21T18:19:46Z)
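The PuzzleMakerPy and CodeTailor entries above describe LLM-generated, personalized Parsons problems. The snippet below is a minimal, hypothetical sketch of that idea, not either tool's actual implementation: the generate_solution_with_llm helper is a placeholder standing in for a real model call, and the block-splitting and shuffling logic is an assumption introduced purely for illustration.

```python
import random

def generate_solution_with_llm(task_description, student_context):
    """Hypothetical stand-in for an LLM call; not PuzzleMakerPy's or CodeTailor's API.

    A real implementation would prompt a model to write a short, correct solution
    to `task_description`, rephrased around the student's chosen `student_context`.
    """
    # Canned output so the sketch runs without any external service.
    return (
        "def total_goals(matches):\n"
        "    goals = 0\n"
        "    for match in matches:\n"
        "        goals += match\n"
        "    return goals"
    )

def to_parsons_blocks(solution_code, seed=None):
    """Split a correct solution into one-line blocks and shuffle them for drag-and-drop."""
    correct = [line for line in solution_code.splitlines() if line.strip()]
    shuffled = list(correct)
    random.Random(seed).shuffle(shuffled)
    return correct, shuffled

correct_order, puzzle = to_parsons_blocks(
    generate_solution_with_llm(
        task_description="Sum a list of integers",
        student_context="goals scored across soccer matches",  # personalized framing
    ),
    seed=42,
)
print(puzzle)                   # the blocks the student must drag into order
print(puzzle == correct_order)  # usually False after shuffling
```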
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.