Steps Before Syntax: Helping Novice Programmers Solve Problems using the
PCDIT Framework
- URL: http://arxiv.org/abs/2109.08896v1
- Date: Sat, 18 Sep 2021 10:31:15 GMT
- Authors: Oka Kurniawan, Cyrille Jégourel, Norman Tiong Seng Lee, Matthieu De Mari, Christopher M. Poskitt
- Abstract summary: Novice programmers often struggle with problem solving due to the high cognitive loads they face.
Many introductory programming courses do not explicitly teach it, assuming that problem solving skills are acquired along the way.
We present 'PCDIT', a non-linear problem solving framework that provides scaffolding to guide novice programmers through the process of transforming a problem specification into an implemented and tested solution for an imperative programming language.
- Score: 2.768397481213625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Novice programmers often struggle with problem solving due to the high
cognitive loads they face. Furthermore, many introductory programming courses
do not explicitly teach it, assuming that problem solving skills are acquired
along the way. In this paper, we present 'PCDIT', a non-linear problem solving
framework that provides scaffolding to guide novice programmers through the
process of transforming a problem specification into an implemented and tested
solution for an imperative programming language. A key distinction of PCDIT is
its focus on developing concrete cases for the problem early without actually
writing test code: students are instead encouraged to think about the abstract
steps from inputs to outputs before mapping anything down to syntax. We reflect
on our experience of teaching an introductory programming course using PCDIT,
and report the results of a survey that suggests it helped students to break
down challenging problems, organise their thoughts, and reach working
solutions.
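Although the paper presents PCDIT as a pencil-and-paper scaffold rather than a library, its workflow of writing concrete input/output cases and abstract steps before any syntax can be sketched on a toy exercise. The stage breakdown and the problem below are our own illustration, not an example from the paper:

```python
# A hypothetical PCDIT-style walk-through on a toy exercise.

# [P]roblem: given a list of integers, return the sum of the even ones.

# [C]ases: concrete input -> output pairs, written down before any code:
#   [1, 2, 3, 4] -> 6
#   []           -> 0
#   [1, 3, 5]    -> 0

# [D]esign: abstract steps from input to output (still no syntax):
#   1. start a running total at 0
#   2. look at each number; if it is even, add it to the total
#   3. the total is the answer

# [I]mplementation: only now map the steps onto Python.
def sum_even(numbers):
    total = 0                  # step 1: running total
    for n in numbers:          # step 2: check each number
        if n % 2 == 0:
            total += n
    return total               # step 3: report the total

# [T]est: the concrete cases from [C] become executable checks.
assert sum_even([1, 2, 3, 4]) == 6
assert sum_even([]) == 0
assert sum_even([1, 3, 5]) == 0
```

The point of the ordering is that the cases and design steps exist before the function body, so the implementation is a translation of an already-solved problem rather than the place where the solving happens.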
Related papers
- ChatGPT as a Solver and Grader of Programming Exams written in Spanish [3.8984586307450093]
We assess ChatGPT's capacities to solve and grade real programming exams.
Our findings suggest that this AI model is only effective for solving simple coding tasks.
Its proficiency in tackling complex problems or evaluating solutions authored by others is far from effective.
arXiv Detail & Related papers (2024-09-23T15:20:07Z)
- Estimating Difficulty Levels of Programming Problems with Pre-trained Model [18.92661958433282]
The difficulty level of each programming problem serves as an essential reference for guiding students' adaptive learning.
We formulate the problem of automatic difficulty level estimation of each programming problem, given its textual description and a solution example of code.
For tackling this problem, we propose to couple two pre-trained models, one for text modality and the other for code modality, into a unified model.
arXiv Detail & Related papers (2024-06-13T05:38:20Z)
- A Knowledge-Component-Based Methodology for Evaluating AI Assistants [9.412070852474313]
We evaluate an automatic hint generator for CS1 programming assignments powered by GPT-4.
This system provides natural language guidance about how students can improve their incorrect solutions to short programming exercises.
arXiv Detail & Related papers (2024-06-09T00:58:39Z)
- Probeable Problems for Beginner-level Programming-with-AI Contests [0.0]
We conduct a 2-hour programming contest for undergraduate Computer Science students from multiple institutions.
Students were permitted to work individually or in groups, and were free to use AI tools.
We analyze the extent to which the code submitted by these groups identifies missing details, and we identify ways in which Probeable Problems can support learning in formal and informal CS educational contexts.
arXiv Detail & Related papers (2024-05-24T00:39:32Z)
- Distilling Algorithmic Reasoning from LLMs via Explaining Solution Programs [2.3020018305241337]
Distilling explicit chain-of-thought reasoning paths has emerged as an effective method for improving the reasoning abilities of large language models.
We propose a novel approach to distill reasoning abilities from LLMs by leveraging their capacity to explain solutions.
Our experiments demonstrate that learning from explanations enables the Reasoner to more effectively guide program implementation by a Coder.
arXiv Detail & Related papers (2024-04-11T22:19:50Z)
- Boosting of Thoughts: Trial-and-Error Problem Solving with Large Language Models [48.43678591317425]
Boosting of Thoughts (BoT) is an automated prompting framework for problem solving with Large Language Models.
We show that BoT consistently achieves higher or comparable problem-solving rates than other advanced prompting approaches.
arXiv Detail & Related papers (2024-02-17T00:13:36Z)
- Large Language Models as Analogical Reasoners [155.9617224350088]
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks.
We introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models.
arXiv Detail & Related papers (2023-10-03T00:57:26Z)
- Evaluating and Improving Tool-Augmented Computation-Intensive Math Reasoning [75.74103236299477]
Chain-of-thought prompting(CoT) and tool augmentation have been validated as effective practices for improving large language models.
We propose a new approach, namely DELI, that deliberates over the reasoning steps with tool interfaces.
Experimental results on CARP and six other datasets show that the proposed DELI mostly outperforms competitive baselines.
arXiv Detail & Related papers (2023-06-04T17:02:59Z)
- JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding [74.12405417718054]
This paper aims to advance the mathematical intelligence of machines by presenting the first Chinese mathematical pre-trained language model (PLM).
Unlike other standard NLP tasks, mathematical texts are difficult to understand, since they involve mathematical terminology, symbols and formulas in the problem statement.
We design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.
arXiv Detail & Related papers (2022-06-13T17:03:52Z)
- Machine Learning Methods in Solving the Boolean Satisfiability Problem [72.21206588430645]
The paper reviews the recent literature on solving the Boolean satisfiability problem (SAT) with machine learning techniques.
We examine the evolving ML-SAT solvers from naive classifiers with handcrafted features to the emerging end-to-end SAT solvers such as NeuroSAT.
arXiv Detail & Related papers (2022-03-02T05:14:12Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.