Investigating the Use of Productive Failure as a Design Paradigm for Learning Introductory Python Programming
- URL: http://arxiv.org/abs/2411.11227v1
- Date: Mon, 18 Nov 2024 01:39:05 GMT
- Title: Investigating the Use of Productive Failure as a Design Paradigm for Learning Introductory Python Programming
- Authors: Hussel Suriyaarachchi, Paul Denny, Suranga Nanayakkara
- Abstract summary: Productive Failure (PF) is a learning approach where students tackle novel problems targeting concepts they have not yet learned, followed by a consolidation phase where these concepts are taught.
Recent application in STEM disciplines suggests that PF can help learners develop more robust conceptual knowledge.
We designed a novel PF-based learning activity that incorporated the unobtrusive collection of real-time heart-rate data from consumer-grade wearable sensors.
We found that although there was no difference in initial learning outcomes between the groups, students who followed the PF approach showed better knowledge retention and performance on delayed but similar tasks.
- Score: 7.8163934921246945
- License:
- Abstract: Productive Failure (PF) is a learning approach where students initially tackle novel problems targeting concepts they have not yet learned, followed by a consolidation phase where these concepts are taught. Recent application in STEM disciplines suggests that PF can help learners develop more robust conceptual knowledge. However, empirical validation of PF for programming education remains under-explored. In this paper, we investigate the use of PF to teach Python lists to undergraduate students with limited prior programming experience. We designed a novel PF-based learning activity that incorporated the unobtrusive collection of real-time heart-rate data from consumer-grade wearable sensors. This sensor data was used both to make the learning activity engaging and to infer cognitive load. We evaluated our approach with 20 participants, half of whom were taught Python concepts using Direct Instruction (DI), and the other half with PF. We found that although there was no difference in initial learning outcomes between the groups, students who followed the PF approach showed better knowledge retention and performance on delayed but similar tasks. In addition, physiological measurements indicated that these students also exhibited a larger decrease in cognitive load during their tasks after instruction. Our findings suggest that PF-based approaches may lead to more robust learning, and that future work should investigate similar activities at scale across a range of concepts.
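The paper teaches Python lists through a heart-rate-driven activity. As a purely hypothetical illustration of that kind of task (the function names and sample readings below are assumptions, not the authors' actual materials), a list exercise over wearable heart-rate data might look like:

```python
# Hypothetical sketch of a Python-lists task over heart-rate readings,
# in the spirit of the activity described in the abstract.

def resting_and_peak(heart_rates):
    """Return the minimum and maximum heart rate from a list of readings."""
    return min(heart_rates), max(heart_rates)

def above_threshold(heart_rates, threshold):
    """Collect readings above a threshold, e.g. moments of elevated load."""
    return [bpm for bpm in heart_rates if bpm > threshold]

readings = [72, 75, 74, 90, 96, 88, 76]   # beats per minute
low, high = resting_and_peak(readings)     # (72, 96)
elevated = above_threshold(readings, 85)   # [90, 96, 88]
```

In a PF design, students would attempt such problems before lists and comprehensions are formally taught, with instruction consolidating the concepts afterwards.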
Related papers
- Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces [34.00971641141313]
"Unlearning" certain concepts in large language models (LLMs) has attracted immense attention recently.
Current protocols to evaluate unlearning methods rely on behavioral tests, without monitoring the presence of associated knowledge.
We argue that unlearning should also be evaluated internally, by considering changes in the parametric knowledge traces of the unlearned concepts.
arXiv Detail & Related papers (2024-06-17T15:00:35Z)
- Personalized Forgetting Mechanism with Concept-Driven Knowledge Tracing [16.354428270912138]
We propose a Concept-driven Personalized Forgetting knowledge tracing model (CPF).
CPF integrates hierarchical relationships between knowledge concepts and incorporates students' personalized cognitive abilities.
Our CPF outperforms current forgetting curve theory based methods in predicting student performance.
arXiv Detail & Related papers (2024-04-18T12:28:50Z)
- Continual Learning with Pre-Trained Models: A Survey [61.97613090666247]
Continual Learning aims to overcome the catastrophic forgetting of former knowledge when learning new tasks.
This paper presents a comprehensive survey of the latest advancements in PTM-based CL.
arXiv Detail & Related papers (2024-01-29T18:27:52Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA yields significant performance gains over standard SFT.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- When Meta-Learning Meets Online and Continual Learning: A Survey [39.53836535326121]
Meta-learning is a data-driven approach to optimizing the learning algorithm.
Continual learning and online learning both involve incrementally updating a model with streaming data.
This paper organizes various problem settings using consistent terminology and formal descriptions.
arXiv Detail & Related papers (2023-11-09T09:49:50Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- Federated Unlearning via Active Forgetting [24.060724751342047]
We propose a novel federated unlearning framework based on incremental learning.
Our framework differs from existing federated unlearning methods that rely on approximate retraining or data influence estimation.
arXiv Detail & Related papers (2023-07-07T03:07:26Z)
- Set-to-Sequence Ranking-based Concept-aware Learning Path Recommendation [49.85548436111153]
We propose a novel framework named Set-to-Sequence Ranking-based Concept-aware Learning Path Recommendation (SRC).
SRC formulates the recommendation task under a set-to-sequence paradigm.
We conduct extensive experiments on two real-world public datasets and one industrial dataset.
arXiv Detail & Related papers (2023-06-07T08:24:44Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the lightweight active learner, which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- pymdp: A Python library for active inference in discrete state spaces [52.85819390191516]
pymdp is the first open-source Python package for simulating active inference with partially observable Markov decision processes (POMDPs).
arXiv Detail & Related papers (2022-01-11T12:18:44Z)
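The pymdp entry concerns active inference in discrete state spaces. As a conceptual sketch only (plain Python, not the pymdp API), the Bayesian belief update that underlies such POMDP agents can be written as:

```python
# Conceptual sketch: the discrete Bayesian belief update at the core of
# POMDP-based agents. Models T and O below are small illustrative examples.

def belief_update(belief, action, observation, T, O):
    """Posterior over hidden states after taking `action` and seeing `observation`.

    belief: list of P(s) over hidden states
    T[a][s][s2]: P(s2 | s, a), the transition model
    O[s2][o]:    P(o | s2),    the observation model
    """
    n = len(belief)
    # Predict: push the current belief through the transition model.
    predicted = [sum(T[action][s][s2] * belief[s] for s in range(n))
                 for s2 in range(n)]
    # Correct: weight each predicted state by the observation likelihood.
    unnorm = [O[s2][observation] * predicted[s2] for s2 in range(n)]
    z = sum(unnorm)
    return [p / z for p in unnorm]

# Two hidden states, one action, two observations.
T = [[[0.9, 0.1], [0.2, 0.8]]]   # T[a][s][s2]
O = [[0.8, 0.2], [0.3, 0.7]]     # O[s2][o]
posterior = belief_update([0.5, 0.5], action=0, observation=0, T=T, O=O)
```

Active-inference agents additionally select actions by minimizing expected free energy over such beliefs, which pymdp implements for discrete generative models.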
This list is automatically generated from the titles and abstracts of the papers in this site.