Personalized Programming Guidance based on Deep Programming Learning Style Capturing
- URL: http://arxiv.org/abs/2403.14638v1
- Date: Tue, 20 Feb 2024 10:38:38 GMT
- Title: Personalized Programming Guidance based on Deep Programming Learning Style Capturing
- Authors: Yingfan Liu, Renyu Zhu, Ming Gao
- Abstract summary: We propose a novel model called Programming Exercise Recommender with Learning Style (PERS).
PERS simulates learners' intricate programming behaviors.
We perform extensive experiments on two real-world datasets to verify the rationality of modeling programming learning styles.
- Score: 9.152344993023503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of big data and AI technology, programming is in high demand and has become an essential skill for students. Meanwhile, researchers also focus on boosting the online judging system's guidance ability to reduce students' dropout rates. Previous studies mainly targeted at enhancing learner engagement on online platforms by providing personalized recommendations. However, two significant challenges still need to be addressed in programming: C1) how to recognize complex programming behaviors; C2) how to capture intrinsic learning patterns that align with the actual learning process. To fill these gaps, in this paper, we propose a novel model called Programming Exercise Recommender with Learning Style (PERS), which simulates learners' intricate programming behaviors. Specifically, since programming is an iterative and trial-and-error process, we first introduce a positional encoding and a differentiating module to capture the changes of consecutive code submissions (which addresses C1). To better profile programming behaviors, we extend the Felder-Silverman learning style model, a classical pedagogical theory, to perceive intrinsic programming patterns. Based on this, we align three latent vectors to record and update programming ability, processing style, and understanding style, respectively (which addresses C2). We perform extensive experiments on two real-world datasets to verify the rationality of modeling programming learning styles and the effectiveness of PERS for personalized programming guidance.
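The abstract's core mechanism, a differentiating module over positionally encoded code-submission embeddings, can be illustrated with a minimal sketch. This is not the authors' implementation; the function names and the use of standard sinusoidal encodings are illustrative assumptions. The idea: since programming is iterative and trial-and-error, the change between consecutive submissions is itself a signal, so we encode submission order and take element-wise differences between adjacent submission embeddings.

```python
import numpy as np

def positional_encoding(num_positions, dim):
    # Standard sinusoidal positional encoding; an assumption here,
    # since the paper only states that a positional encoding is used.
    pos = np.arange(num_positions)[:, None]
    i = np.arange(dim)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / dim)
    enc = np.zeros((num_positions, dim))
    enc[:, 0::2] = np.sin(angle[:, 0::2])  # even dims: sine
    enc[:, 1::2] = np.cos(angle[:, 1::2])  # odd dims: cosine
    return enc

def differentiate_submissions(code_embeddings):
    """Hypothetical 'differentiating module': add order information to a
    sequence of code-submission embeddings (shape: [num_submissions, dim])
    and return the differences between consecutive submissions, which
    represent how the learner's code changed from one attempt to the next."""
    seq_len, dim = code_embeddings.shape
    encoded = code_embeddings + positional_encoding(seq_len, dim)
    return encoded[1:] - encoded[:-1]  # shape: [num_submissions - 1, dim]
```

A sequence of 5 submission embeddings thus yields 4 difference vectors, one per consecutive pair; downstream, such vectors could feed the latent states the paper describes (programming ability, processing style, understanding style).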
Related papers
- Teaching Programming in the Age of Generative AI: Insights from Literature, Pedagogical Proposals, and Student Perspectives [0.0]
This article aims to review the most relevant studies on how programming content should be taught, learned, and assessed. It proposes enriching teaching and learning methodologies by focusing on code comprehension and execution. It advocates for the use of visual representations of code and visual simulations of its execution as effective tools for teaching, learning, and assessing programming.
arXiv Detail & Related papers (2025-06-30T17:38:27Z) - CoderAgent: Simulating Student Behavior for Personalized Programming Learning with Large Language Models [34.62411261398559]
We propose an LLM-based agent, CoderAgent, to simulate students' programming processes in a fine-grained manner without relying on real data. Specifically, we equip each human learner with an intelligent agent, the core of which lies in capturing the cognitive states of the human programming practice process.
arXiv Detail & Related papers (2025-05-27T02:43:38Z) - InnateCoder: Learning Programmatic Options with Foundation Models [13.218260503808056]
InnateCoder is a system that leverages human knowledge encoded in foundation models to provide programmatic policies. In contrast to existing approaches to learning options, InnateCoder learns them from the general human knowledge encoded in foundation models in a zero-shot setting. We show that InnateCoder is more sample-efficient than versions of the system that do not use options or learn them from experience.
arXiv Detail & Related papers (2025-05-18T17:57:57Z) - DiSciPLE: Learning Interpretable Programs for Scientific Visual Discovery [61.02102713094486]
Good interpretation is important in scientific reasoning, as it allows for better decision-making. This paper introduces an automatic way of obtaining such interpretable-by-design models, by learning programs that interleave neural networks. We propose DiSciPLE, an evolutionary algorithm that leverages common sense and prior knowledge of large language models (LLMs) to create Python programs explaining visual data.
arXiv Detail & Related papers (2025-02-14T10:26:14Z) - Dynamic Skill Adaptation for Large Language Models [78.31322532135272]
We present Dynamic Skill Adaptation (DSA), an adaptive and dynamic framework to adapt novel and complex skills to Large Language Models (LLMs).
For every skill, we utilize LLMs to generate both textbook-like data, which contains detailed descriptions of the skill for pre-training, and exercise-like data, which explicitly applies the skill to solve problems for instruction tuning.
Experiments on large language models such as LLAMA and Mistral demonstrate the effectiveness of our proposed methods in adapting math reasoning skills and social study skills.
arXiv Detail & Related papers (2024-12-26T22:04:23Z) - Evaluating Contextually Personalized Programming Exercises Created with Generative AI [4.046163999707179]
This article reports on a user study conducted in an elective programming course that included contextually personalized programming exercises created with GPT-4.
The results demonstrate that the quality of exercises generated with GPT-4 was generally high.
This suggests that AI-generated programming problems can be a worthwhile addition to introductory programming courses.
arXiv Detail & Related papers (2024-06-11T12:59:52Z) - CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation [58.84212778960507]
We propose CodeGRAG, a Graphical Retrieval Augmented Code Generation framework to enhance the performance of LLMs.
CodeGRAG builds a graphical view of code blocks based on their control flow and data flow to bridge the gap between programming languages and natural language.
Various experiments and ablations on four datasets, covering both C++ and Python, validate the hard meta-graph prompt, the soft prompting technique, and the effectiveness of the objectives for the pretrained GNN expert.
arXiv Detail & Related papers (2024-05-03T02:48:55Z) - PwR: Exploring the Role of Representations in Conversational Programming [17.838776812138626]
We introduce Programming with Representations (PwR), an approach that uses representations to convey the system's understanding back to the user in natural language.
We find that representations significantly improved understandability and instilled a sense of agency among our participants.
arXiv Detail & Related papers (2023-09-18T05:38:23Z) - PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z) - LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning [64.55001982176226]
LIBERO is a novel benchmark of lifelong learning for robot manipulation.
We focus on how to efficiently transfer declarative knowledge, procedural knowledge, or the mixture of both.
We develop an extendible procedural generation pipeline that can in principle generate infinitely many tasks.
arXiv Detail & Related papers (2023-06-05T23:32:26Z) - Programming Knowledge Tracing: A Comprehensive Dataset and A New Model [26.63441910982382]
We propose a new model, PDKT, to exploit the enriched context for accurate student behavior prediction.
We construct a bipartite graph for programming problem embedding, and design an improved pre-training model PLCodeBERT for code embedding.
Experimental results on the new dataset BePKT show that our proposed model establishes state-of-the-art performance in programming knowledge tracing.
arXiv Detail & Related papers (2021-12-11T02:13:11Z) - Learning Multi-Objective Curricula for Deep Reinforcement Learning [55.27879754113767]
Various automatic curriculum learning (ACL) methods have been proposed to improve the sample efficiency and final performance of deep reinforcement learning (DRL).
In this paper, we propose a unified automatic curriculum learning framework to create multi-objective but coherent curricula.
In addition to existing hand-designed curriculum paradigms, we further design a flexible memory mechanism to learn an abstract curriculum.
arXiv Detail & Related papers (2021-10-06T19:30:25Z) - Learning compositional programs with arguments and sampling [12.790055619773565]
We train a machine learning model to discover a program that satisfies specific requirements.
We extend a state-of-the-art model, AlphaNPI, by learning to generate functions that can accept arguments.
arXiv Detail & Related papers (2021-09-01T21:27:41Z) - Learning to Synthesize Programs as Interpretable and Generalizable Policies [25.258598215642067]
We present a framework that learns to synthesize a program, which details the procedure to solve a task in a flexible and expressive manner.
Experimental results demonstrate that the proposed framework not only learns to reliably synthesize task-solving programs but also outperforms DRL and program synthesis baselines.
arXiv Detail & Related papers (2021-08-31T07:03:06Z) - How could Neural Networks understand Programs? [67.4217527949013]
It is difficult to build a model to better understand programs, either by directly applying off-the-shelf NLP pre-training techniques to the source code or by heuristically adding features to the model.
We propose a novel program semantics learning paradigm: the model should learn from information composed of (1) representations that align well with the fundamental operations in operational semantics, and (2) the information of environment transitions.
arXiv Detail & Related papers (2021-05-10T12:21:42Z) - Learning Compositional Neural Programs for Continuous Control [62.80551956557359]
We propose a novel solution to challenging sparse-reward, continuous control problems.
Our solution, dubbed AlphaNPI-X, involves three separate stages of learning.
We empirically show that AlphaNPI-X can effectively learn to tackle challenging sparse manipulation tasks.
arXiv Detail & Related papers (2020-07-27T08:27:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.