A systematic review of research on large language models for computer programming education
        - URL: http://arxiv.org/abs/2506.21818v1
 - Date: Sun, 13 Apr 2025 20:13:45 GMT
 - Title: A systematic review of research on large language models for computer programming education
 - Authors: Meina Zhu, Lanyu Xu, Barbara Ericson
 - Abstract summary: Large language models (LLMs) play a critical role in computer programming education. This study provides a systematic review of selected empirical studies on LLMs in computer programming education.
 - License: http://creativecommons.org/licenses/by/4.0/
 - Abstract: Given the increasing demand for computer programming education and the rapid advancement of large language models (LLMs), LLMs are playing a critical role in programming education. This study provides a systematic review of selected empirical studies on LLMs in computer programming education published from 2023 to March 2024. The data for this review were collected from the Web of Science (SCI/SSCI), SCOPUS, and EBSCOhost databases, as well as three conference proceedings specializing in computer programming education. In total, 42 studies met the selection criteria and were reviewed using bibliometric analysis, thematic analysis, and structural topic modeling. This study offers an overview of the current state of research on LLMs in computer programming education. It outlines LLMs' applications, benefits, limitations, concerns, and implications for future research and practice, establishing connections between LLMs and their practical use in computer programming education. The review also provides examples and insights for instructional designers, instructors, and learners, and proposes a conceptual framework to guide education practitioners in integrating LLMs into computer programming education. Finally, the study suggests future research directions, emphasizing the need to expand research methods and topics as LLMs evolve, and to pursue large-scale collaborative, interdisciplinary, and transdisciplinary efforts with a focus on longitudinal research and development initiatives.
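To illustrate the topic-modeling step the abstract mentions: structural topic modeling is usually done with the R `stm` package, and the review does not detail its toolchain here, so the sketch below uses scikit-learn's LDA as a rough Python stand-in on invented abstracts.

```python
# Illustrative sketch only: the review used structural topic modeling
# (typically the R `stm` package); plain LDA via scikit-learn stands in
# here to show the general shape of the pipeline. Abstracts are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "LLM-based hints improve novice debugging in CS1 courses.",
    "Students use ChatGPT to explain compiler error messages.",
    "Automated feedback from code LLMs in introductory programming.",
]

# Bag-of-words representation of the corpus.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

# Fit a small topic model; a real review would tune the number of topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top terms per topic, as a reviewer might do to label themes.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```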
 
       
      
        Related papers
- Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities [62.05713042908654]
This paper provides a review of advances in Large Language Model (LLM) alignment through the lens of inverse reinforcement learning (IRL). We highlight the necessity of constructing neural reward models from human data and discuss the formal and practical implications of this paradigm shift.
arXiv  Detail & Related papers  (2025-07-17T14:22:24Z)
- On the Opportunities of Large Language Models for Programming Process Data [6.023152721616896]
We discuss opportunities for using large language models to analyze programming process data.
To complement our discussion, we outline a case study in which we leveraged LLMs to automatically summarize the programming process.
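As a rough illustration of what such summarization might look like in practice (the model name, prompt, and event format below are assumptions, not the paper's setup):

```python
# Hypothetical sketch of the kind of pipeline the paper describes: feed a
# trace of programming-process events (edits, runs, errors) to an LLM and
# ask for a summary. Model, prompt, and event format are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A toy programming-process trace; real data would come from IDE logging.
events = [
    "09:01 created solution.py",
    "09:05 ran tests: 0/5 passing (SyntaxError line 12)",
    "09:09 fixed syntax, ran tests: 3/5 passing",
    "09:20 refactored loop, ran tests: 5/5 passing",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system",
         "content": "Summarize this student's programming process in two sentences."},
        {"role": "user", "content": "\n".join(events)},
    ],
)
print(response.choices[0].message.content)
```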
arXiv  Detail & Related papers  (2024-11-01T07:20:01Z)
- Large Language Models in Computer Science Education: A Systematic Literature Review [7.240148550817106]
Large language models (LLMs) are becoming increasingly capable across a wide range of Natural Language Processing (NLP) tasks.
Recently, these models have extended their capabilities to coding tasks, bridging the gap between natural languages (NL) and programming languages (PL).
arXiv  Detail & Related papers  (2024-10-21T17:49:50Z)
- Retrieval-Enhanced Machine Learning: Synthesis and Opportunities [60.34182805429511]
Retrieval enhancement can be extended to a broader spectrum of machine learning (ML).
This work introduces a formal framework for this paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing the literature across various ML domains under a consistent notation, which is missing from the current literature.
The goal of this work is to equip researchers across disciplines with a comprehensive, formally structured framework for retrieval-enhanced models, thereby fostering interdisciplinary future research.
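A minimal sketch of the retrieval-enhanced prediction pattern that REML formalizes, with a toy corpus and encoder standing in for the learned components a real system would use:

```python
# Minimal sketch of retrieval-enhanced prediction: augment a model's input
# with items retrieved from an external corpus. The corpus, encoder, and
# query here are invented toys, not REML's formal components.
import numpy as np

corpus = [
    "binary search halves the range",
    "quicksort partitions around a pivot",
    "hash tables give O(1) average lookup",
]

def encode(text: str) -> np.ndarray:
    """Toy bag-of-characters encoder; a real system would use a learned embedder."""
    vec = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    return vec / (np.linalg.norm(vec) + 1e-9)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k corpus items most similar to the query (cosine similarity)."""
    q = encode(query)
    scores = [q @ encode(doc) for doc in corpus]
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

# Retrieval-enhanced input: the downstream predictor sees query + retrieved context.
query = "how does quicksort choose elements?"
augmented = query + " [CONTEXT] " + " ".join(retrieve(query))
print(augmented)
```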
arXiv  Detail & Related papers  (2024-07-17T20:01:21Z)
- Analyzing LLM Usage in an Advanced Computing Class in India [4.580708389528142]
This study examines the use of large language models (LLMs) by undergraduate and graduate students for programming assignments in advanced computing classes.
We conducted a comprehensive analysis involving 411 students from a Distributed Systems class at an Indian university.
arXiv  Detail & Related papers  (2024-04-06T12:06:56Z)
- CSEPrompts: A Benchmark of Introductory Computer Science Prompts [11.665831944836118]
Recent advances in AI, machine learning, and NLP have led to the development of a new generation of Large Language Models (LLMs).
Commercial applications have made this technology available to the general public, thus making it possible to use LLMs to produce high-quality texts for academic and professional purposes.
Schools and universities are aware of the increasing use of AI-generated content by students, and they have been researching the impact of this new technology and its potential for misuse.
arXiv  Detail & Related papers  (2024-04-03T07:55:57Z)
- Large Language Models for Education: A Survey and Outlook [69.02214694865229]
We systematically review the technological advancements in each perspective, organize related datasets and benchmarks, and identify the risks and challenges associated with deploying LLMs in education.
Our survey aims to provide a comprehensive technological picture for educators, researchers, and policymakers to harness the power of LLMs to revolutionize educational practices and foster a more effective personalized learning environment.
arXiv  Detail & Related papers  (2024-03-26T21:04:29Z)
- "Which LLM should I use?": Evaluating LLMs for tasks performed by Undergraduate Computer Science Students [2.6043678412433713]
This study evaluates the effectiveness of large language models (LLMs) in performing tasks common among undergraduate computer science students.
Our research systematically assesses several publicly available LLMs, such as Google Bard, ChatGPT (3.5), GitHub Copilot Chat, and Microsoft Copilot Chat.
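Evaluations of this kind often score model-generated code by executing it against instructor-written tests. A hypothetical sketch of that pattern (the task, candidate solution, and tests are invented):

```python
# Hypothetical sketch of a common evaluation approach: run model-generated
# code against instructor test cases and count passes. The candidate code
# and tests below are invented examples, not the paper's tasks.
import subprocess
import sys
import tempfile

candidate = "def factorial(n):\n    return 1 if n <= 1 else n * factorial(n - 1)\n"
tests = "assert factorial(0) == 1\nassert factorial(5) == 120\nprint('PASS')\n"

# Run candidate plus tests in a fresh interpreter; real harnesses sandbox this.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(candidate + tests)
    path = f.name

result = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=10)
print("passed" if "PASS" in result.stdout else "failed", result.stderr.strip())
```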
arXiv  Detail & Related papers  (2024-01-22T15:11:36Z)
- The Efficiency Spectrum of Large Language Models: An Algorithmic Survey [54.19942426544731]
The rapid growth of Large Language Models (LLMs) has been a driving force in transforming various domains.
This paper examines the multi-faceted dimensions of efficiency essential for the end-to-end algorithmic development of LLMs.
arXiv  Detail & Related papers  (2023-12-01T16:00:25Z)
- Instruction Tuning for Large Language Models: A Survey [52.86322823501338]
We provide a systematic review of the literature, including the general methodology of supervised fine-tuning (SFT). We also review the potential pitfalls of SFT and criticism against it, along with efforts that point out deficiencies of existing strategies.
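A minimal sketch of the SFT step the survey covers, assuming a Hugging Face causal LM; the model, data, and prompt template are placeholders, not the survey's setup:

```python
# Minimal sketch of supervised fine-tuning (SFT): train a causal LM on
# (instruction, response) pairs with the standard LM loss. Model choice,
# data, and hyperparameters are placeholders. A fuller setup would mask
# the instruction tokens out of the loss so only response tokens are trained on.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; instruction tuning typically uses larger models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

pairs = [
    ("Explain recursion briefly.",
     "Recursion is when a function calls itself on a smaller input."),
]

model.train()
for instruction, response in pairs:
    text = f"### Instruction:\n{instruction}\n### Response:\n{response}"
    batch = tokenizer(text, return_tensors="pt")
    # Labels equal the inputs: the model learns to reproduce the response tokens.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"loss: {loss.item():.3f}")
```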
arXiv  Detail & Related papers  (2023-08-21T15:35:16Z)
- A Comprehensive Overview of Large Language Models [68.22178313875618]
Large Language Models (LLMs) have recently demonstrated remarkable capabilities in natural language processing tasks.
This article provides an overview of the existing literature on a broad range of LLM-related concepts.
arXiv  Detail & Related papers  (2023-07-12T20:01:52Z)
- Hierarchical Programmatic Reinforcement Learning via Learning to Compose Programs [58.94569213396991]
We propose a hierarchical programmatic reinforcement learning framework to produce program policies.
By learning to compose programs, our proposed framework can produce program policies that describe behaviors of out-of-distribution complexity.
The experimental results in the Karel domain show that our proposed framework outperforms baselines.
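The core idea of composing program policies can be sketched as follows; this is an invented toy, not the paper's method, and the primitives and selection rule are assumptions:

```python
# Illustrative sketch of composing program policies: a high-level policy
# sequences primitive sub-programs. The one-dimensional world, primitives,
# and hard-coded plan are invented for illustration only.

# Primitive programs: each maps an agent state (x position) to a new state.
def move_right(x: int) -> int:
    return x + 1

def move_left(x: int) -> int:
    return x - 1

PRIMITIVES = {"right": move_right, "left": move_left}

def composed_policy(plan: list[str], x: int) -> int:
    """Execute a sequence of primitive programs; this is the 'program policy'."""
    for name in plan:
        x = PRIMITIVES[name](x)
    return x

# A learned high-level policy would output `plan`; here it is hard-coded.
print(composed_policy(["right", "right", "left"], x=0))  # -> 1
```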
arXiv  Detail & Related papers  (2023-01-30T14:50:46Z)
- Application of Artificial Intelligence and Machine Learning in Libraries: A Systematic Review [0.0]
The aim of this study is to provide a synthesis of empirical studies exploring the application of artificial intelligence and machine learning in libraries.
Data were collected from the Web of Science, Scopus, LISA, and LISTA databases.
Findings show that current AI and ML research relevant to the LIS domain focuses mainly on theoretical work.
arXiv  Detail & Related papers  (2021-12-06T07:33:09Z) 
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.