Computing Education in the Era of Generative AI
- URL: http://arxiv.org/abs/2306.02608v1
- Date: Mon, 5 Jun 2023 05:43:35 GMT
- Title: Computing Education in the Era of Generative AI
- Authors: Paul Denny and James Prather and Brett A. Becker and James
Finnie-Ansley and Arto Hellas and Juho Leinonen and Andrew Luxton-Reilly and
Brent N. Reeves and Eddie Antonio Santos and Sami Sarsa
- Abstract summary: Recent advances in artificial intelligence have resulted in code generation models that can produce source code from natural language problem descriptions.
We discuss the challenges and opportunities such models present to computing educators.
We consider likely impacts of such models upon pedagogical practice in the context of the most recent advances at the time of writing.
- Score: 6.058132379003054
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The computing education community has a rich history of pedagogical
innovation designed to support students in introductory courses, and to support
teachers in facilitating student learning. Very recent advances in artificial
intelligence have resulted in code generation models that can produce source
code from natural language problem descriptions -- with impressive accuracy in
many cases. The wide availability of these models and their ease of use has
raised concerns about potential impacts on many aspects of society, including
the future of computing education. In this paper, we discuss the challenges and
opportunities such models present to computing educators, with a focus on
introductory programming classrooms. We summarize the results of two recent
articles, the first evaluating the performance of code generation models on
typical introductory-level programming problems, and the second exploring the
quality and novelty of learning resources generated by these models. We
consider likely impacts of such models upon pedagogical practice in the context
of the most recent advances at the time of writing.
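To ground the abstract's central claim, below is a minimal sketch of sending a typical introductory-level problem description to a code generation model. It assumes the `openai` Python package (v1 API) and an OPENAI_API_KEY environment variable; the model name, prompt wording, and problem statement are illustrative choices, not details from the paper.

```python
# Minimal sketch: asking a code generation model to solve a CS1-style
# problem stated in natural language. Assumes the `openai` package is
# installed and OPENAI_API_KEY is set; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

PROBLEM = (
    "Write a Python function sum_of_evens(numbers) that returns "
    "the sum of the even integers in the list `numbers`."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any code-capable model would do here
    messages=[
        {"role": "system",
         "content": "You are a programming assistant. Reply with code only."},
        {"role": "user", "content": PROBLEM},
    ],
)

print(response.choices[0].message.content)  # the generated solution
```

That a prompt this short frequently yields a working solution is precisely the property whose classroom implications the paper examines.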
Related papers
- Deep Generative Models in Robotics: A Survey on Learning from Multimodal Demonstrations [52.11801730860999]
In recent years, the robot learning community has shown increasing interest in using deep generative models to capture the complexity of large datasets.
We present the different types of models that the community has explored, such as energy-based models, diffusion models, action value maps, or generative adversarial networks.
We also present the different types of applications in which deep generative models have been used, from grasp generation to trajectory generation or cost learning.
arXiv Detail & Related papers (2024-08-08T11:34:31Z)
- A review on the use of large language models as virtual tutors [5.014059576916173]
Large Language Models (LLMs) have generated enormous interest across many fields and industrial sectors.
This review seeks to provide a comprehensive overview of those solutions designed specifically to generate and evaluate educational materials.
As expected, the most common role of these systems is as virtual tutors for automatic question generation.
arXiv Detail & Related papers (2024-05-20T12:33:42Z)
- Generative Artificial Intelligence: A Systematic Review and Applications [7.729155237285151]
This paper documents the systematic review and analysis of recent advancements and techniques in Generative AI.
The major impact that generative AI has made to date has been in language generation, through the development of large language models.
The paper ends with a discussion of Responsible AI principles, and the necessary ethical considerations for the sustainability and growth of these generative models.
arXiv Detail & Related papers (2024-05-17T18:03:59Z)
- On the Challenges and Opportunities in Generative AI [135.2754367149689]
We argue that current large-scale generative AI models do not sufficiently address several fundamental issues that hinder their widespread adoption across domains.
In this work, we aim to identify key unresolved challenges in modern generative AI paradigms that should be tackled to further enhance their capabilities, versatility, and reliability.
arXiv Detail & Related papers (2024-02-28T15:19:33Z)
- INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models [39.46610170563634]
INSTRUCTEVAL is a more comprehensive evaluation suite designed specifically for instruction-tuned large language models.
We take a holistic approach to analyze various factors affecting model performance, including the pretraining foundation, instruction-tuning data, and training methods.
Our findings reveal that the quality of instruction data is the most crucial factor in scaling model performance.
arXiv Detail & Related papers (2023-06-07T20:12:29Z)
- Opportunities and Challenges in Neural Dialog Tutoring [54.07241332881601]
We rigorously analyze various generative language models on two dialog tutoring datasets for language learning.
We find that although current approaches can model tutoring in constrained learning scenarios, they perform poorly in less constrained scenarios.
Our human quality evaluation shows that both the models and the ground-truth annotations perform poorly in terms of equitable tutoring.
arXiv Detail & Related papers (2023-01-24T11:00:17Z)
- A Survey of Deep Learning for Mathematical Reasoning [71.88150173381153]
We review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade.
Recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning.
arXiv Detail & Related papers (2022-12-20T18:46:16Z)
- Deep Active Learning for Computer Vision: Past and Future [50.19394935978135]
Despite its indispensable role in developing AI models, research on active learning has not been as intensive as that on other research directions.
By addressing data automation challenges and coping with automated machine learning systems, active learning will facilitate democratization of AI technologies.
arXiv Detail & Related papers (2022-11-27T13:07:14Z)
- Learnware: Small Models Do Big [69.88234743773113]
The prevailing big-model paradigm, which has achieved impressive results in natural language processing and computer vision applications, has not yet addressed these concerns, while itself becoming a serious source of carbon emissions.
This article offers an overview of the learnware paradigm, which aims to spare users from building machine learning models from scratch, in the hope that small models can be reused for purposes even beyond their original ones.
arXiv Detail & Related papers (2022-10-07T15:55:52Z)
- Automatic Generation of Programming Exercises and Code Explanations with Large Language Models [4.947560475228859]
OpenAI Codex is a recent large language model from the GPT-3 family for translating between natural language and code.
We explore the natural language generation capabilities of Codex in two different phases of the life of a programming exercise.
We find the majority of this automatically generated content both novel and sensible, and in many cases ready to use as is.
arXiv Detail & Related papers (2022-06-03T11:00:43Z)
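Complementing the entry above, here is a minimal sketch of requesting a code explanation, one of the two kinds of learning resources that paper studies. Codex itself has since been deprecated, so a current chat model stands in for it; the model name and prompt wording are assumptions, not the paper's method.

```python
# Minimal sketch: generating a natural-language explanation of a code
# snippet, in the spirit of the exercise/explanation generation paper
# above. Assumes the `openai` package and OPENAI_API_KEY; a current chat
# model is used as an illustrative stand-in for the deprecated Codex.
from openai import OpenAI

client = OpenAI()

SNIPPET = """\
def sum_of_evens(numbers):
    return sum(n for n in numbers if n % 2 == 0)
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative stand-in for Codex
    messages=[
        {"role": "user",
         "content": "Explain, line by line, what this Python code does:\n"
                    + SNIPPET},
    ],
)

print(response.choices[0].message.content)  # the generated explanation
```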