Automatically Generating CS Learning Materials with Large Language
Models
- URL: http://arxiv.org/abs/2212.05113v1
- Date: Fri, 9 Dec 2022 20:37:44 GMT
- Authors: Stephen MacNeil, Andrew Tran, Juho Leinonen, Paul Denny, Joanne Kim,
Arto Hellas, Seth Bernstein, Sami Sarsa
- Abstract summary: Large Language Models (LLMs) enable software developers to generate code based on a natural language prompt.
LLMs may enable students to interact with code in new ways while helping instructors scale their learning materials.
LLMs also introduce new implications for academic integrity, curriculum design, and software engineering careers.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent breakthroughs in Large Language Models (LLMs), such as GPT-3 and
Codex, now enable software developers to generate code based on a natural
language prompt. Within computer science education, researchers are exploring
the potential for LLMs to generate code explanations and programming
assignments using carefully crafted prompts. These advances may enable students
to interact with code in new ways while helping instructors scale their
learning materials. However, LLMs also introduce new implications for academic
integrity, curriculum design, and software engineering careers. This workshop
will demonstrate the capabilities of LLMs to help attendees evaluate whether
and how LLMs might be integrated into their pedagogy and research. We will also
engage attendees in brainstorming to consider how LLMs will impact our field.
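As an illustration of the "carefully crafted prompts" mentioned in the abstract, the sketch below shows how an instructor might programmatically build prompts for code explanations and programming assignments. The template wording and function names are hypothetical, for illustration only; they are not taken from the paper or from any specific LLM API.

```python
# Hypothetical prompt templates for generating CS learning materials with an
# LLM. The template text is an assumption for illustration, not the authors'.

EXPLANATION_TEMPLATE = (
    "Explain the following {language} code to a first-year student, "
    "line by line, without introducing new concepts:\n\n{code}"
)

ASSIGNMENT_TEMPLATE = (
    "Write a beginner-level {language} programming exercise that practices "
    "{topic}. Include a problem statement and one sample test case."
)


def build_explanation_prompt(code: str, language: str = "Python") -> str:
    """Fill the explanation template with a code snippet."""
    return EXPLANATION_TEMPLATE.format(language=language, code=code)


def build_assignment_prompt(topic: str, language: str = "Python") -> str:
    """Fill the assignment template with a course topic."""
    return ASSIGNMENT_TEMPLATE.format(language=language, topic=topic)
```

The resulting string would then be sent to a model such as GPT-3 or Codex through its API; only the prompt-crafting step is shown here.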
Related papers
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing [56.75702900542643]
We introduce AlphaLLM for the self-improvement of Large Language Models.
It integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop.
Our experimental results show that AlphaLLM significantly enhances the performance of LLMs without additional annotations.
arXiv Detail & Related papers (2024-04-18T15:21:34Z)
- Let's Ask AI About Their Programs: Exploring ChatGPT's Answers To Program Comprehension Questions [2.377308748205625]
We explore the capability of the state-of-the-art LLMs in answering QLCs that are generated from code that the LLMs have created.
Our results show that although the state-of-the-art LLMs can create programs and trace program execution when prompted, they easily succumb to similar errors that have previously been recorded for novice programmers.
arXiv Detail & Related papers (2024-04-17T20:37:00Z)
- CS1-LLM: Integrating LLMs into CS1 Instruction [0.6282171844772422]
This experience report describes a CS1 course at a large research-intensive university that fully embraces the use of Large Language Models.
To incorporate the LLMs, the course was intentionally altered to reduce emphasis on syntax and writing code from scratch.
Students were given three large, open-ended projects in three separate domains that allowed them to showcase their creativity.
arXiv Detail & Related papers (2024-04-17T14:44:28Z)
- CSEPrompts: A Benchmark of Introductory Computer Science Prompts [11.665831944836118]
Recent advances in AI, machine learning, and NLP have led to the development of a new generation of Large Language Models (LLMs).
Commercial applications have made this technology available to the general public, thus making it possible to use LLMs to produce high-quality texts for academic and professional purposes.
Schools and universities are aware of the increasing use of AI-generated content by students and they have been researching the impact of this new technology and its potential misuse.
arXiv Detail & Related papers (2024-04-03T07:55:57Z)
- An Exploratory Study on Upper-Level Computing Students' Use of Large Language Models as Tools in a Semester-Long Project [2.7325338323814328]
The purpose of this study is to explore computing students' experiences and approaches to using LLMs during a semester-long software engineering project.
We collected data from a senior-level software engineering course at Purdue University.
We analyzed the data to identify themes related to students' usage patterns and learning outcomes.
arXiv Detail & Related papers (2024-03-27T15:21:58Z)
- Rethinking Interpretability in the Era of Large Language Models [76.1947554386879]
Large language models (LLMs) have demonstrated remarkable capabilities across a wide array of tasks.
The capability to explain in natural language allows LLMs to expand the scale and complexity of patterns that can be given to a human.
These new capabilities raise new challenges, such as hallucinated explanations and immense computational costs.
arXiv Detail & Related papers (2024-01-30T17:38:54Z)
- An Empirical Study on Usage and Perceptions of LLMs in a Software Engineering Project [1.433758865948252]
Large Language Models (LLMs) represent a leap in artificial intelligence, excelling in tasks using human language.
In this paper, we analyze the AI-generated code, prompts used for code generation, and the human intervention levels to integrate the code into the code base.
Our findings suggest that LLMs can play a crucial role in the early stages of software development.
arXiv Detail & Related papers (2024-01-29T14:32:32Z)
- If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- Democratizing Reasoning Ability: Tailored Learning from Large Language Model [97.4921006089966]
We propose a tailored learning approach to distill such reasoning ability to smaller LMs.
We exploit the potential of LLM as a reasoning teacher by building an interactive multi-round learning paradigm.
To exploit the reasoning potential of the smaller LM, we propose self-reflection learning to motivate the student to learn from self-made mistakes.
arXiv Detail & Related papers (2023-10-20T07:50:10Z)
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback [61.83548032416181]
We present Okapi, the first system with instruction-tuned LLMs based on RLHF for multiple languages.
Okapi introduces instruction and response-ranked data in 26 diverse languages to facilitate the experiments and development of future multilingual LLM research.
arXiv Detail & Related papers (2023-07-29T18:01:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.