An Empirical Study on Usage and Perceptions of LLMs in a Software
Engineering Project
- URL: http://arxiv.org/abs/2401.16186v1
- Date: Mon, 29 Jan 2024 14:32:32 GMT
- Title: An Empirical Study on Usage and Perceptions of LLMs in a Software
Engineering Project
- Authors: Sanka Rasnayaka, Guanlin Wang, Ridwan Shariffdeen, Ganesh Neelakanta
Iyer
- Abstract summary: Large Language Models (LLMs) represent a leap in artificial intelligence, excelling in tasks using human language(s).
In this paper, we analyze the AI-generated code, prompts used for code generation, and the human intervention levels to integrate the code into the code base.
Our findings suggest that LLMs can play a crucial role in the early stages of software development.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) represent a leap in artificial intelligence,
excelling in tasks using human language(s). Although the main focus of
general-purpose LLMs is not code generation, they have shown promising results
in the domain. However, the usefulness of LLMs in an academic software
engineering project has not been fully explored yet. In this study, we explore
the usefulness of LLMs for 214 students working in teams consisting of up to
six members. Notably, in the academic course through which this study is
conducted, students were encouraged to integrate LLMs into their development
tool-chain, in contrast to most other academic courses that explicitly prohibit
the use of LLMs.
In this paper, we analyze the AI-generated code, prompts used for code
generation, and the human intervention levels to integrate the code into the
code base. We also conduct a perception study to gain insights into the
perceived usefulness, influencing factors, and future outlook of LLM from a
computer science student's perspective. Our findings suggest that LLMs can play
a crucial role in the early stages of software development, especially in
generating foundational code structures, and helping with syntax and error
debugging. These insights provide us with a framework on how to effectively
utilize LLMs as a tool to enhance the productivity of software engineering
students, and highlight the necessity of shifting the educational focus toward
preparing students for successful human-AI collaboration.
Related papers
- From Selection to Generation: A Survey of LLM-based Active Learning [153.8110509961261]
Large Language Models (LLMs) have been employed for generating entirely new data instances and providing more cost-effective annotations.
This survey aims to serve as an up-to-date resource for researchers and practitioners seeking to gain an intuitive understanding of LLM-based AL techniques.
arXiv Detail & Related papers (2025-02-17T12:58:17Z)
- Position: LLMs Can be Good Tutors in Foreign Language Education [87.88557755407815]
We argue that large language models (LLMs) have the potential to serve as effective tutors in foreign language education (FLE).
Specifically, LLMs can play three critical roles: (1) as data enhancers, improving the creation of learning materials or serving as student simulations; (2) as task predictors, supporting learner assessment or optimizing learning pathways; and (3) as agents, enabling personalized and inclusive education.
arXiv Detail & Related papers (2025-02-08T06:48:49Z)
- Analysis of Student-LLM Interaction in a Software Engineering Project [1.2233362977312945]
We analyze 126 undergraduate students' interaction with an AI assistant during a 13-week semester to understand the benefits of AI for software engineering learning.
Our findings suggest that students prefer ChatGPT over CoPilot, and that conversation-based interaction helps improve the quality of the generated code compared to auto-generated code.
arXiv Detail & Related papers (2025-02-03T11:44:00Z)
- From LLMs to LLM-based Agents for Software Engineering: A Survey of Current, Challenges and Future [15.568939568441317]
We investigate the current practice and solutions for large language models (LLMs) and LLM-based agents for software engineering.
In particular, we summarise six key topics: requirement engineering, code generation, autonomous decision-making, software design, test generation, and software maintenance.
We discuss the models and benchmarks used, providing a comprehensive analysis of their applications and effectiveness in software engineering.
arXiv Detail & Related papers (2024-08-05T14:01:15Z)
- Insights from Social Shaping Theory: The Appropriation of Large Language Models in an Undergraduate Programming Course [0.9718746651638346]
Large language models (LLMs) can generate, debug, and explain code.
Our study explores how students' social perceptions influence their own LLM usage.
arXiv Detail & Related papers (2024-06-10T16:40:14Z)
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing [56.75702900542643]
We introduce AlphaLLM for the self-improvement of Large Language Models.
It integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop.
Our experimental results show that AlphaLLM significantly enhances the performance of LLMs without additional annotations.
arXiv Detail & Related papers (2024-04-18T15:21:34Z)
- An Exploratory Study on Upper-Level Computing Students' Use of Large Language Models as Tools in a Semester-Long Project [2.7325338323814328]
The purpose of this study is to explore computing students' experiences and approaches to using LLMs during a semester-long software engineering project.
We collected data from a senior-level software engineering course at Purdue University.
We analyzed the data to identify themes related to students' usage patterns and learning outcomes.
arXiv Detail & Related papers (2024-03-27T15:21:58Z)
- Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs).
This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
arXiv Detail & Related papers (2024-02-13T20:51:58Z)
- If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- Automatically Generating CS Learning Materials with Large Language Models [4.526618922750769]
Large Language Models (LLMs) enable software developers to generate code based on a natural language prompt.
LLMs may enable students to interact with code in new ways while helping instructors scale their learning materials.
LLMs also introduce new implications for academic integrity, curriculum design, and software engineering careers.
arXiv Detail & Related papers (2022-12-09T20:37:44Z)