"Give me the code" -- Log Analysis of First-Year CS Students' Interactions With GPT
- URL: http://arxiv.org/abs/2411.17855v2
- Date: Sun, 01 Dec 2024 19:02:28 GMT
- Title: "Give me the code" -- Log Analysis of First-Year CS Students' Interactions With GPT
- Authors: Pedro Alves, Bruno Pereira Cipriano
- Abstract summary: This paper analyzes the prompts used by 69 freshmen undergraduate students to solve a certain programming problem within a project assignment.
Despite using unsophisticated prompting techniques, our findings suggest that the majority of students successfully leveraged GPT.
Half of the students demonstrated the ability to exercise judgment in selecting from multiple GPT-generated solutions.
- Score: 0.0
- Abstract: The impact of Large Language Models (LLMs) like GPT-3, GPT-4, and Bard in computer science (CS) education is expected to be profound. Students now have the power to generate code solutions for a wide array of programming assignments. For first-year students, this may be particularly problematic since the foundational skills are still in development and an over-reliance on generative AI tools can hinder their ability to grasp essential programming concepts. This paper analyzes the prompts used by 69 freshmen undergraduate students to solve a certain programming problem within a project assignment, without giving them prior prompt training. We also present the rules of the exercise that motivated the prompts, designed to foster critical thinking skills during the interaction. Despite using unsophisticated prompting techniques, our findings suggest that the majority of students successfully leveraged GPT, incorporating the suggested solutions into their projects. Additionally, half of the students demonstrated the ability to exercise judgment in selecting from multiple GPT-generated solutions, showcasing the development of their critical thinking skills in evaluating AI-generated code.
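The paper reports a qualitative analysis of the students' prompts rather than any particular tooling. Purely as an illustration of the kind of interaction-log analysis the abstract describes, the sketch below tallies prompt categories and per-student prompt counts from a hypothetical log; the sample records, category names, and keyword heuristics are assumptions, not the authors' method.

```python
# Illustrative sketch only: the paper analyzes 69 students' prompts
# qualitatively; the log format, categories, and keyword heuristics below
# are assumptions, not the authors' procedure.
from collections import Counter, defaultdict

# Hypothetical excerpt of an interaction log: (student_id, prompt_text).
log = [
    ("s01", "Give me the code for a function that counts words in a file"),
    ("s01", "Can you explain why this version uses a dictionary?"),
    ("s02", "Write a Java method that validates an ISBN"),
    ("s03", "My loop never terminates, what is wrong with this code?"),
]

def categorize(prompt: str) -> str:
    """Very rough keyword heuristic for the kind of request a prompt makes."""
    p = prompt.lower()
    if any(k in p for k in ("give me the code", "write a", "implement")):
        return "solution request"
    if any(k in p for k in ("explain", "why", "what does")):
        return "explanation request"
    if any(k in p for k in ("wrong", "error", "bug", "not working")):
        return "debugging request"
    return "other"

by_category = Counter(categorize(prompt) for _, prompt in log)
prompts_per_student = defaultdict(int)
for student, _ in log:
    prompts_per_student[student] += 1

print("Prompt categories:", dict(by_category))
print("Students with more than one prompt:",
      sum(1 for n in prompts_per_student.values() if n > 1))
```

A real analysis would work over the full exported interaction logs and, as in the paper, involve human judgment about what each prompt actually asks for.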
Related papers
- Impeding LLM-assisted Cheating in Introductory Programming Assignments via Adversarial Perturbation [42.49889252988544]
Large language model (LLM)-based programming assistants can help improve the productivity of professional software developers, but can also facilitate cheating in introductory computer programming courses.
This paper investigates the baseline performance of 5 widely used LLMs on a collection of introductory programming problems, examines adversarial perturbations to degrade their performance, and describes the results of a user study aimed at understanding the efficacy of such perturbations in hindering actual code generation for introductory programming assignments.
arXiv Detail & Related papers (2024-10-12T01:01:00Z)
- Could ChatGPT get an Engineering Degree? Evaluating Higher Education Vulnerability to AI Assistants [176.39275404745098]
We evaluate whether two AI assistants, GPT-3.5 and GPT-4, can adequately answer assessment questions.
GPT-4 answers an average of 65.8% of questions correctly, and for 85.1% of questions at least one prompting strategy produces the correct answer.
Our results call for revising program-level assessment design in higher education in light of advances in generative AI.
arXiv Detail & Related papers (2024-08-07T12:11:49Z)
- Evaluating Contextually Personalized Programming Exercises Created with Generative AI [4.046163999707179]
This article reports on a user study conducted in an elective programming course that included contextually personalized programming exercises created with GPT-4.
The results demonstrate that the quality of exercises generated with GPT-4 was generally high.
This suggests that AI-generated programming problems can be a worthwhile addition to introductory programming courses.
arXiv Detail & Related papers (2024-06-11T12:59:52Z)
- "ChatGPT Is Here to Help, Not to Replace Anybody" -- An Evaluation of Students' Opinions On Integrating ChatGPT In CS Courses [0.0]
Large Language Models (LLMs) like GPT and Bard are capable of producing code based on textual descriptions.
LLMs will have profound implications for computing education, raising concerns about cheating, excessive dependence, and a decline in computational thinking skills.
arXiv Detail & Related papers (2024-04-26T14:29:16Z)
- Boosting of Thoughts: Trial-and-Error Problem Solving with Large Language Models [43.09706839884221]
Boosting of Thoughts (BoT) is an automated prompting framework for problem solving with Large Language Models.
We show that BoT consistently achieves higher or comparable problem-solving rates than other advanced prompting approaches.
arXiv Detail & Related papers (2024-02-17T00:13:36Z)
- Students' Perspective on AI Code Completion: Benefits and Challenges [2.936007114555107]
We investigated the benefits, challenges, and expectations of AI code completion from students' perspectives.
Our findings show that AI code completion enhanced students' productivity and efficiency by providing correct syntax suggestions.
In the future, AI code completion should be explainable and should suggest best coding practices to enhance the education process.
arXiv Detail & Related papers (2023-10-31T22:41:16Z)
- Exploring the Potential of Large Language Models to Generate Formative Programming Feedback [0.5371337604556311]
We explore the potential of large language models (LLMs) for computing educators and learners.
To explore this potential, we used students' programming sequences from a dataset gathered within a CS1 course as input for ChatGPT.
Results show that ChatGPT performs reasonably well for some of the introductory programming tasks and student errors.
However, educators should provide guidance on how to use the provided feedback, as it can contain misleading information for novices.
arXiv Detail & Related papers (2023-08-31T15:22:11Z)
- AGI: Artificial General Intelligence for Education [41.45039606933712]
This position paper reviews artificial general intelligence (AGI)'s key concepts, capabilities, scope, and potential within future education.
It highlights that AGI can significantly improve intelligent tutoring systems, educational assessment, and evaluation procedures.
The paper emphasizes that AGI's capabilities extend to understanding human emotions and social interactions.
arXiv Detail & Related papers (2023-04-24T22:31:59Z)
- Exploring the Use of ChatGPT as a Tool for Learning and Assessment in Undergraduate Computer Science Curriculum: Opportunities and Challenges [0.3553493344868413]
This paper addresses the prospects and obstacles associated with utilizing ChatGPT as a tool for learning and assessment in undergraduate Computer Science curriculum.
In the study, one group of students (Group B) was given access to ChatGPT and was encouraged to use it to help solve the programming challenges.
Results show that students using ChatGPT had an advantage in terms of earned scores; however, there were inconsistencies and inaccuracies in the submitted code.
arXiv Detail & Related papers (2023-04-16T21:04:52Z)
- JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem Understanding [74.12405417718054]
This paper aims to advance the mathematical intelligence of machines by presenting the first Chinese mathematical pre-trained language model (PLM).
Unlike the texts used in standard NLP tasks, mathematical texts are difficult to understand, since they involve mathematical terminology, symbols, and formulas in the problem statement.
We design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.
arXiv Detail & Related papers (2022-06-13T17:03:52Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-annotated examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
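The ProtoTransformer entry above frames feedback as few-shot classification, with a meta-learner adapting to a new programming question from a handful of instructor-annotated examples. As a minimal sketch of that general prototypical idea (under stated assumptions, not the authors' architecture or data), the snippet below builds one prototype per feedback label from a few labeled examples and assigns the nearest prototype's label to a new submission; the embed function is a toy stand-in for a learned code encoder.

```python
# Prototypical-classification sketch of "feedback as few-shot classification".
# Illustrative only: embed() is a toy stand-in for a learned transformer
# encoder over code, and the support examples are hypothetical.
import numpy as np

def embed(code: str) -> np.ndarray:
    """Toy embedding: normalized character-frequency vector."""
    vec = np.zeros(128)
    for ch in code:
        vec[ord(ch) % 128] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

# A few instructor-annotated support examples per feedback label.
support = {
    "missing base case": [
        "def fact(n): return n * fact(n - 1)",
        "def fib(n): return fib(n - 1) + fib(n - 2)",
    ],
    "looks correct": [
        "def fact(n): return 1 if n == 0 else n * fact(n - 1)",
        "def fib(n): return n if n < 2 else fib(n - 1) + fib(n - 2)",
    ],
}

# Prototype = mean embedding of each label's support examples.
prototypes = {label: np.mean([embed(c) for c in examples], axis=0)
              for label, examples in support.items()}

def give_feedback(student_code: str) -> str:
    """Return the label whose prototype is closest to the submission's embedding."""
    z = embed(student_code)
    return min(prototypes, key=lambda label: np.linalg.norm(z - prototypes[label]))

print(give_feedback("def pow2(n): return 2 * pow2(n - 1)"))
```

In a deployed system like the one described, adaptation to unseen questions comes from meta-training a learned encoder across many previous questions, not from a fixed hand-crafted embedding like this one.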
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.