Probeable Problems for Beginner-level Programming-with-AI Contests
- URL: http://arxiv.org/abs/2405.15123v1
- Date: Fri, 24 May 2024 00:39:32 GMT
- Title: Probeable Problems for Beginner-level Programming-with-AI Contests
- Authors: Mrigank Pawagi, Viraj Kumar
- Abstract summary: We conduct a 2-hour programming contest for undergraduate Computer Science students from multiple institutions.
Students were permitted to work individually or in groups, and were free to use AI tools.
We analyze the extent to which the code submitted by these groups identifies missing details and identify ways in which Probeable Problems can support learning in formal and informal CS educational contexts.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To broaden participation, competitive programming contests may include beginner-level problems that do not require knowledge of advanced Computer Science concepts (e.g., algorithms and data structures). However, since most participants have easy access to AI code-generation tools, these problems often become trivial to solve. For beginner-friendly programming contests that do not prohibit the use of AI tools, we propose Probeable Problems: code writing tasks that provide (1) a problem specification that deliberately omits certain details, and (2) a mechanism to probe for these details by asking clarifying questions and receiving immediate feedback. To evaluate our proposal, we conducted a 2-hour programming contest for undergraduate Computer Science students from multiple institutions, where each student was an active member of their institution's computing club. The contest comprised six Probeable Problems for which a popular code-generation tool (GitHub Copilot) was unable to generate accurate solutions due to the absence of details. Students were permitted to work individually or in groups, and were free to use AI tools. We obtained consent from 26 groups (67 students) to use their submissions for research. We analyze the extent to which the code submitted by these groups identifies missing details, and we identify ways in which Probeable Problems can support learning in formal and informal CS educational contexts.
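The probing mechanism the abstract describes can be illustrated with a toy sketch (hypothetical interface, not the contest's actual system): the problem statement deliberately omits details, and groups recover them by submitting clarifying questions to an oracle that answers immediately.

```python
# Hypothetical sketch of a Probeable Problem's probe mechanism.
# The topics and answers below are invented for illustration.

# Details deliberately omitted from the published problem statement.
HIDDEN_SPEC = {
    "empty input": "return 0",
    "negative numbers": "ignore them",
    "duplicates": "count each occurrence",
}

def probe(question: str) -> str:
    """Answer a clarifying question if it touches an omitted detail."""
    for topic, answer in HIDDEN_SPEC.items():
        if topic in question.lower():
            return answer
    return "The specification already covers this."

# A group probes before (or while) coding:
print(probe("What should we do with negative numbers?"))  # ignore them
```

A group that never probes will, like Copilot in the study, produce code that silently guesses these details.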
Related papers
- Estimating Difficulty Levels of Programming Problems with Pre-trained Model [18.92661958433282]
The difficulty level of each programming problem serves as an essential reference for guiding students' adaptive learning.
We formulate the problem of automatic difficulty level estimation of each programming problem, given its textual description and a solution example of code.
For tackling this problem, we propose to couple two pre-trained models, one for text modality and the other for code modality, into a unified model.
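The coupling described above can be sketched as a dual-encoder classifier (an assumed architecture for illustration, not the paper's code): each modality is embedded by its own pre-trained model, the embeddings are concatenated, and a head predicts a difficulty level. Hash-seeded random vectors stand in for the real encoders here.

```python
import hashlib
import random

def _embed(text: str, dim: int = 8) -> list:
    """Deterministic stand-in for a pre-trained encoder's embedding."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def encode_text(description: str) -> list:
    """Stand-in for the text-modality pre-trained model."""
    return _embed("text:" + description)

def encode_code(solution: str) -> list:
    """Stand-in for the code-modality pre-trained model."""
    return _embed("code:" + solution)

# Untrained linear head over the concatenated 16-dim embedding; 3 levels.
_rng = random.Random(0)
W = [[_rng.gauss(0.0, 1.0) for _ in range(16)] for _ in range(3)]

def predict_difficulty(description: str, solution: str) -> int:
    """Fuse both modalities and pick the highest-scoring level."""
    fused = encode_text(description) + encode_code(solution)
    scores = [sum(w * x for w, x in zip(row, fused)) for row in W]
    return scores.index(max(scores))  # 0 = easy, 1 = medium, 2 = hard
```

In the real system both encoders and the head would be fine-tuned jointly on labeled problems.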
arXiv Detail & Related papers (2024-06-13T05:38:20Z)
- A Knowledge-Component-Based Methodology for Evaluating AI Assistants [9.412070852474313]
We evaluate an automatic hint generator for CS1 programming assignments powered by GPT-4.
This system provides natural language guidance about how students can improve their incorrect solutions to short programming exercises.
arXiv Detail & Related papers (2024-06-09T00:58:39Z)
- The Widening Gap: The Benefits and Harms of Generative AI for Novice Programmers [1.995977018536036]
Novice programmers often struggle through programming problem solving due to a lack of metacognitive awareness and strategies.
Many novices are now programming with generative AI (GenAI).
Our findings show an unfortunate divide in the use of GenAI tools between students who accelerated and students who struggled.
arXiv Detail & Related papers (2024-05-28T01:48:28Z)
- Dcc --help: Generating Context-Aware Compiler Error Explanations with Large Language Models [53.04357141450459]
dcc --help was deployed to our CS1 and CS2 courses, with 2,565 students using the tool over 64,000 times in ten weeks.
We found that the LLM-generated explanations were conceptually accurate in 90% of compile-time and 75% of run-time cases, but often disregarded the instruction not to provide solutions in code.
arXiv Detail & Related papers (2023-08-23T02:36:19Z)
- Exploring the Use of ChatGPT as a Tool for Learning and Assessment in Undergraduate Computer Science Curriculum: Opportunities and Challenges [0.3553493344868413]
This paper addresses the prospects and obstacles associated with utilizing ChatGPT as a tool for learning and assessment in undergraduate Computer Science curriculum.
Group B students were given access to ChatGPT and were encouraged to use it to help solve the programming challenges.
Results show that students using ChatGPT had an advantage in terms of earned scores; however, there were inconsistencies and inaccuracies in the submitted code.
arXiv Detail & Related papers (2023-04-16T21:04:52Z)
- Generation Probabilities Are Not Enough: Uncertainty Highlighting in AI Code Completions [54.55334589363247]
We study whether conveying information about uncertainty enables programmers to more quickly and accurately produce code.
We find that highlighting tokens with the highest predicted likelihood of being edited leads to faster task completion and more targeted edits.
arXiv Detail & Related papers (2023-02-14T18:43:34Z)
- Competition-Level Code Generation with AlphaCode [74.87216298566942]
We introduce AlphaCode, a system for code generation that can create novel solutions to problems that require deeper reasoning.
In simulated evaluations on recent programming competitions on the Codeforces platform, AlphaCode achieved on average a ranking of top 54.3%.
arXiv Detail & Related papers (2022-02-08T23:16:31Z)
- Steps Before Syntax: Helping Novice Programmers Solve Problems using the PCDIT Framework [2.768397481213625]
Novice programmers often struggle with problem solving due to the high cognitive loads they face.
Many introductory programming courses do not explicitly teach problem solving, assuming that these skills are acquired along the way.
We present 'PCDIT', a non-linear problem solving framework that provides scaffolding to guide novice programmers through the process of transforming a problem specification into an implemented and tested solution for an imperative programming language.
arXiv Detail & Related papers (2021-09-18T10:31:15Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Measuring Coding Challenge Competence With APPS [54.22600767666257]
We introduce APPS, a benchmark for code generation.
Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges.
Recent models such as GPT-Neo can pass approximately 15% of the test cases of introductory problems.
arXiv Detail & Related papers (2021-05-20T17:58:42Z)
- BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094]
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
arXiv Detail & Related papers (2020-07-28T17:46:18Z)
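The bottom-up search at the heart of BUSTLE can be sketched with a toy enumerator (simplified for illustration: the real system trains a model to prioritize intermediate values, whereas this sketch enumerates compositions breadth-first over a value table keyed by observed outputs).

```python
# Toy bottom-up synthesis over input-output examples, BUSTLE-style but
# without the learned prioritizer: grow a table mapping observed output
# values to the smallest program that produces them.
def bottom_up_search(inputs, target_outputs, max_size=3):
    """Enumerate compositions of unary ops, smallest programs first."""
    ops = {
        "add1": lambda v: [x + 1 for x in v],
        "dbl":  lambda v: [x * 2 for x in v],
        "neg":  lambda v: [-x for x in v],
    }
    # Value table: tuple of observed outputs -> program expression.
    values = {tuple(inputs): "x"}
    for _ in range(max_size):
        new = {}
        for vals, expr in values.items():
            for name, fn in ops.items():
                out = tuple(fn(list(vals)))
                if out == tuple(target_outputs):
                    return f"{name}({expr})"
                new.setdefault(out, f"{name}({expr})")
        values.update(new)
    return None

# Find a program mapping [1, 2] -> [4, 6]:
print(bottom_up_search([1, 2], [4, 6]))  # dbl(add1(x))
```

In BUSTLE proper, the `new.setdefault` step is replaced by a learned model that scores which intermediate values to keep and expand first, conditioned on the input-output examples.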
This list is automatically generated from the titles and abstracts of the papers in this site.