Plagiarism and AI Assistance Misuse in Web Programming: Unfair Benefits
and Characteristics
- URL: http://arxiv.org/abs/2310.20104v1
- Date: Tue, 31 Oct 2023 00:51:14 GMT
- Title: Plagiarism and AI Assistance Misuse in Web Programming: Unfair Benefits
and Characteristics
- Authors: Oscar Karnalim, Hapnes Toba, Meliana Christianti Johan, Erico Darmawan
Handoyo, Yehezkiel David Setiawan, Josephine Alvina Luwia
- Abstract summary: Plagiarized submissions are similar to the independent ones except in trivial aspects such as color and identifier names.
Students believe AI assistance could be useful given proper acknowledgment of its use, although they are not convinced of the readability and correctness of the solutions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In programming education, plagiarism and misuse of artificial intelligence
(AI) assistance are emerging issues. However, few relevant studies have focused
on web programming. We plan to develop automated tools to help instructors
identify both types of misconduct. To fully understand the issues, we conducted
a controlled experiment to observe the unfair benefits and the characteristics
of each. We compared student performance in completing web programming tasks
independently, with a submission to plagiarize, and with the help of an AI
assistant (ChatGPT). Our study shows that students who engage in such
misconduct obtain comparable test marks with less completion time. Plagiarized
submissions are similar to the independent ones except in trivial aspects such
as color and identifier names. AI-assisted submissions are more complex, making
them less readable. Students believe AI assistance could be useful given proper
acknowledgment of its use, although they are not convinced of the readability
and correctness of the solutions.
Related papers
- A Multi-Year Grey Literature Review on AI-assisted Test Automation [46.97326049485643]
Test Automation (TA) techniques are crucial for quality assurance in software engineering.
TA techniques face limitations such as high test suite maintenance costs and the need for extensive programming skills.
Artificial Intelligence (AI) offers new opportunities to address these issues through automation and improved practices.
arXiv Detail & Related papers (2024-08-12T15:26:36Z)
- I would love this to be like an assistant, not the teacher: a voice of the customer perspective of what distance learning students want from an Artificial Intelligence Digital Assistant [0.0]
This study examined the perceptions of ten online and distance learning students regarding the design of a hypothetical AI Digital Assistant (AIDA).
All participants agreed on the usefulness of such an AI tool while studying and reported benefits from using it for real-time assistance and query resolution, support for academic tasks, personalisation and accessibility, together with emotional and social support.
Students' concerns related to the ethical and social implications of implementing AIDA, data privacy and data use, operational challenges, academic integrity and misuse, and the future of education.
arXiv Detail & Related papers (2024-02-16T08:10:41Z)
- PaperCard for Reporting Machine Assistance in Academic Writing [48.33722012818687]
ChatGPT, a question-answering system released by OpenAI in November 2022, has demonstrated a range of capabilities that could be utilised in producing academic papers.
This raises critical questions surrounding the concept of authorship in academia.
We propose a framework we name "PaperCard", a documentation format that lets human authors transparently declare the use of AI in their writing process.
arXiv Detail & Related papers (2023-10-07T14:28:04Z)
- Learning from Teaching Assistants to Program with Subgoals: Exploring the Potential for AI Teaching Assistants [18.14390906820148]
We investigate the practicality of using generative AI as TAs in programming education by examining novice learners' interaction with TAs in a subgoal learning environment.
Our study shows that learners can solve tasks faster, with comparable scores, when working with AI TAs.
Based on our chat log analysis, we suggest guidelines for better designing and utilizing generative AI as TAs in programming education.
arXiv Detail & Related papers (2023-09-19T08:30:58Z)
- Generation Probabilities Are Not Enough: Uncertainty Highlighting in AI Code Completions [54.55334589363247]
We study whether conveying information about uncertainty enables programmers to more quickly and accurately produce code.
We find that highlighting tokens with the highest predicted likelihood of being edited leads to faster task completion and more targeted edits.
arXiv Detail & Related papers (2023-02-14T18:43:34Z)
- Smart tutor to provide feedback in programming courses [0.0]
We present an AI-based intelligent tutor that answers students' programming questions.
The tool has been tested by university students at URJC over a full course.
arXiv Detail & Related papers (2023-01-24T11:00:06Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- Plagiarism deterrence for introductory programming [11.612194979331179]
A class-wide statistical characterization can be clearly shared with students via an intuitive new p-value.
A pairwise, compression-based similarity detection algorithm captures relationships between assignments more accurately.
An unbiased scoring system aids students and the instructor in understanding true independence of effort.
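The entry does not spell out its algorithm, but pairwise compression-based similarity is commonly realized as the Normalized Compression Distance (NCD): two near-duplicate submissions compress almost as well jointly as individually, so their distance is low. A minimal sketch in Python, assuming zlib as the compressor (the paper's actual algorithm and scoring may differ):

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    # Normalized Compression Distance: near 0 for highly similar
    # inputs, approaching 1 for unrelated inputs.
    ca = len(zlib.compress(a))
    cb = len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

# A submission that differs only in identifier names adds little new
# information when appended to the original, so its NCD stays low.
original = b"def mean(values):\n    return sum(values) / len(values)\n" * 10
renamed = b"def mean(nums):\n    return sum(nums) / len(nums)\n" * 10
unrelated = bytes((i * 7 + 13) % 256 for i in range(2000))

print(ncd(original, renamed))    # low: near-duplicate pair
print(ncd(original, unrelated))  # higher: independent content
```

In a class-wide setting, the pairwise NCD matrix is what a statistical characterization (such as the p-value mentioned above) would be computed over.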
arXiv Detail & Related papers (2022-06-06T18:47:25Z)
- Neural Language Models are Effective Plagiarists [38.85940137464184]
We find that a student using GPT-J can complete introductory level programming assignments without triggering suspicion from MOSS.
GPT-J was not trained on the problems in question and is not provided with any examples to work from.
We conclude that the code written by GPT-J is diverse in structure, lacking any particular tells that future plagiarism detection techniques may use to try to identify algorithmically generated code.
arXiv Detail & Related papers (2022-01-19T04:00:46Z)
- AI Explainability 360: Impact and Design [120.95633114160688]
In 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods.
This paper examines the impact of the toolkit with several case studies, statistics, and community feedback.
The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
arXiv Detail & Related papers (2021-09-24T19:17:09Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.