"It's not like Jarvis, but it's pretty close!" -- Examining ChatGPT's
Usage among Undergraduate Students in Computer Science
- URL: http://arxiv.org/abs/2311.09651v2
- Date: Fri, 5 Jan 2024 15:47:03 GMT
- Title: "It's not like Jarvis, but it's pretty close!" -- Examining ChatGPT's
Usage among Undergraduate Students in Computer Science
- Authors: Ishika Joshi, Ritvik Budhiraja, Harshal D Akolekar, Jagat Sesh Challa,
Dhruv Kumar
- Abstract summary: Large language models (LLMs) such as ChatGPT and Google Bard have garnered significant attention in the academic community.
This study adopts a student-first approach to comprehensively understand how undergraduate computer science students utilize ChatGPT.
- Score: 3.6936132187945923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) such as ChatGPT and Google Bard have garnered
significant attention in the academic community. Previous research has
evaluated these LLMs for various applications such as generating programming
exercises and solutions. However, these evaluations have predominantly been
conducted by instructors and researchers, not considering the actual usage of
LLMs by students. This study adopts a student-first approach to comprehensively
understand how undergraduate computer science students utilize ChatGPT, a
popular LLM, released by OpenAI. We employ a combination of student surveys and
interviews to obtain valuable insights into the benefits, challenges, and
suggested improvements related to ChatGPT. Our findings suggest that a majority
of students (over 57%) have a convincingly positive outlook towards adopting
ChatGPT as an aid in coursework-related tasks. However, our research also
highlights various challenges that must be resolved for long-term acceptance of
ChatGPT amongst students. The findings from this investigation have broader
implications and may be applicable to other LLMs and their role in computing
education.
Related papers
- Adoption and Impact of ChatGPT in Computer Science Education: A Case Study on a Database Administration Course [0.46040036610482665]
This contribution presents an exploratory and correlational study conducted with 37 students who used ChatGPT as a support tool to learn database administration.
The usage and perceived utility of ChatGPT were moderate, but positive correlations between student grade and ChatGPT usage were found.
arXiv Detail & Related papers (2024-05-26T20:51:28Z)
- Enhancing Programming Education with ChatGPT: A Case Study on Student Perceptions and Interactions in a Python Course [7.182952031323369]
This paper explores ChatGPT's impact on learning in a Python programming course tailored for first-year students over eight weeks.
By analyzing responses from surveys, open-ended questions, and student-ChatGPT dialog data, we aim to provide a comprehensive view of ChatGPT's utility.
Our study uncovers a generally positive reception toward ChatGPT and offers insights into its role in enhancing the programming education experience.
arXiv Detail & Related papers (2024-03-20T15:47:28Z)
- Integrating ChatGPT in a Computer Science Course: Students Perceptions and Suggestions [0.0]
This experience report explores students' perceptions and suggestions for integrating ChatGPT in a computer science course.
Findings show the importance of carefully balancing using ChatGPT in computer science courses.
arXiv Detail & Related papers (2023-12-22T10:48:34Z)
- ChatGPT's One-year Anniversary: Are Open-Source Large Language Models Catching up? [71.12709925152784]
ChatGPT has brought a seismic shift in the entire landscape of AI.
It showed that a model could answer human questions and follow instructions on a broad panel of tasks.
While closed-source LLMs generally outperform their open-source counterparts, the progress on the latter has been rapid.
This has crucial implications not only on research but also on business.
arXiv Detail & Related papers (2023-11-28T17:44:51Z)
- The Shifted and The Overlooked: A Task-oriented Investigation of User-GPT Interactions [114.67699010359637]
We analyze a large-scale collection of real user queries to GPT.
We find that tasks such as "design" and "planning" are prevalent in user interactions but are largely neglected by, or differ from, traditional NLP benchmarks.
arXiv Detail & Related papers (2023-10-19T02:12:17Z)
- "With Great Power Comes Great Responsibility!": Student and Instructor Perspectives on the influence of LLMs on Undergraduate Engineering Education [2.766654468164438]
The rise in popularity of Large Language Models (LLMs) has prompted discussions in academic circles.
This paper conducts surveys and interviews within undergraduate engineering universities in India.
Using 1306 survey responses among students, 112 student interviews, and 27 instructor interviews, this paper offers insights into the current usage patterns.
arXiv Detail & Related papers (2023-09-19T15:29:12Z)
- Unreflected Acceptance -- Investigating the Negative Consequences of ChatGPT-Assisted Problem Solving in Physics Education [4.014729339820806]
The impact of large language models (LLMs) on sensitive areas of everyday life, such as education, remains unclear.
Our work focuses on higher physics education and examines problem solving strategies.
arXiv Detail & Related papers (2023-08-21T16:14:34Z)
- ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large Language Models in Multilingual Learning [70.57126720079971]
Large language models (LLMs) have emerged as the most important breakthroughs in natural language processing (NLP).
This paper evaluates ChatGPT on 7 different tasks, covering 37 diverse languages with high, medium, low, and extremely low resources.
Compared to the performance of previous models, our extensive experimental results demonstrate that ChatGPT performs worse across different NLP tasks and languages.
arXiv Detail & Related papers (2023-04-12T05:08:52Z)
- A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity [79.12003701981092]
We carry out an extensive technical evaluation of ChatGPT using 23 data sets covering 8 different common NLP application tasks.
We evaluate the multitask, multilingual and multi-modal aspects of ChatGPT based on these data sets and a newly designed multimodal dataset.
ChatGPT achieves 63.41% average accuracy across 10 different reasoning categories spanning logical, non-textual, and commonsense reasoning.
arXiv Detail & Related papers (2023-02-08T12:35:34Z)
- Is ChatGPT a General-Purpose Natural Language Processing Task Solver? [113.22611481694825]
Large language models (LLMs) have demonstrated the ability to perform a variety of natural language processing (NLP) tasks zero-shot.
Recently, the debut of ChatGPT has drawn a great deal of attention from the natural language processing (NLP) community.
It is not yet known whether ChatGPT can serve as a generalist model that can perform many NLP tasks zero-shot.
arXiv Detail & Related papers (2023-02-08T09:44:51Z)
- A Categorical Archive of ChatGPT Failures [47.64219291655723]
ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation.
It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries.
However, a comprehensive analysis of ChatGPT's failures is lacking, which is the focus of this study.
arXiv Detail & Related papers (2023-02-06T04:21:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.