Adoption and Impact of ChatGPT in Computer Science Education: A Case Study on a Database Administration Course
- URL: http://arxiv.org/abs/2407.12145v1
- Date: Sun, 26 May 2024 20:51:28 GMT
- Title: Adoption and Impact of ChatGPT in Computer Science Education: A Case Study on a Database Administration Course
- Authors: Daniel López-Fernández, Ricardo Vergaz
- Abstract summary: This contribution presents an exploratory and correlational study conducted with 37 students who used ChatGPT as a support tool to learn database administration.
The usage and perceived utility of ChatGPT were moderate, but positive correlations between student grade and ChatGPT usage were found.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Contribution: The combination of ChatGPT with traditional learning resources is very effective in computer science education. High-performing students are the ones who use ChatGPT the most, so a new digital trench could be opening between these students and those with a weaker command of fundamentals and poorer prompting skills, who may not take full advantage of ChatGPT's possibilities. Background: The irruption of GenAI tools such as ChatGPT has changed the educational landscape; methodological guidelines and more empirical experiences in computer science education are therefore needed to better understand these tools and learn how to use them to their fullest potential. Research Questions: This article addresses three questions. The first two explore the degree of use and the perceived usefulness of ChatGPT among computer science students learning database administration, whereas the third explores how the use of ChatGPT can impact academic performance. Methodology: This contribution presents an exploratory and correlational study conducted with 37 students who used ChatGPT as a support tool to learn database administration. The students' grades and a comprehensive questionnaire were employed as research instruments. Findings: The results indicate that traditional learning resources, such as teacher explanations and student reports, were widely used and correlated positively with student grade. The usage and perceived utility of ChatGPT were moderate, but positive correlations between student grade and ChatGPT usage were found; indeed, significantly higher use of this tool was identified among the group of outstanding students.
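The study's headline finding is a positive correlation between course grades and self-reported ChatGPT usage. As a rough illustration of that kind of correlational analysis (not the paper's actual code, and the sample values below are hypothetical, not the study's data), a Pearson coefficient between grades and Likert-scale usage answers can be computed as:

```python
# Hypothetical sketch of a grade-vs-usage correlational analysis.
# The data below is invented for illustration only.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative values only -- NOT the study's measurements.
grades = [9.1, 7.5, 8.2, 5.0, 6.8, 9.5]  # final grade (0-10 scale)
usage = [5, 3, 4, 1, 2, 5]               # ChatGPT usage (1-5 Likert item)

r = pearson(grades, usage)
print(f"r = {r:.2f}")  # a positive r would echo the paper's finding
```

In a real analysis one would also report a p-value (e.g. via `scipy.stats.pearsonr`) and, for ordinal Likert data, a rank correlation such as Spearman's rho is often the more defensible choice.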
Related papers
- How Novice Programmers Use and Experience ChatGPT when Solving Programming Exercises in an Introductory Course [0.0]
This research paper contributes to the computing education research community's understanding of Generative AI (GenAI) in the context of introductory programming.
This study is guided by the following research questions:
What do students report on their use pattern of ChatGPT in the context of introductory programming exercises?
How do students perceive ChatGPT in the context of introductory programming exercises?
arXiv Detail & Related papers (2024-07-30T12:55:42Z)
- Investigation of the effectiveness of applying ChatGPT in Dialogic Teaching Using Electroencephalography [6.34494999013996]
Large language models (LLMs) possess the capability to interpret knowledge, answer questions, and consider context.
This research recruited 34 undergraduate students as participants, who were randomly divided into two groups.
The experimental group engaged in dialogic teaching using ChatGPT, while the control group interacted with human teachers.
arXiv Detail & Related papers (2024-03-25T12:23:12Z)
- Integrating ChatGPT in a Computer Science Course: Students' Perceptions and Suggestions [0.0]
This experience report explores students' perceptions and suggestions for integrating ChatGPT in a computer science course.
Findings show the importance of carefully balancing the use of ChatGPT in computer science courses.
arXiv Detail & Related papers (2023-12-22T10:48:34Z)
- ChatGPT for Teaching and Learning: An Experience from Data Science Education [5.406386303264086]
ChatGPT, an implementation and application of large language models, has gained significant popularity since its initial release.
This paper aims to bridge that gap by utilizing ChatGPT in a data science course, gathering perspectives from students, and presenting our experiences and feedback on using ChatGPT for teaching and learning in data science education.
arXiv Detail & Related papers (2023-07-31T13:31:19Z)
- Transformative Effects of ChatGPT on Modern Education: Emerging Era of AI Chatbots [36.760677949631514]
ChatGPT was released to provide coherent and useful replies based on analysis of large volumes of data.
Our preliminary evaluation concludes that ChatGPT performed differently in each subject area including finance, coding and maths.
There are clear drawbacks in its use, such as the possibility of producing inaccurate or false data.
Academic regulations and evaluation practices need to be updated, should ChatGPT be used as a tool in education.
arXiv Detail & Related papers (2023-05-25T17:35:57Z)
- Uncovering the Potential of ChatGPT for Discourse Analysis in Dialogue: An Empirical Study [51.079100495163736]
This paper systematically inspects ChatGPT's performance in two discourse analysis tasks: topic segmentation and discourse parsing.
ChatGPT demonstrates proficiency in identifying topic structures in general-domain conversations yet struggles considerably in specific-domain conversations.
Our deeper investigation indicates that ChatGPT can give more reasonable topic structures than human annotations but only linearly parses the hierarchical rhetorical structures.
arXiv Detail & Related papers (2023-05-15T07:14:41Z)
- ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large Language Models in Multilingual Learning [70.57126720079971]
Large language models (LLMs) have emerged as one of the most important breakthroughs in natural language processing (NLP).
This paper evaluates ChatGPT on 7 different tasks, covering 37 diverse languages with high, medium, low, and extremely low resources.
Our extensive experimental results demonstrate that ChatGPT performs worse than previous models across different NLP tasks and languages.
arXiv Detail & Related papers (2023-04-12T05:08:52Z)
- To ChatGPT, or not to ChatGPT: That is the question! [78.407861566006]
This study provides a comprehensive and contemporary assessment of the most recent techniques in ChatGPT detection.
We have curated a benchmark dataset consisting of prompts from ChatGPT and humans, including diverse questions from medical, open Q&A, and finance domains.
Our evaluation results demonstrate that none of the existing methods can effectively detect ChatGPT-generated content.
arXiv Detail & Related papers (2023-04-04T03:04:28Z)
- ChatGPT is a Knowledgeable but Inexperienced Solver: An Investigation of Commonsense Problem in Large Language Models [49.52083248451775]
Large language models (LLMs) have made significant progress in NLP.
We specifically focus on ChatGPT, a widely used and easily accessible LLM.
We conduct a series of experiments on 11 datasets to evaluate ChatGPT's commonsense abilities.
arXiv Detail & Related papers (2023-03-29T03:05:43Z)
- Towards Making the Most of ChatGPT for Machine Translation [75.576405098545]
ChatGPT shows remarkable capabilities for machine translation (MT).
Several prior studies have shown that it achieves comparable results to commercial systems for high-resource languages.
arXiv Detail & Related papers (2023-03-24T03:35:21Z)
- Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT [103.57103957631067]
ChatGPT has attracted great attention, as it can generate fluent and high-quality responses to human inquiries.
We evaluate ChatGPT's understanding ability on the popular GLUE benchmark and compare it with 4 representative fine-tuned BERT-style models.
We find that: 1) ChatGPT falls short on paraphrase and similarity tasks; 2) ChatGPT outperforms all BERT models on inference tasks by a large margin; 3) ChatGPT achieves performance comparable to BERT on sentiment analysis and question answering tasks.
arXiv Detail & Related papers (2023-02-19T12:29:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.