Misconceptions, Pragmatism, and Value Tensions: Evaluating Students' Understanding and Perception of Generative AI for Education
- URL: http://arxiv.org/abs/2410.22289v1
- Date: Tue, 29 Oct 2024 17:41:06 GMT
- Title: Misconceptions, Pragmatism, and Value Tensions: Evaluating Students' Understanding and Perception of Generative AI for Education
- Authors: Aditya Johri, Ashish Hingle, Johannes Schleiss
- Abstract summary: Students are early adopters of the technology, utilizing it in atypical ways.
Students were asked to describe 1) their understanding of GenAI; 2) their use of GenAI; 3) their opinions on the benefits, downsides, and ethical issues pertaining to its use in education.
- Abstract: In this research paper, we examine undergraduate students' use and perceptions of generative AI (GenAI). Students are early adopters of the technology, utilizing it in atypical ways and forming a range of perceptions and aspirations about it. To understand where and how students are using these tools and how they view them, we present findings from an open-ended survey response study with undergraduate students pursuing information technology degrees. Students were asked to describe 1) their understanding of GenAI; 2) their use of GenAI; 3) their opinions on the benefits, downsides, and ethical issues pertaining to its use in education; and 4) how they envision GenAI could ideally help them with their education. Findings show that students' definitions of GenAI differed substantially and included many misconceptions: some highlighted it as a technique, an application, or a tool, while others described it as a type of AI. There was wide variation in students' use of GenAI, with writing and coding being the two most common uses. They identified GenAI's ability to summarize information and its potential to personalize learning as advantages. Students identified two primary ethical concerns with using GenAI: plagiarism and dependency, by which they meant that students fail to learn independently. They also cautioned that responses from GenAI applications are often untrustworthy and need verification. Overall, they appreciated that they could do things quickly with GenAI but were cautious, as using the technology was not necessarily in their best long-term interest because it interfered with the learning process. In terms of aspirations for GenAI, students expressed both practical advantages and idealistic, improbable visions. They said it could serve as a tutor or coach and allow them to understand the material better. We discuss the implications of the findings for student learning and instruction.
Related papers
- Analysis of Generative AI Policies in Computing Course Syllabi
Since the release of ChatGPT in 2022, Generative AI (GenAI) is increasingly being used in higher education computing classrooms across the U.S.
We collected 98 computing course syllabi from 54 R1 institutions in the U.S. and studied the GenAI policies they adopted and the surrounding discourse.
Our analysis shows that 1) most instructions regarding GenAI use appeared as part of the course's academic integrity policy and 2) most syllabi prohibited or restricted GenAI use, often warning students about the broader implications of using GenAI.
arXiv Detail & Related papers (2024-10-29T17:34:10Z) - Model-based Maintenance and Evolution with GenAI: A Look into the Future
We argue that Generative Artificial Intelligence (GenAI) can be used as a means to address the limitations of model-based maintenance and evolution (MBM&E).
We propose that GenAI can be used in MBM&E for: reducing engineers' learning curve, maximizing efficiency with recommendations, or serving as a reasoning tool to understand domain problems.
arXiv Detail & Related papers (2024-07-09T23:13:26Z) - Understanding Student and Academic Staff Perceptions of AI Use in Assessment and Feedback
The rise of Artificial Intelligence (AI) and Generative Artificial Intelligence (GenAI) in higher education necessitates assessment reform.
This study addresses a critical gap by exploring student and academic staff experiences with AI and GenAI tools.
An online survey collected data from 35 academic staff and 282 students across two universities in Vietnam and one in Singapore.
arXiv Detail & Related papers (2024-06-22T10:25:01Z) - Identifying and Mitigating the Security Risks of Generative AI
This paper reports the findings of a workshop held at Google on the dual-use dilemma posed by GenAI.
GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
We discuss short-term and long-term goals for the community on this topic.
arXiv Detail & Related papers (2023-08-28T18:51:09Z) - Innovating Computer Programming Pedagogy: The AI-Lab Framework for Generative AI Adoption
We introduce "AI-Lab," a framework for guiding students in effectively leveraging GenAI within core programming courses.
By identifying and rectifying GenAI's errors, students enrich their learning process.
For educators, AI-Lab provides mechanisms to explore students' perceptions of GenAI's role in their learning experience.
arXiv Detail & Related papers (2023-08-23T17:20:37Z) - The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and Millennial Generation teachers?
Gen Z students were generally optimistic about the potential benefits of generative AI (GenAI)
Gen X and Gen Y teachers expressed heightened concerns about overreliance, ethical and pedagogical implications.
arXiv Detail & Related papers (2023-05-04T14:42:06Z) - Students' Voices on Generative AI: Perceptions, Benefits, and Challenges in Higher Education
This study explores university students' perceptions of generative AI (GenAI) technologies, such as ChatGPT, in higher education.
Students recognized the potential for personalized learning support, writing and brainstorming assistance, and research and analysis capabilities.
Concerns about accuracy, privacy, ethical issues, and the impact on personal development, career prospects, and societal values were also expressed.
arXiv Detail & Related papers (2023-04-29T15:53:38Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Explainability in Deep Reinforcement Learning
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching, supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks: an anchoring effect on the model's judgment and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.