Understanding the Practices, Perceptions, and (Dis)Trust of Generative AI among Instructors: A Mixed-methods Study in the U.S. Higher Education
- URL: http://arxiv.org/abs/2502.05770v1
- Date: Sun, 09 Feb 2025 04:10:38 GMT
- Title: Understanding the Practices, Perceptions, and (Dis)Trust of Generative AI among Instructors: A Mixed-methods Study in the U.S. Higher Education
- Authors: Wenhan Lyu, Shuang Zhang, Tingting Chung, Yifan Sun, Yixuan Zhang
- Abstract summary: We surveyed 178 instructors from a single U.S. university to examine their current practices, perceptions, trust, and distrust of GenAI in higher education. Our quantitative results show that trust and distrust in GenAI are related yet distinct; high trust does not necessarily imply low distrust, and vice versa. Our qualitative results show nuanced manifestations of trust and distrust among surveyed instructors and various approaches to support calibrated trust in GenAI.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative AI (GenAI) has brought opportunities and challenges for higher education as it integrates into teaching and learning environments. As instructors navigate this new landscape, understanding their engagement with and attitudes toward GenAI is crucial. We surveyed 178 instructors from a single U.S. university to examine their current practices, perceptions, trust, and distrust of GenAI in higher education in March 2024. While most surveyed instructors reported moderate to high familiarity with GenAI-related concepts, their actual use of GenAI tools for direct instructional tasks remained limited. Our quantitative results show that trust and distrust in GenAI are related yet distinct; high trust does not necessarily imply low distrust, and vice versa. We also found significant differences in surveyed instructors' familiarity with GenAI across different trust and distrust groups. Our qualitative results show nuanced manifestations of trust and distrust among surveyed instructors and various approaches to support calibrated trust in GenAI. We discuss practical implications focused on (dis)trust calibration among instructors.
Related papers
- Human Trust in AI Search: A Large-Scale Experiment
Generative artificial intelligence (GenAI) can influence what we buy, how we vote, and our health.
No prior work establishes the causal effect of generative search designs on human trust.
We execute 12,000 search queries across seven countries, generating 80,000 real-time GenAI and traditional search results.
arXiv Detail & Related papers (2025-04-08T21:12:41Z)
- Engineering Educators' Perspectives on the Impact of Generative AI in Higher Education
This study reports findings from a survey of engineering educators on their use of and perspectives toward generative AI. We asked them about their use of and comfort with GenAI, their overall perspectives on GenAI, and the challenges and potential harms of using it for teaching, learning, and research. We also examined whether their approach to using and integrating GenAI in their classrooms influenced their experiences with GenAI and their perceptions of it.
arXiv Detail & Related papers (2025-02-01T21:29:53Z)
- Position: Evaluating Generative AI Systems is a Social Science Measurement Challenge
We argue that the ML community would benefit from learning from and drawing on the social sciences when developing measurement instruments for evaluating GenAI systems. We present a four-level framework, grounded in measurement theory from the social sciences, for measuring concepts related to the capabilities, behaviors, and impacts of GenAI.
arXiv Detail & Related papers (2025-02-01T21:09:51Z)
- Analysis of Generative AI Policies in Computing Course Syllabi
Since the release of ChatGPT in 2022, Generative AI (GenAI) has been increasingly used in higher education computing classrooms across the U.S.
We collected 98 computing course syllabi from 54 R1 institutions in the U.S. and studied the GenAI policies they adopted and the surrounding discourse.
Our analysis shows that 1) most instructions related to GenAI use appeared as part of the course's academic integrity policy, and 2) most syllabi prohibited or restricted GenAI use, often warning students about the broader implications of using GenAI.
arXiv Detail & Related papers (2024-10-29T17:34:10Z)
- Hey GPT, Can You be More Racist? Analysis from Crowdsourced Attempts to Elicit Biased Content from Generative AI
This work presents the findings from a university-level competition, which challenged participants to design prompts for eliciting biased outputs from GenAI tools.
We quantitatively and qualitatively analyze the competition submissions and identify a diverse set of biases in GenAI and strategies employed by participants to induce bias in GenAI.
arXiv Detail & Related papers (2024-10-20T18:44:45Z)
- Model-based Maintenance and Evolution with GenAI: A Look into the Future
We argue that Generative Artificial Intelligence (GenAI) can be used to address the limitations of Model-based Maintenance and Evolution (MBM&E).
We propose that GenAI can be used in MBM&E for reducing engineers' learning curve, maximizing efficiency with recommendations, or serving as a reasoning tool to understand domain problems.
arXiv Detail & Related papers (2024-07-09T23:13:26Z)
- Understanding Student and Academic Staff Perceptions of AI Use in Assessment and Feedback
The rise of Artificial Intelligence (AI) and Generative Artificial Intelligence (GenAI) in higher education necessitates assessment reform.
This study addresses a critical gap by exploring student and academic staff experiences with AI and GenAI tools.
An online survey collected data from 35 academic staff and 282 students across two universities in Vietnam and one in Singapore.
arXiv Detail & Related papers (2024-06-22T10:25:01Z)
- Generative AI as a Learning Buddy and Teaching Assistant: Pre-service Teachers' Uses and Attitudes
We surveyed 167 Ghanaian pre-service teachers (PSTs) about their specific uses of generative artificial intelligence (GenAI) applications.
We identified three key factors shaping PSTs' attitudes towards GenAI: teaching, learning, and ethical and advocacy factors.
PSTs expressed concerns about the accuracy and trustworthiness of the information provided by GenAI applications.
arXiv Detail & Related papers (2024-06-03T20:38:29Z)
- Faithful Knowledge Distillation
We focus on two crucial questions with regard to a teacher-student pair: (i) do the teacher and student disagree at points close to correctly classified dataset examples, and (ii) is the distilled student as confident as the teacher around dataset examples?
These are critical questions when considering the deployment of a smaller student network trained from a robust teacher within a safety-critical setting.
arXiv Detail & Related papers (2023-06-07T13:41:55Z)
- The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and Millennial Generation teachers?
Gen Z students were generally optimistic about the potential benefits of generative AI (GenAI).
Gen X and Gen Y teachers expressed heightened concerns about overreliance and about its ethical and pedagogical implications.
arXiv Detail & Related papers (2023-05-04T14:42:06Z)
- Creation and Evaluation of a Pre-tertiary Artificial Intelligence (AI) Curriculum
The Chinese University of Hong Kong (CUHK)-Jockey Club AI for the Future Project (AI4Future) co-created an AI curriculum for pre-tertiary education.
A team of 14 professors with expertise in engineering and education collaborated with 17 principals and teachers from 6 secondary schools to co-create the curriculum.
The co-creation process generated a variety of resources that enhanced the teachers' knowledge of AI and fostered their autonomy in bringing the subject matter into their classrooms.
arXiv Detail & Related papers (2021-01-19T11:26:19Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
We discuss a model of trust inspired by, but not identical to, sociology's notion of interpersonal trust (i.e., trust between people).
We incorporate a formalization of "contractual trust," such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.