To accept or not to accept? An IRT-TOE Framework to Understand Educators' Resistance to Generative AI in Higher Education
- URL: http://arxiv.org/abs/2407.20130v1
- Date: Mon, 29 Jul 2024 15:59:19 GMT
- Title: To accept or not to accept? An IRT-TOE Framework to Understand Educators' Resistance to Generative AI in Higher Education
- Authors: Jan-Erik Kalmus, Anastasija Nikiforova
- Abstract summary: This study aims to develop a theoretical model to empirically predict the barriers preventing educators from adopting Generative Artificial Intelligence in their classrooms.
Our approach is grounded in the Innovation Resistance Theory (IRT) framework and augmented with constructs from the Technology-Organization-Environment (TOE) framework.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since the public release of Chat Generative Pre-Trained Transformer (ChatGPT), extensive discourse has emerged concerning the potential advantages and challenges of integrating Generative Artificial Intelligence (GenAI) into education. In the realm of information systems, research on technology adoption is crucial for understanding the diverse factors influencing the uptake of specific technologies. Theoretical frameworks, refined and validated over decades, serve as guiding tools to elucidate the individual and organizational dynamics, obstacles, and perceptions surrounding technology adoption. However, while several models have been proposed, they often prioritize elucidating the factors that facilitate acceptance over those that impede it, typically focusing on the student perspective and leaving a gap in empirical evidence regarding educators' viewpoints. Given the pivotal role educators play in higher education, this study aims to develop a theoretical model to empirically predict the barriers preventing educators from adopting GenAI in their classrooms. Acknowledging the lack of theoretical models tailored to identifying such barriers, our approach is grounded in the Innovation Resistance Theory (IRT) framework and augmented with constructs from the Technology-Organization-Environment (TOE) framework. This model is transformed into a measurement instrument employing a quantitative approach, complemented by a qualitative approach to enrich the analysis and uncover concerns related to GenAI adoption in the higher education domain.
Related papers
- Computational Safety for Generative AI: A Signal Processing Perspective (arXiv, 2025-02-18)
  Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
  We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts.
  We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
- A Conceptual Exploration of Generative AI-Induced Cognitive Dissonance and its Emergence in University-Level Academic Writing (arXiv, 2025-02-08)
  This work explores how Generative Artificial Intelligence (GenAI) serves as both a trigger and amplifier of cognitive dissonance (CD).
  We introduce a hypothetical construct of GenAI-induced CD, illustrating the tension between AI-driven efficiency and the principles of originality, effort, and intellectual ownership.
  We discuss strategies to mitigate this dissonance, including reflective pedagogy, AI literacy programs, transparency in GenAI use, and discipline-specific task redesigns.
- The AI Assessment Scale Revisited: A Framework for Educational Assessment (arXiv, 2024-12-12)
  Recent developments in Generative Artificial Intelligence (GenAI) have created significant uncertainty in education.
  We present an updated version of the AI Assessment Scale (AIAS), a framework with two fundamental purposes.
- Generative AI Adoption in Classroom in Context of Technology Acceptance Model (TAM) and the Innovation Diffusion Theory (IDT) (arXiv, 2024-03-29)
  This study aims to dissect the underlying factors influencing educators' perceptions and acceptance of GenAI and LLMs.
  Our investigation reveals a strong positive correlation between the perceived usefulness of GenAI tools and their acceptance.
  Perceived ease of use also emerged as a significant factor influencing acceptance, though to a lesser extent.
- Bringing Generative AI to Adaptive Learning in Education (arXiv, 2024-02-02)
  We shed light on the intersectional studies of generative AI and adaptive learning.
  We argue that this union will contribute significantly to the development of the next-stage learning format in education.
- A Survey of Reasoning with Foundation Models (arXiv, 2023-12-17)
  Reasoning plays a pivotal role in various real-world settings such as negotiation, medical diagnosis, and criminal investigation.
  We introduce seminal foundation models proposed or adaptable for reasoning.
  We then delve into the potential future directions behind the emergence of reasoning abilities within foundation models.
- Multimodality of AI for Education: Towards Artificial General Intelligence (arXiv, 2023-12-10)
  Multimodal artificial intelligence (AI) approaches are paving the way towards the realization of Artificial General Intelligence (AGI) in educational contexts.
  This research delves deeply into the key facets of AGI, including cognitive frameworks, advanced knowledge representation, adaptive learning mechanisms, and the integration of diverse multimodal data sources.
  The paper also discusses the implications of multimodal AI's role in education, offering insights into future directions and challenges in AGI development.
- Towards a General Framework for Continual Learning with Pre-training (arXiv, 2023-10-21)
  We present a general framework for continual learning of sequentially arriving tasks with the use of pre-training.
  We decompose its objective into three hierarchical components: within-task prediction, task-identity inference, and task-adaptive prediction.
  We propose an approach to explicitly optimize these components with parameter-efficient fine-tuning (PEFT) techniques and representation statistics.
- Active Inference in Robotics and Artificial Agents: Survey and Challenges (arXiv, 2021-12-03)
  We review the state-of-the-art theory and implementations of active inference for state estimation, control, planning, and learning.
  We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization, and robustness.
- Confronting Structural Inequities in AI for Education (arXiv, 2021-05-18)
  We argue that the dominant paradigm of evaluating fairness on the basis of performance disparities in AI models is inadequate for confronting the structural inequities that educational AI systems (re)produce.
  We demonstrate how educational AI technologies are bound up in and reproduce historical legacies of structural injustice and inequity, regardless of the parity of their models' performance.
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) (arXiv, 2021-05-07)
  This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
  The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
This list is automatically generated from the titles and abstracts of the papers on this site.