Understanding Student and Academic Staff Perceptions of AI Use in Assessment and Feedback
- URL: http://arxiv.org/abs/2406.15808v1
- Date: Sat, 22 Jun 2024 10:25:01 GMT
- Title: Understanding Student and Academic Staff Perceptions of AI Use in Assessment and Feedback
- Authors: Jasper Roe, Mike Perkins, Daniel Ruelle
- Abstract summary: The rise of Artificial Intelligence (AI) and Generative Artificial Intelligence (GenAI) in higher education necessitates assessment reform.
This study addresses a critical gap by exploring student and academic staff experiences with AI and GenAI tools.
An online survey collected data from 35 academic staff and 282 students across two universities in Vietnam and one in Singapore.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rise of Artificial Intelligence (AI) and Generative Artificial Intelligence (GenAI) in higher education necessitates assessment reform. This study addresses a critical gap by exploring student and academic staff experiences with AI and GenAI tools, focusing on their familiarity and comfort with current and potential future applications in learning and assessment. An online survey collected data from 35 academic staff and 282 students across two universities in Vietnam and one in Singapore, examining GenAI familiarity, perceptions of its use in assessment marking and feedback, knowledge checking and participation, and experiences of GenAI text detection. Descriptive statistics and reflexive thematic analysis revealed a generally low familiarity with GenAI among both groups. GenAI feedback was viewed negatively; however, it was viewed more positively when combined with instructor feedback. Academic staff were more accepting of GenAI text detection tools and grade adjustments based on detection results compared to students. Qualitative analysis identified three themes: unclear understanding of text detection tools, variability in experiences with GenAI detectors, and mixed feelings about GenAI's future impact on educational assessment. These findings have major implications regarding the development of policies and practices for GenAI-enabled assessment and feedback in higher education.
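The abstract mentions descriptive statistics on Likert-style survey responses from staff and students. A minimal sketch of that kind of group-wise summary is below; the group labels, the 5-point familiarity item, and all response values are hypothetical illustrations, not data from the study.

```python
from statistics import mean
from collections import Counter

# Hypothetical 5-point Likert responses (1 = very unfamiliar, 5 = very familiar)
# to a "familiarity with GenAI" item, split by respondent group.
responses = {
    "academic_staff": [2, 3, 1, 2, 4, 2, 3],
    "students": [1, 2, 2, 3, 1, 4, 2, 2],
}

def describe(scores):
    """Return simple descriptive statistics for one group's Likert scores."""
    return {
        "n": len(scores),                              # group size
        "mean": round(mean(scores), 2),                # average familiarity
        "distribution": dict(Counter(sorted(scores))), # count per scale point
    }

summary = {group: describe(scores) for group, scores in responses.items()}
```

Comparing the `mean` and `distribution` entries across groups is the kind of descriptive comparison the paper reports before its qualitative thematic analysis.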
Related papers
- Dimensions of Generative AI Evaluation Design
We propose a set of general dimensions that capture critical choices involved in GenAI evaluation design.
These dimensions include the evaluation setting, the task type, the input source, the interaction style, the duration, the metric type, and the scoring method.
arXiv Detail & Related papers (2024-11-19T18:25:30Z)
- Early Adoption of Generative Artificial Intelligence in Computing Education: Emergent Student Use Cases and Perspectives in 2023
There is limited prior research on computing students' use and perceptions of GenAI.
We surveyed all computer science majors in a small engineering-focused R1 university.
We discuss the impact of our findings on the emerging conversation around GenAI and education.
arXiv Detail & Related papers (2024-11-17T20:17:47Z)
- Generative AI and Agency in Education: A Critical Scoping Review and Thematic Analysis
This review examines the relationship between Generative AI (GenAI) and agency in education, analyzing the literature available through the lens of Critical Digital Pedagogy.
We conducted an AI-supported hybrid thematic analysis that revealed three key themes: Control in Digital Spaces, Variable Engagement and Access, and Changing Notions of Agency.
The findings suggest that while GenAI may enhance learner agency through personalization and support, it also risks exacerbating educational inequalities and diminishing learner autonomy in certain contexts.
arXiv Detail & Related papers (2024-11-01T14:40:31Z)
- A Meta-analysis of College Students' Intention to Use Generative Artificial Intelligence
This study conducted a meta-analysis of 27 empirical studies under an integrated theoretical framework.
The main variables were strongly correlated with students' behavioural intention to use GenAI.
Notably, gender moderated only the effect of attitudes on students' behavioural intention to use GenAI.
arXiv Detail & Related papers (2024-08-25T15:46:57Z)
- Higher education assessment practice in the era of generative AI tools
This study experimented using three assessment instruments from data science, data analytics, and construction management disciplines.
Our findings revealed that GenAI tools exhibit subject knowledge, problem-solving, analytical, critical thinking, and presentation skills.
Based on our findings, we made recommendations on how AI tools can be utilised for teaching and learning in HE.
arXiv Detail & Related papers (2024-04-01T10:43:50Z)
- How Human-Centered Explainable AI Interfaces Are Designed and Evaluated: A Systematic Survey
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z)
- The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and Millennial Generation teachers?
Gen Z students were generally optimistic about the potential benefits of generative AI (GenAI).
Gen X and Gen Y teachers expressed heightened concerns about overreliance and about ethical and pedagogical implications.
arXiv Detail & Related papers (2023-05-04T14:42:06Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI)
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better user understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- AI Explainability 360: Impact and Design
In 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods.
This paper examines the impact of the toolkit with several case studies, statistics, and community feedback.
The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
arXiv Detail & Related papers (2021-09-24T19:17:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.