Adult learners' recall and recognition performance and affective feedback when learning from an AI-generated synthetic video
- URL: http://arxiv.org/abs/2412.10384v1
- Date: Thu, 28 Nov 2024 21:40:28 GMT
- Title: Adult learners' recall and recognition performance and affective feedback when learning from an AI-generated synthetic video
- Authors: Zoe Ruo-Yu Li, Caswell Barry, Mutlu Cukurova
- Abstract summary: The current study recruited 500 participants to investigate adult learners' recall and recognition performance as well as their affective feedback on the AI-generated synthetic video. The results indicated no statistically significant difference amongst conditions on recall and recognition performance. However, adult learners preferred to learn from the video formats rather than text materials.
- Score: 1.7742433461734404
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The widespread use of generative AI has led to multiple applications of AI-generated text and media to potentially enhance learning outcomes. However, few well-designed experimental studies have investigated the impact of AI-generated media on learning gains and affective feedback compared to traditional media (e.g., text from documents and human recordings of video). The current study recruited 500 participants to investigate adult learners' recall and recognition performance as well as their affective feedback on the AI-generated synthetic video, using a mixed-methods approach with a pre- and post-test design. Specifically, four learning conditions were considered: AI-generated framing of human instructor-generated text; AI-generated synthetic videos with human instructor-generated text; human instructor-generated videos; and human instructor-generated text frame (baseline). The results indicated no statistically significant difference amongst conditions on recall and recognition performance. In addition, the participants' affective feedback did not differ statistically significantly between the two video conditions. However, adult learners preferred to learn from the video formats rather than from text materials.
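As a rough illustration of the kind of analysis this four-condition, pre/post design implies, the sketch below runs a one-way ANOVA on recall gain scores across the conditions. The condition labels, simulated data, and scipy workflow are assumptions for illustration, not the authors' actual analysis pipeline.

```python
# A minimal sketch, assuming recall gains are compared across the four
# learning conditions with a one-way ANOVA (hypothetical data and labels;
# not the authors' actual analysis).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Hypothetical post-test minus pre-test recall gains, 125 learners per
# condition (500 participants / 4 conditions).
conditions = {
    "ai_text": rng.normal(2.0, 1.0, 125),       # AI-generated framing of text
    "ai_video": rng.normal(2.1, 1.0, 125),      # AI-generated synthetic video
    "human_video": rng.normal(2.05, 1.0, 125),  # human instructor video
    "human_text": rng.normal(1.95, 1.0, 125),   # human text frame (baseline)
}

f_stat, p_value = f_oneway(*conditions.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 would be consistent with the paper's finding of no
# statistically significant difference amongst conditions.
```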
Related papers
- GRADEO: Towards Human-Like Evaluation for Text-to-Video Generation via Multi-Step Reasoning [62.775721264492994]
GRADEO is one of the first specifically designed video evaluation models.
It grades AI-generated videos for explainable scores and assessments through multi-step reasoning.
Experiments show that our method aligns better with human evaluations than existing methods.
arXiv Detail & Related papers (2025-03-04T07:04:55Z)
- Approaching the Limits to EFL Writing Enhancement with AI-generated Text and Diverse Learners [3.2668433085737036]
Students can compose texts by integrating their own words with AI-generated text.
This study investigated how 59 Hong Kong secondary school students interacted with AI-generated text to compose a feature article.
arXiv Detail & Related papers (2025-03-01T06:29:00Z)
- Generative Ghost: Investigating Ranking Bias Hidden in AI-Generated Videos [106.5804660736763]
Video information retrieval remains a fundamental approach for accessing video content.
We build on the observation that retrieval models often favor AI-generated content in ad-hoc and image retrieval tasks.
We investigate whether similar biases emerge in the context of challenging video retrieval.
arXiv Detail & Related papers (2025-02-11T07:43:47Z)
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- MindSpeech: Continuous Imagined Speech Decoding using High-Density fNIRS and Prompt Tuning for Advanced Human-AI Interaction [0.0]
This paper reports a novel method for human-AI interaction by developing a direct brain-AI interface.
We discuss a novel AI model, called MindSpeech, which enables open-vocabulary, continuous decoding for imagined speech.
We demonstrate significant improvements in key metrics, such as BLEU-1 and BERT P scores, for three out of four participants.
arXiv Detail & Related papers (2024-07-25T16:39:21Z)
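Since the MindSpeech entry above reports gains on BLEU-1, a hedged sketch of how that metric can be computed with NLTK follows. The sentences are invented, and this shows the generic metric only, not the MindSpeech evaluation code.

```python
# A minimal sketch of the BLEU-1 metric cited above, using NLTK.
# The reference/hypothesis sentences are invented for illustration;
# this is not the MindSpeech evaluation pipeline.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "quick", "brown", "fox", "jumps"]]
hypothesis = ["the", "fast", "brown", "fox", "jumps"]

# weights=(1, 0, 0, 0) restricts scoring to unigram overlap, i.e. BLEU-1.
smooth = SmoothingFunction().method1
score = sentence_bleu(reference, hypothesis, weights=(1, 0, 0, 0),
                      smoothing_function=smooth)
print(f"BLEU-1 = {score:.2f}")  # 4 of 5 unigrams match -> 0.80
```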
- Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2024-05-07T07:57:15Z)
- A Comparative Study of Perceptual Quality Metrics for Audio-driven Talking Head Videos [81.54357891748087]
We collect talking head videos generated from four generative methods.
We conduct controlled psychophysical experiments on visual quality, lip-audio synchronization, and head movement naturalness.
Our experiments validate consistency between model predictions and human annotations, identifying metrics that align better with human opinions than widely-used measures.
arXiv Detail & Related papers (2024-03-11T04:13:38Z)
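The talking-head entry above validates metric predictions against human annotations; agreement of that kind is commonly quantified with a Spearman rank correlation between metric scores and mean opinion scores, as in the sketch below. The scores are illustrative assumptions, not the paper's data or protocol.

```python
# A minimal sketch of checking metric/human agreement via Spearman rank
# correlation, a common choice in perceptual quality studies (hypothetical
# scores; not the paper's actual data or protocol).
from scipy.stats import spearmanr

# Hypothetical per-video scores from an objective quality metric...
metric_scores = [0.62, 0.71, 0.55, 0.80, 0.67, 0.74]
# ...and mean opinion scores from human raters for the same videos.
human_mos = [3.1, 3.8, 2.9, 4.2, 3.3, 4.0]

rho, p_value = spearmanr(metric_scores, human_mos)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# Higher rho means the metric ranks videos more like human raters do.
```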
- Evaluating the Efficacy of Hybrid Deep Learning Models in Distinguishing AI-Generated Text [0.0]
My research investigates the use of cutting-edge hybrid deep learning models to accurately differentiate between AI-generated text and human writing.
I applied a robust methodology, utilising a carefully selected dataset comprising AI and human texts from various sources, each tagged with instructions.
arXiv Detail & Related papers (2023-11-27T06:26:53Z)
- The Imitation Game: Detecting Human and AI-Generated Texts in the Era of ChatGPT and BARD [3.2228025627337864]
We introduce a novel dataset of human-written and AI-generated texts in different genres.
We employ several machine learning models to classify the texts.
Results demonstrate the efficacy of these models in discerning between human and AI-generated text.
arXiv Detail & Related papers (2023-07-22T21:00:14Z)
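As a hedged illustration of the classification setup the two detection entries above describe, the sketch below trains a TF-IDF plus logistic regression baseline to separate human from AI-generated text. The toy corpus and the scikit-learn pipeline are assumptions for illustration, not the authors' models or datasets.

```python
# A minimal sketch of a human-vs-AI text classifier in the spirit of the
# detection papers above (toy corpus and scikit-learn baseline are
# illustrative assumptions, not the authors' models).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I scribbled this note between meetings, typos and all.",
    "The sunset bled orange over the pier while we argued about dinner.",
    "As an AI language model, I can provide a structured overview.",
    "In conclusion, the aforementioned factors collectively demonstrate the outcome.",
]
labels = ["human", "human", "ai", "ai"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["Overall, these considerations comprehensively illustrate the point."]))
```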
- A Video Is Worth 4096 Tokens: Verbalize Videos To Understand Them In Zero Shot [67.00455874279383]
We propose verbalizing long videos to generate descriptions in natural language, then performing video-understanding tasks on the generated story as opposed to the original video.
Our method, despite being zero-shot, achieves significantly better results than supervised baselines for video understanding.
To alleviate a lack of story-understanding benchmarks, we publicly release the first dataset for a crucial task in computational social science: persuasion strategy identification.
arXiv Detail & Related papers (2023-05-16T19:13:11Z)
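The verbalization entry above is essentially a two-stage pipeline: caption the video, then run the understanding task on the resulting text. The sketch below shows only that structural shape; `caption_frame` and `answer_question` are hypothetical placeholders (e.g., an image captioner and a language model), not the paper's actual components.

```python
# A structural sketch of the verbalize-then-understand idea above.
# caption_frame and answer_question are hypothetical placeholders for an
# image captioner and a language model; this is not the paper's code.
from typing import Callable, List

def verbalize_video(frames: List[bytes],
                    caption_frame: Callable[[bytes], str]) -> str:
    """Turn sampled video frames into a natural-language 'story'."""
    captions = [caption_frame(f) for f in frames]
    return " ".join(captions)

def video_qa(frames: List[bytes],
             question: str,
             caption_frame: Callable[[bytes], str],
             answer_question: Callable[[str, str], str]) -> str:
    """Answer a question about the video using only its verbalization."""
    story = verbalize_video(frames, caption_frame)
    # Downstream understanding runs on the story, not on raw pixels.
    return answer_question(story, question)
```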
- Generative AI for learning: Investigating the potential of synthetic learning videos [0.6628807224384127]
This research paper explores the utility of using AI-generated synthetic video to create viable educational content for online educational settings.
We examined the impact of using AI-generated synthetic video in an online learning platform on both learners' content acquisition and learning experience.
arXiv Detail & Related papers (2023-04-07T12:57:42Z)
- Reading and Writing: Discriminative and Generative Modeling for Self-Supervised Text Recognition [101.60244147302197]
We introduce contrastive learning and masked image modeling to learn discrimination and generation of text images.
Our method outperforms previous self-supervised text recognition methods by 10.2%-20.2% on irregular scene text recognition datasets.
Our proposed text recognizer exceeds previous state-of-the-art text recognition methods by an average of 5.3% on 11 benchmarks, with similar model size.
arXiv Detail & Related papers (2022-07-01T03:50:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences.