Generative AI for learning: Investigating the potential of synthetic
learning videos
- URL: http://arxiv.org/abs/2304.03784v2
- Date: Wed, 3 May 2023 20:42:49 GMT
- Title: Generative AI for learning: Investigating the potential of synthetic
learning videos
- Authors: Daniel Leiker, Ashley Ricker Gyllen, Ismail Eldesouky, Mutlu Cukurova
- Abstract summary: This research paper explores the utility of using AI-generated synthetic video to create viable educational content for online educational settings.
We examined the impact of using AI-generated synthetic video in an online learning platform on both learners' content acquisition and learning experience.
- Score: 0.6628807224384127
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent advances in generative artificial intelligence (AI) have captured
worldwide attention. Tools such as DALL-E 2 and ChatGPT suggest that tasks
previously thought to be beyond the capabilities of AI may now augment the
productivity of creative media in various new ways, including through the
generation of synthetic video. This research paper explores the utility of
using AI-generated synthetic video to create viable educational content for
online educational settings. To date, there is limited research investigating
the real-world educational value of AI-generated synthetic media. To address
this gap, we examined the impact of using AI-generated synthetic video in an
online learning platform on both learners' content acquisition and learning
experience. We took a mixed-method approach, randomly assigning adult learners
(n=83) into one of two micro-learning conditions, collecting pre- and
post-learning assessments, and surveying participants on their learning
experience. The control condition included a traditionally produced instructor
video, while the experimental condition included a synthetic video with a
realistic AI-generated character. The results show that learners in both
conditions demonstrated significant improvement from pre- to post-learning
(p<.001), with no significant differences in gains between the two conditions
(p=.80). In addition, no differences were observed in how learners perceived
the traditional and synthetic videos. These findings suggest that AI-generated
synthetic learning videos have the potential to be a viable substitute for
videos produced via traditional methods in online educational settings, making
high quality educational content more accessible across the globe.
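The abstract reports within-condition gains (p<.001) and a between-condition comparison of those gains (p=.80). As a rough illustration of how such results are typically computed, below is a minimal Python sketch assuming paired t-tests for the pre- to post-learning change within each condition and an independent-samples t-test on gain scores between conditions; the paper does not specify its exact tests, and all data and variable names here are hypothetical.

```python
# A minimal sketch of the analysis implied by the abstract, NOT the authors'
# actual code: paired t-tests for pre/post change within each condition and an
# independent-samples t-test on gain scores between conditions. All data,
# sample splits, and variable names below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pre/post scores for n=83 adult learners in two conditions.
n_control, n_synthetic = 42, 41
control_pre = rng.normal(55, 10, n_control)
control_post = control_pre + rng.normal(15, 8, n_control)
synthetic_pre = rng.normal(55, 10, n_synthetic)
synthetic_post = synthetic_pre + rng.normal(15, 8, n_synthetic)

# Within-condition improvement: paired t-test on pre vs. post scores.
t_ctrl, p_ctrl = stats.ttest_rel(control_post, control_pre)
t_syn, p_syn = stats.ttest_rel(synthetic_post, synthetic_pre)

# Between-condition comparison: independent-samples t-test on learning gains.
gain_control = control_post - control_pre
gain_synthetic = synthetic_post - synthetic_pre
t_gain, p_gain = stats.ttest_ind(gain_control, gain_synthetic)

print(f"Control pre->post:   t={t_ctrl:.2f}, p={p_ctrl:.3g}")
print(f"Synthetic pre->post: t={t_syn:.2f}, p={p_syn:.3g}")
print(f"Gain difference:     t={t_gain:.2f}, p={p_gain:.3g}")
```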
Related papers
- How Can Video Generative AI Transform K-12 Education? Examining Teachers' Perspectives through TPACK and TAM [0.7785405821914395]
Video generative AI (Video GenAI) has opened new possibilities for K-12 education by enabling the creation of dynamic, customized, and high-quality visual content.
This study explores the perspectives of leading K-12 teachers on the educational applications of Video GenAI.
arXiv Detail & Related papers (2025-03-11T03:08:07Z)
- Generative Ghost: Investigating Ranking Bias Hidden in AI-Generated Videos [106.5804660736763]
Video information retrieval remains a fundamental approach for accessing video content.
We build on the observation that retrieval models often favor AI-generated content in ad-hoc and image retrieval tasks.
We investigate whether similar biases emerge in the context of challenging video retrieval.
arXiv Detail & Related papers (2025-02-11T07:43:47Z)
- Immersion for AI: Immersive Learning with Artificial Intelligence [0.0]
This work reflects upon what Immersion can mean from the perspective of an Artificial Intelligence (AI).
Applying the lens of immersive learning theory, it seeks to understand whether this new perspective supports ways for AI participation in cognitive ecologies.
arXiv Detail & Related papers (2025-02-05T11:51:02Z)
- VideoWorld: Exploring Knowledge Learning from Unlabeled Videos [119.35107657321902]
This work explores whether a deep generative model can learn complex knowledge solely from visual input.
We develop VideoWorld, an auto-regressive video generation model trained on unlabeled video data, and test its knowledge acquisition abilities in video-based Go and robotic control tasks.
arXiv Detail & Related papers (2025-01-16T18:59:10Z)
- Adult learners recall and recognition performance and affective feedback when learning from an AI-generated synthetic video [1.7742433461734404]
The current study recruited 500 participants to investigate adult learners' recall and recognition performance as well as their affective feedback on the AI-generated synthetic video.
The results indicated no statistically significant difference amongst conditions on recall and recognition performance.
However, adult learners preferred to learn from the video formats rather than text materials.
arXiv Detail & Related papers (2024-11-28T21:40:28Z)
- Video as the New Language for Real-World Decision Making [100.68643056416394]
Video data captures important information about the physical world that is difficult to express in language.
Video can serve as a unified interface that can absorb internet knowledge and represent diverse tasks.
We identify major impact opportunities in domains such as robotics, self-driving, and science.
arXiv Detail & Related papers (2024-02-27T02:05:29Z)
- Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training [69.54948297520612]
Learning a generalist embodied agent poses challenges, primarily stemming from the scarcity of action-labeled robotic datasets.
We introduce a novel framework to tackle these challenges, which leverages a unified discrete diffusion to combine generative pre-training on human videos and policy fine-tuning on a small number of action-labeled robot videos.
Our method generates high-fidelity future videos for planning and enhances the fine-tuned policies compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2024-02-22T09:48:47Z)
- Learning by Watching: A Review of Video-based Learning Approaches for Robot Manipulation [0.0]
Recent works have explored learning manipulation skills by passively watching abundant videos sourced online.
This survey reviews foundations such as video feature representation learning techniques, object affordance understanding, 3D hand/body modeling, and large-scale robot resources.
We discuss how learning only from observing large-scale human videos can enhance generalization and sample efficiency for robotic manipulation.
arXiv Detail & Related papers (2024-02-11T08:41:42Z)
- A Survey on Generative AI and LLM for Video Generation, Understanding, and Streaming [26.082980156232086]
Top-trending AI technologies, i.e., generative artificial intelligence (Generative AI) and large language models (LLMs), are reshaping the field of video technology.
The paper highlights the innovative use of these technologies in producing highly realistic videos.
In the realm of video streaming, the paper discusses how LLMs contribute to more efficient and user-centric streaming experiences.
arXiv Detail & Related papers (2024-01-30T14:37:10Z)
- ArchiGuesser -- AI Art Architecture Educational Game [0.5919433278490629]
Generative AI can create educational content, from text and speech to images, based on simple input prompts.
In this paper we present the multisensory educational game ArchiGuesser that combines various AI technologies to serve a single purpose.
arXiv Detail & Related papers (2023-12-14T20:48:26Z)
- Mimicking the Maestro: Exploring the Efficacy of a Virtual AI Teacher in Fine Motor Skill Acquisition [3.07176124710244]
Motor skills, especially fine motor skills like handwriting, play an essential role in academic pursuits and everyday life.
Traditional methods to teach these skills, although effective, can be time-consuming and inconsistent.
We introduce an AI teacher model that captures the distinct characteristics of human instructors.
arXiv Detail & Related papers (2023-10-16T11:11:43Z)
- AI-Generated Images as Data Source: The Dawn of Synthetic Era [61.879821573066216]
Generative AI has unlocked the potential to create synthetic images that closely resemble real-world photographs.
This paper explores the innovative concept of harnessing these AI-generated images as new data sources.
In contrast to real data, AI-generated data exhibit remarkable advantages, including unmatched abundance and scalability.
arXiv Detail & Related papers (2023-10-03T06:55:19Z)
- Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides [57.86931911522967]
We test the capabilities of machine learning models in multimodal understanding of educational content.
Our dataset contains aligned slides and spoken language, for 180+ hours of video and 9000+ slides, with 10 lecturers from various subjects.
We introduce PolyViLT, a multimodal transformer trained with a multi-instance learning loss that is more effective than current approaches.
arXiv Detail & Related papers (2022-08-17T05:30:18Z)
- Weakly-supervised High-fidelity Ultrasound Video Synthesis with Feature Decoupling [13.161739586288704]
In clinical practice, analysis and diagnosis often rely on US sequences rather than a single image to obtain dynamic anatomical information.
This is challenging for novices to learn because practicing with adequate videos from patients is clinically impractical.
We propose a novel framework to synthesize high-fidelity US videos.
arXiv Detail & Related papers (2022-07-01T14:53:22Z)
- Self-Supervised Learning for Videos: A Survey [70.37277191524755]
Self-supervised learning has shown promise in both image and video domains.
In this survey, we provide a review of existing approaches on self-supervised learning focusing on the video domain.
arXiv Detail & Related papers (2022-06-18T00:26:52Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows benefits of AI explanations as interfaces for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as an anchoring effect on model judgments and added cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.