Tutorly: Turning Programming Videos Into Apprenticeship Learning Environments with LLMs
- URL: http://arxiv.org/abs/2405.12946v1
- Date: Tue, 21 May 2024 17:17:34 GMT
- Title: Tutorly: Turning Programming Videos Into Apprenticeship Learning Environments with LLMs
- Authors: Wengxi Li, Roy Pea, Nick Haber, Hari Subramonyam
- Abstract summary: Our work transforms programming videos into one-on-one tutoring experiences using the cognitive apprenticeship framework.
Tutorly, developed as a JupyterLab plugin, allows learners to set personalized learning goals.
- Score: 1.6961276655027102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online programming videos, including tutorials and streamcasts, are widely popular and contain a wealth of expert knowledge. However, effectively utilizing these resources to achieve targeted learning goals can be challenging. Unlike direct tutoring, video content lacks tailored guidance based on individual learning paces, personalized feedback, and interactive engagement necessary for support and monitoring. Our work transforms programming videos into one-on-one tutoring experiences using the cognitive apprenticeship framework. Tutorly, developed as a JupyterLab plugin, allows learners to (1) set personalized learning goals, (2) engage in learning-by-doing through a conversational LLM-based mentor agent, (3) receive guidance and feedback based on a student model that steers the mentor moves. In a within-subject study with 16 participants learning exploratory data analysis from a streamcast, Tutorly significantly improved their performance from 61.9% to 76.6% based on a post-test questionnaire. Tutorly demonstrates the potential for enhancing programming video learning experiences with LLMs and learner modeling.
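The loop the abstract describes (learning-by-doing with a mentor agent whose moves are steered by a student model) can be sketched as follows. This is an illustrative reconstruction, not Tutorly's actual implementation: the class names, the moving-average update, the mastery thresholds, and the move labels are all assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical student model: tracks per-goal mastery estimates in [0, 1].
@dataclass
class StudentModel:
    mastery: dict = field(default_factory=dict)

    def update(self, goal: str, correct: bool) -> None:
        # Exponential moving average over observed attempts (assumed update rule).
        prev = self.mastery.get(goal, 0.5)
        self.mastery[goal] = 0.8 * prev + 0.2 * (1.0 if correct else 0.0)

def choose_mentor_move(model: StudentModel, goal: str) -> str:
    """Map the student model's state to a cognitive-apprenticeship move.

    The thresholds and move names (scaffold/hint/fade) are placeholders
    for whatever steering policy the real system uses.
    """
    m = model.mastery.get(goal, 0.5)
    if m < 0.3:
        return "scaffold"   # model the step with a worked example
    if m < 0.7:
        return "hint"       # coach with a targeted hint
    return "fade"           # withdraw support; learner articulates on their own

model = StudentModel()
model.update("groupby", correct=False)
print(choose_mentor_move(model, "groupby"))
```

The point of the student model in this sketch is that the mentor's response to the same question changes as the learner's estimated mastery changes, which is what distinguishes the setup from a stateless chatbot.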
Related papers
- Self-Evolving GPT: A Lifelong Autonomous Experiential Learner [40.16716983217304]
We design a lifelong autonomous experiential learning framework based on large language models (LLMs).
It autonomously learns and accumulates experience through experience transfer and induction, categorizing input questions by type to select which accumulated experience to apply to each.
Experimental results on six widely used NLP datasets show that our framework performs reliably in each intermediate step and effectively improves the performance of GPT-3.5 and GPT-4.
arXiv Detail & Related papers (2024-07-12T02:49:13Z) - Experiential Co-Learning of Software-Developing Agents [83.34027623428096]
Large language models (LLMs) have brought significant changes to various domains, especially in software development.
We introduce Experiential Co-Learning, a novel LLM-agent learning framework.
Experiments demonstrate that the framework enables agents to tackle unseen software-developing tasks more effectively.
arXiv Detail & Related papers (2023-12-28T13:50:42Z) - Next-Step Hint Generation for Introductory Programming Using Large Language Models [0.8002196839441036]
Large Language Models possess skills such as answering questions, writing essays, or solving programming exercises.
This work explores how LLMs can contribute to programming education by supporting students with automated next-step hints.
arXiv Detail & Related papers (2023-12-03T17:51:07Z) - Large Language Model-Driven Classroom Flipping: Empowering Student-Centric Peer Questioning with Flipped Interaction [3.1473798197405953]
This paper investigates a pedagogical approach of classroom flipping based on flipped interaction in large language models.
Flipped interaction involves using language models to prioritize generating questions instead of answers to prompts.
We propose a workflow to integrate prompt engineering with clicker and JiTT quizzes by a poll-prompt-quiz routine and a quiz-prompt-discuss routine.
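Flipped interaction, as summarized above, inverts the usual prompt direction: the model is instructed to generate questions rather than answers. A minimal sketch of such a prompt template (the wording, topic, and function name are illustrative assumptions, not the paper's actual prompts):

```python
def flipped_prompt(topic: str, n_questions: int = 3) -> str:
    # Instead of asking the model to answer, instruct it to quiz the
    # student: generate questions and withhold explanations.
    return (
        f"You are quizzing a student on {topic}. "
        f"Do not explain the topic. Instead, ask {n_questions} short "
        "questions, one at a time, waiting for the student's answer "
        "before asking the next."
    )

print(flipped_prompt("binary search"))
```

In a poll-prompt-quiz routine, a template like this would sit between the instructor's poll and the clicker or JiTT quiz, seeding the questions that students then discuss.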
arXiv Detail & Related papers (2023-11-14T15:48:19Z) - InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation [90.71796406228265]
InternVid is a large-scale video-centric multimodal dataset that enables learning powerful and transferable video-text representations.
The InternVid dataset contains over 7 million videos lasting nearly 760K hours, yielding 234M video clips accompanied by detailed descriptions totaling 4.1B words.
arXiv Detail & Related papers (2023-07-13T17:58:32Z) - PLAR: Prompt Learning for Action Recognition [56.57236976757388]
We present a new general learning approach, Prompt Learning for Action Recognition (PLAR).
Our approach is designed to predict the action label by helping the models focus on the descriptions or instructions associated with actions in the input videos.
We observe a 3.110-7.2% accuracy improvement on the aerial multi-agent dataset Okutama and a 1.0-3.6% improvement on the ground camera single-agent dataset Something Something V2.
arXiv Detail & Related papers (2023-05-21T11:51:09Z) - Self-Supervised Learning for Videos: A Survey [70.37277191524755]
Self-supervised learning has shown promise in both image and video domains.
In this survey, we provide a review of existing approaches to self-supervised learning, focusing on the video domain.
arXiv Detail & Related papers (2022-06-18T00:26:52Z) - DMCNet: Diversified Model Combination Network for Understanding Engagement from Video Screengrabs [0.4397520291340695]
Engagement plays a major role in developing intelligent educational interfaces.
The non-deep-learning models combine popular algorithms such as Histogram of Oriented Gradients (HOG), Support Vector Machines (SVM), Scale-Invariant Feature Transform (SIFT), and Speeded-Up Robust Features (SURF).
The deep learning methods include Densely Connected Convolutional Networks (DenseNet-121), Residual Network (ResNet-18) and MobileNetV1.
arXiv Detail & Related papers (2022-04-13T15:24:38Z) - Motivating Learners in Multi-Orchestrator Mobile Edge Learning: A Stackelberg Game Approach [54.28419430315478]
Mobile Edge Learning (MEL) enables distributed training of machine learning models over heterogeneous edge devices.
In MEL, the training performance deteriorates without the availability of sufficient training data or computing resources.
We propose an incentive mechanism, where we formulate the orchestrator-learner interactions as a 2-round Stackelberg game.
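A 2-round Stackelberg game of this kind can be illustrated with a toy leader-follower exchange: the orchestrator (leader) posts a per-unit reward anticipating the learners' responses, and each learner (follower) best-responds with a data contribution. The quadratic utility functions and parameter values below are illustrative assumptions, not the paper's actual formulation.

```python
def follower_best_response(reward, cost):
    """Assumed learner utility: reward * d - cost * d**2.
    The first-order condition gives the best response d* = reward / (2 * cost)."""
    return reward / (2.0 * cost)

def leader_utility(reward, costs, value):
    # The orchestrator values each contributed data unit at `value`
    # and pays `reward` per unit to every participating learner.
    total_data = sum(follower_best_response(reward, c) for c in costs)
    return (value - reward) * total_data

# Round 1: the leader announces a reward, anticipating best responses.
costs = [1.0, 2.0, 4.0]   # heterogeneous per-unit training costs (assumed)
value = 10.0              # leader's per-unit valuation of data (assumed)
rewards = [0.01 * k for k in range(1, 1001)]  # grid search over 0.01 .. 10.0
best = max(rewards, key=lambda r: leader_utility(r, costs, value))

# Round 2: the learners best-respond to the announced reward.
responses = [follower_best_response(best, c) for c in costs]
print(best, responses)
```

With these utilities the leader's payoff is (value - reward) * reward * K for a constant K, so the grid search settles at reward = value / 2; higher-cost learners then contribute proportionally less data, which is the heterogeneity the incentive mechanism has to handle.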
arXiv Detail & Related papers (2021-09-25T17:27:48Z) - Designing a Web Application for Simple and Collaborative Video Annotation That Meets Teaching Routines and Educational Requirements [0.0]
We develop TRAVIS GO, a web application for simple and collaborative video annotation.
TRAVIS GO allows for quick and easy use within established teaching settings.
Key didactic features include tagging and commenting on posts, sharing and exporting projects, and working in live collaboration.
arXiv Detail & Related papers (2021-05-09T21:02:19Z) - Reinforcement Learning with Videos: Combining Offline Observations with Interaction [151.73346150068866]
Reinforcement learning is a powerful framework for robots to acquire skills from experience.
Videos of humans are a readily available source of broad and interesting experiences.
We propose a framework for reinforcement learning with videos.
arXiv Detail & Related papers (2020-11-12T17:15:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.