Tutorly: Turning Programming Videos Into Apprenticeship Learning Environments with LLMs
- URL: http://arxiv.org/abs/2405.12946v1
- Date: Tue, 21 May 2024 17:17:34 GMT
- Title: Tutorly: Turning Programming Videos Into Apprenticeship Learning Environments with LLMs
- Authors: Wengxi Li, Roy Pea, Nick Haber, Hari Subramonyam
- Abstract summary: Our work transforms programming videos into one-on-one tutoring experiences using the cognitive apprenticeship framework.
Tutorly, developed as a JupyterLab plugin, allows learners to set personalized learning goals.
- Score: 1.6961276655027102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online programming videos, including tutorials and streamcasts, are widely popular and contain a wealth of expert knowledge. However, effectively utilizing these resources to achieve targeted learning goals can be challenging. Unlike direct tutoring, video content lacks tailored guidance based on individual learning paces, personalized feedback, and the interactive engagement necessary for support and monitoring. Our work transforms programming videos into one-on-one tutoring experiences using the cognitive apprenticeship framework. Tutorly, developed as a JupyterLab plugin, allows learners to (1) set personalized learning goals, (2) engage in learning-by-doing through a conversational LLM-based mentor agent, and (3) receive guidance and feedback based on a student model that steers the mentor's moves. In a within-subject study with 16 participants learning exploratory data analysis from a streamcast, Tutorly significantly improved their performance from 61.9% to 76.6%, as measured by a post-test questionnaire. Tutorly demonstrates the potential for enhancing programming video learning experiences with LLMs and learner modeling.
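To picture how a student model might steer an LLM mentor's moves, here is a minimal sketch. This is a hypothetical illustration, not Tutorly's actual implementation: the StudentModel class, its exponential mastery update, and the move thresholds are all invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class StudentModel:
    """Tracks per-goal mastery estimates that steer the mentor's next move."""
    mastery: dict = field(default_factory=dict)  # goal -> estimate in [0, 1]

    def update(self, goal: str, correct: bool) -> None:
        # Simple exponential update; a real system might use knowledge tracing.
        prior = self.mastery.get(goal, 0.5)
        self.mastery[goal] = 0.8 * prior + 0.2 * (1.0 if correct else 0.0)

    def next_move(self, goal: str) -> str:
        # Map mastery to cognitive-apprenticeship moves: model -> scaffold -> probe.
        estimate = self.mastery.get(goal, 0.5)
        if estimate < 0.3:
            return "model"     # demonstrate the step for the learner
        if estimate < 0.7:
            return "scaffold"  # offer a partial hint
        return "probe"         # ask the learner to articulate their reasoning

def mentor_turn(llm, student: StudentModel, goal: str, learner_msg: str) -> str:
    """Compose one mentor reply, steered by the student model's chosen move."""
    move = student.next_move(goal)
    prompt = (
        f"You are a programming mentor. Pedagogical move: {move}.\n"
        f"Learning goal: {goal}\nLearner said: {learner_msg}\n"
        "Respond in that style without giving away the full solution."
    )
    return llm(prompt)  # llm: any prompt-to-text callable
```

The key design point the abstract suggests is the separation of concerns: the student model decides *what kind* of help to give, and the LLM decides *how to phrase it*.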
Related papers
- Language-Model-Assisted Bi-Level Programming for Reward Learning from Internet Videos [48.2044649011213]
We introduce a language-model-assisted bi-level programming framework that enables a reinforcement learning agent to learn its reward from internet videos.
The framework includes two levels: an upper level where a vision-language model (VLM) provides feedback by comparing the learner's behavior with expert videos, and a lower level where a large language model (LLM) translates this feedback into reward updates.
We validate the method on reward learning from YouTube videos; the results show that it enables efficient reward design from expert videos of biological agents (a toy sketch of the loop follows this entry).
arXiv Detail & Related papers (2024-10-11T22:31:39Z)
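As a reading aid for the entry above, here is a minimal sketch of the described bi-level loop; vlm_compare, llm_update, and train_agent are hypothetical stand-ins for the paper's VLM, LLM, and RL components, not its actual interfaces.

```python
def bilevel_reward_learning(vlm_compare, llm_update, train_agent,
                            expert_video, reward_weights, iterations=10):
    """Alternate between VLM feedback (upper level) and LLM-driven
    reward updates (lower level), as summarized in the abstract."""
    for _ in range(iterations):
        # Train the RL agent under the current reward parameters.
        rollout = train_agent(reward_weights)
        # Upper level: the VLM compares learner behavior with the expert video.
        feedback = vlm_compare(rollout, expert_video)
        # Lower level: the LLM translates textual feedback into reward updates.
        reward_weights = llm_update(feedback, reward_weights)
    return reward_weights
```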
- Investigating Developers' Preferences for Learning and Issue Resolution Resources in the ChatGPT Era [1.3124513975412255]
The landscape of software developer learning resources has continuously evolved, with recent trends favoring engaging formats like video tutorials.
The emergence of Large Language Models (LLMs) like ChatGPT presents a new learning paradigm.
We conducted a survey targeting software developers and computer science students, gathering 341 responses, of which 268 were completed and analyzed.
arXiv Detail & Related papers (2024-10-10T22:57:29Z)
- Is the Lecture Engaging for Learning? Lecture Voice Sentiment Analysis for Knowledge Graph-Supported Intelligent Lecturing Assistant (ILA) System [0.060227699034124595]
The system is designed to support instructors in enhancing student learning through real-time analysis of voice, content, and teaching methods.
We present a case study on lecture voice sentiment analysis, in which we developed a training set comprising over 3,000 one-minute lecture voice clips.
Our ultimate goal is to aid instructors in teaching more engagingly and effectively by leveraging modern artificial intelligence techniques.
arXiv Detail & Related papers (2024-08-20T02:22:27Z)
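To make the clip-level training setup in the entry above concrete, here is a toy classifier over placeholder acoustic features; the feature choice, labels, and model are assumptions for illustration, not the ILA system's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((3000, 40))    # placeholder: one feature vector per one-minute clip
y = rng.integers(0, 2, 3000)  # placeholder labels: engaging (1) vs. not (0)

# Hold out 20% of clips, fit a linear baseline, and report accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```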
- ExpertAF: Expert Actionable Feedback from Video [81.46431188306397]
We introduce a novel method to generate actionable feedback from video of a person doing a physical activity.
Our method takes a video demonstration and its accompanying 3D body pose and generates expert commentary.
Our method is able to reason across multi-modal input combinations to output full-spectrum, actionable coaching.
arXiv Detail & Related papers (2024-08-01T16:13:07Z)
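In the spirit of the ExpertAF entry above, a pipeline that fuses video and 3D-pose streams before decoding commentary could be sketched as follows; every encoder and decoder name here is invented for illustration and is not the paper's actual code.

```python
def actionable_feedback(video_frames, pose_sequence,
                        video_encoder, pose_encoder, language_decoder):
    """Fuse visual and kinematic evidence, then decode expert commentary."""
    visual = video_encoder(video_frames)     # e.g. clip-level appearance features
    kinematic = pose_encoder(pose_sequence)  # e.g. per-frame joint-angle features
    # "Multi-modal input combinations": tolerate a missing stream at inference.
    available = [x for x in (visual, kinematic) if x is not None]
    return language_decoder(*available)
```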
- Experiential Co-Learning of Software-Developing Agents [83.34027623428096]
Large language models (LLMs) have brought significant changes to various domains, especially in software development.
We introduce Experiential Co-Learning, a novel LLM-agent learning framework.
Experiments demonstrate that the framework enables agents to tackle unseen software-developing tasks more effectively.
arXiv Detail & Related papers (2023-12-28T13:50:42Z)
- Large Language Model-Driven Classroom Flipping: Empowering Student-Centric Peer Questioning with Flipped Interaction [3.1473798197405953]
This paper investigates a pedagogical approach to classroom flipping based on flipped interaction with large language models.
Flipped interaction uses language models to generate questions rather than answers to prompts.
We propose a workflow that integrates prompt engineering with clicker and JiTT quizzes via a poll-prompt-quiz routine and a quiz-prompt-discuss routine (a prompt sketch follows this entry).
arXiv Detail & Related papers (2023-11-14T15:48:19Z)
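The flipped-interaction idea from the classroom-flipping entry above can be shown with a one-function sketch: the model is instructed to ask questions rather than answer them. The prompt wording below is hypothetical, not the paper's actual prompt.

```python
def flipped_prompt(topic: str, n_questions: int = 3) -> str:
    """Build a prompt that flips the interaction: the LLM becomes the questioner."""
    return (
        "Do not answer or explain anything. Instead, act as a quizmaster: "
        f"ask me {n_questions} clicker-style multiple-choice questions about "
        f"{topic}, one at a time, and wait for my answer before continuing."
    )

# Example usage with any chat LLM:
print(flipped_prompt("Python list comprehensions"))
```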
- InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation [90.71796406228265]
InternVid is a large-scale video-centric multimodal dataset that enables learning powerful and transferable video-text representations.
The InternVid dataset contains over 7 million videos lasting nearly 760K hours, yielding 234M video clips accompanied by detailed descriptions totaling 4.1B words.
arXiv Detail & Related papers (2023-07-13T17:58:32Z)
- Self-Supervised Learning for Videos: A Survey [70.37277191524755]
Self-supervised learning has shown promise in both image and video domains.
In this survey, we review existing approaches to self-supervised learning, focusing on the video domain.
arXiv Detail & Related papers (2022-06-18T00:26:52Z)
- Motivating Learners in Multi-Orchestrator Mobile Edge Learning: A Stackelberg Game Approach [54.28419430315478]
Mobile Edge Learning (MEL) enables distributed training of machine learning models over heterogeneous edge devices.
In MEL, training performance deteriorates without sufficient training data or computing resources.
We propose an incentive mechanism in which the orchestrator-learner interactions are formulated as a two-round Stackelberg game (a toy sketch follows this entry).
arXiv Detail & Related papers (2021-09-25T17:27:48Z)
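The leader-follower structure from the Stackelberg entry above can be illustrated with a toy quadratic-cost model; the costs, utilities, and single-shot solution below are simplifying assumptions, not the paper's actual formulation.

```python
import numpy as np

costs = np.array([1.0, 1.5, 2.0])  # hypothetical per-learner effort costs

def learner_effort(r: float) -> np.ndarray:
    # Follower best response: each learner maximizes r*e - c*e**2, so e* = r / (2c).
    return r / (2 * costs)

def orchestrator_utility(r: float) -> float:
    # Leader values total effort at a fixed rate (2.0) but pays r per unit of effort.
    effort = learner_effort(r)
    return float((2.0 - r) * effort.sum())

# Leader moves first, anticipating the followers' best responses.
best_r = max(np.linspace(0.1, 4.0, 100), key=orchestrator_utility)
print(f"leader's reward: {best_r:.2f}, follower efforts: {learner_effort(best_r)}")
```

In this toy model the leader's optimum lands near r = 1.0: higher rewards buy more effort but cost proportionally more, which is the tension the game-theoretic incentive mechanism is meant to resolve.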
- Designing a Web Application for Simple and Collaborative Video Annotation That Meets Teaching Routines and Educational Requirements [0.0]
We develop TRAVIS GO, a web application for simple and collaborative video annotation.
TRAVIS GO allows for quick and easy use within established teaching settings.
Key didactic features include tagging and commenting on posts, sharing and exporting projects, and working in live collaboration.
arXiv Detail & Related papers (2021-05-09T21:02:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.