Tutorly: Turning Programming Videos Into Apprenticeship Learning Environments with LLMs
- URL: http://arxiv.org/abs/2405.12946v1
- Date: Tue, 21 May 2024 17:17:34 GMT
- Title: Tutorly: Turning Programming Videos Into Apprenticeship Learning Environments with LLMs
- Authors: Wengxi Li, Roy Pea, Nick Haber, Hari Subramonyam
- Abstract summary: Our work transforms programming videos into one-on-one tutoring experiences using the cognitive apprenticeship framework.
Tutorly, developed as a JupyterLab plugin, allows learners to set personalized learning goals.
- Score: 1.6961276655027102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online programming videos, including tutorials and streamcasts, are widely popular and contain a wealth of expert knowledge. However, effectively utilizing these resources to achieve targeted learning goals can be challenging. Unlike direct tutoring, video content lacks the tailored guidance based on individual learning paces, personalized feedback, and interactive engagement necessary for support and monitoring. Our work transforms programming videos into one-on-one tutoring experiences using the cognitive apprenticeship framework. Tutorly, developed as a JupyterLab plugin, allows learners to (1) set personalized learning goals, (2) engage in learning-by-doing through a conversational LLM-based mentor agent, and (3) receive guidance and feedback based on a student model that steers the mentor's moves. In a within-subject study with 16 participants learning exploratory data analysis from a streamcast, Tutorly significantly improved their performance from 61.9% to 76.6% based on a post-test questionnaire. Tutorly demonstrates the potential for enhancing programming video learning experiences with LLMs and learner modeling.
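The abstract describes a mentor agent whose pedagogical moves are steered by a student model. The paper does not publish this interface, so the sketch below is purely illustrative: the class and function names (`StudentModel`, `mentor_turn`) and the mastery thresholds are assumptions, and the LLM call is stubbed out.

```python
# Hypothetical sketch of a student-model-steered mentor loop.
# All names and thresholds are illustrative, not from the paper.

class StudentModel:
    """Tracks a coarse mastery estimate per learning goal."""

    def __init__(self, goals):
        self.mastery = {g: 0.0 for g in goals}

    def update(self, goal, correct):
        # Exponential moving average toward 1 (correct) or 0 (incorrect).
        target = 1.0 if correct else 0.0
        self.mastery[goal] = 0.7 * self.mastery[goal] + 0.3 * target

    def choose_move(self, goal):
        # Low mastery -> explain; medium -> hint; high -> probe deeper.
        m = self.mastery[goal]
        if m < 0.3:
            return "explain"
        if m < 0.7:
            return "hint"
        return "probe"

def mentor_turn(model, goal, learner_answer_correct):
    """One tutoring turn: update the student model, then pick a move."""
    model.update(goal, learner_answer_correct)
    move = model.choose_move(goal)
    # In a real system, the chosen move would condition the mentor
    # agent's LLM prompt; here we just return the selected move.
    return move

model = StudentModel(["pandas-groupby", "plotting"])
print(mentor_turn(model, "pandas-groupby", False))  # low mastery -> "explain"
```

The design point is the separation of concerns the abstract implies: the student model decides *what kind* of move to make, and the LLM only decides *how to phrase it*.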
Related papers
- LLM-powered Multi-agent Framework for Goal-oriented Learning in Intelligent Tutoring System [54.71619734800526]
GenMentor is a multi-agent framework designed to deliver goal-oriented, personalized learning within ITS.
It maps learners' goals to required skills using a fine-tuned LLM trained on a custom goal-to-skill dataset.
GenMentor tailors learning content with an exploration-drafting-integration mechanism to align with individual learner needs.
arXiv Detail & Related papers (2025-01-27T03:29:44Z)
- Language-Model-Assisted Bi-Level Programming for Reward Learning from Internet Videos [48.2044649011213]
We introduce a language-model-assisted bi-level programming framework that enables a reinforcement learning agent to learn its reward from internet videos.
The framework includes two levels: an upper level where a vision-language model (VLM) provides feedback by comparing the learner's behavior with expert videos, and a lower level where a large language model (LLM) translates this feedback into reward updates.
We validate the method for reward learning from YouTube videos, and the results show that the proposed method enables efficient reward design from expert videos of biological agents.
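The two-level structure described above can be sketched as a simple loop. The function names (`vlm_compare`, `llm_refine_reward`) are placeholders, not the paper's API, and both model calls are stubbed with numeric toys so the control flow is runnable.

```python
# Illustrative sketch of the bi-level loop: an upper level that critiques
# the learner against an expert video, and a lower level that turns the
# critique into a reward update. All names and numbers are assumptions.

def vlm_compare(learner_rollout, expert_video):
    """Upper level: a VLM would compare behaviors and emit text feedback.
    Stubbed with a distance-based critique."""
    gap = abs(learner_rollout - expert_video)
    return f"learner is {gap:.2f} units from expert behavior"

def llm_refine_reward(reward_weight, feedback):
    """Lower level: an LLM would translate feedback into a reward update.
    Stubbed by parsing the gap and stepping toward closing it."""
    gap = float(feedback.split()[2])
    return reward_weight + 0.5 * gap

def bilevel_loop(expert_video=10.0, iterations=5):
    reward_weight = 0.0
    for _ in range(iterations):
        # Stand-in for RL training: rollout quality tracks the reward.
        learner_rollout = reward_weight
        feedback = vlm_compare(learner_rollout, expert_video)
        reward_weight = llm_refine_reward(reward_weight, feedback)
    return reward_weight
```

In this toy, the reward weight converges toward the expert behavior across iterations, mirroring how the outer feedback loop is meant to steadily close the learner-expert gap.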
arXiv Detail & Related papers (2024-10-11T22:31:39Z)
- Investigating Developers' Preferences for Learning and Issue Resolution Resources in the ChatGPT Era [1.3124513975412255]
The landscape of software developer learning resources has continuously evolved, with recent trends favoring engaging formats like video tutorials.
The emergence of Large Language Models (LLMs) like ChatGPT presents a new learning paradigm.
We conducted a survey targeting software developers and computer science students, gathering 341 responses, of which 268 were completed and analyzed.
arXiv Detail & Related papers (2024-10-10T22:57:29Z)
- Exploring Knowledge Tracing in Tutor-Student Dialogues using LLMs [49.18567856499736]
We investigate whether large language models (LLMs) can be supportive of open-ended dialogue tutoring.
We apply a range of knowledge tracing (KT) methods on the resulting labeled data to track student knowledge levels over an entire dialogue.
We conduct experiments on two tutoring dialogue datasets, and show that a novel yet simple LLM-based method, LLMKT, significantly outperforms existing KT methods in predicting student response correctness in dialogues.
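The entry above compares an LLM-based method against existing knowledge tracing baselines. As context for what those baselines do, here is a minimal Bayesian Knowledge Tracing (BKT) update, a standard classical KT method; the parameter values (`slip`, `guess`, `learn`) are illustrative, not from the paper.

```python
# One step of classical Bayesian Knowledge Tracing (BKT): posterior over
# the student's knowledge given an observed response, then a learning
# transition. Parameter values are illustrative defaults.

def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.3):
    """Return the updated probability that the student knows the skill."""
    if correct:
        num = p_know * (1 - slip)
        den = num + (1 - p_know) * guess
    else:
        num = p_know * slip
        den = num + (1 - p_know) * (1 - guess)
    posterior = num / den
    # Learning transition: the student may acquire the skill this step.
    return posterior + (1 - posterior) * learn

# Trace a short response sequence (True = correct answer).
p = 0.5
for response in [True, True, False, True]:
    p = bkt_update(p, response)
```

LLM-based methods like the LLMKT approach described above replace this fixed parametric update with inferences drawn from the dialogue text itself, which is why they can exploit open-ended tutoring conversations that BKT cannot.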
arXiv Detail & Related papers (2024-09-24T22:31:39Z)
- Is the Lecture Engaging for Learning? Lecture Voice Sentiment Analysis for Knowledge Graph-Supported Intelligent Lecturing Assistant (ILA) System [0.060227699034124595]
The system is designed to support instructors in enhancing student learning through real-time analysis of voice, content, and teaching methods.
We present a case study on lecture voice sentiment analysis, in which we developed a training set comprising over 3,000 one-minute lecture voice clips.
Our ultimate goal is to aid instructors in teaching more engagingly and effectively by leveraging modern artificial intelligence techniques.
arXiv Detail & Related papers (2024-08-20T02:22:27Z)
- ExpertAF: Expert Actionable Feedback from Video [81.46431188306397]
Current methods for skill assessment from video only provide scores or compare demonstrations.
We introduce a novel method to generate actionable feedback from video of a person doing a physical activity.
Our method is able to reason across multi-modal input combinations to output full-spectrum, actionable coaching.
arXiv Detail & Related papers (2024-08-01T16:13:07Z)
- Experiential Co-Learning of Software-Developing Agents [83.34027623428096]
Large language models (LLMs) have brought significant changes to various domains, especially in software development.
We introduce Experiential Co-Learning, a novel LLM-agent learning framework.
Experiments demonstrate that the framework enables agents to tackle unseen software-developing tasks more effectively.
arXiv Detail & Related papers (2023-12-28T13:50:42Z)
- Self-Supervised Learning for Videos: A Survey [70.37277191524755]
Self-supervised learning has shown promise in both image and video domains.
In this survey, we provide a review of existing approaches to self-supervised learning, focusing on the video domain.
arXiv Detail & Related papers (2022-06-18T00:26:52Z)
- Designing a Web Application for Simple and Collaborative Video Annotation That Meets Teaching Routines and Educational Requirements [0.0]
We develop TRAVIS GO, a web application for simple and collaborative video annotation.
TRAVIS GO allows for quick and easy use within established teaching settings.
Key didactic features include tagging and commenting on posts, sharing and exporting projects, and working in live collaboration.
arXiv Detail & Related papers (2021-05-09T21:02:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.