Generative Teaching via Code
- URL: http://arxiv.org/abs/2601.04204v1
- Date: Sun, 07 Dec 2025 12:52:24 GMT
- Title: Generative Teaching via Code
- Authors: Yuheng Wang, Runde Yang, Lin Wu, Jie Zhang, Jingru Fan, Ruoyu Fu, Tianle Zhou, Huatao Li, Siheng Chen, Weinan E, Chen Qian
- Abstract summary: TeachMaster orchestrates a collaborative team of agents--spanning planning, design, and rendering--to automate the production of interpretable, editable, and curriculum-ready educational videos. Experiments validate that TeachMaster significantly boosts production efficiency without compromising structural coherence or visual fidelity.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The scalability of high-quality online education is hindered by the high costs and slow cycles of labor-intensive manual content creation. Despite advancements in video generation, current approaches often fail to ensure pedagogical structure and precise control due to their pixel-level, black-box nature. In this paper, we propose Generative Teaching, a novel paradigm that transitions educators from manual creators to high-level directors, allowing them to focus on pedagogical intent while autonomous agents handle the execution. To realize this vision, we introduce TeachMaster, a multi-agent framework that leverages code as an intermediate semantic medium. Unlike traditional video generation methods, TeachMaster orchestrates a collaborative team of agents--spanning planning, design, and rendering--to automate the production of interpretable, editable, and curriculum-ready educational videos. Experiments validate that TeachMaster significantly boosts production efficiency without compromising structural coherence or visual fidelity, providing a robust solution for scalable education.
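The planning-design-rendering division of labor described in the abstract can be sketched as a minimal pipeline in which each agent is a plain function and the intermediate medium is a list of editable "render command" strings. All names here (LessonPlan, show_slide, and so on) are illustrative assumptions for this sketch, not TeachMaster's actual interfaces:

```python
# Minimal sketch of a planner -> designer -> renderer agent pipeline in which
# code (here, strings of pseudo render commands) is the intermediate medium.
# All names are hypothetical, chosen for illustration only.
from dataclasses import dataclass, field


@dataclass
class LessonPlan:
    topic: str
    sections: list = field(default_factory=list)


def planner(topic: str) -> LessonPlan:
    """Turn high-level pedagogical intent into an ordered section list."""
    return LessonPlan(topic, [f"intro:{topic}", f"example:{topic}", f"summary:{topic}"])


def designer(plan: LessonPlan) -> list:
    """Translate each section into an editable code snippet (the semantic medium)."""
    return [f'show_slide("{s}")' for s in plan.sections]


def renderer(snippets: list) -> str:
    """Lay the snippets out as a frame-by-frame script; a real renderer would execute them."""
    return "\n".join(f"frame {i}: {code}" for i, code in enumerate(snippets))


video = renderer(designer(planner("fractions")))
print(video)
```

The point of the intermediate code layer is that an educator can inspect or edit any individual `show_slide` call before rendering, which is what makes the output interpretable and editable rather than a pixel-level black box.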
Related papers
- PedaCo-Gen: Scaffolding Pedagogical Agency in Human-AI Collaborative Video Authoring
This study introduces PedaCo-Gen, a collaborative video-generation system for authoring instructional videos based on Mayer's Cognitive Theory of Multimedia Learning (CTML). Moving away from traditional "one-shot" generation, PedaCo-Gen introduces an Intermediate Representation phase, enabling educators to interactively review and refine video blueprints, comprising scripts and visual descriptions, with an AI reviewer. A study with 23 education experts demonstrates that PedaCo-Gen significantly enhances video quality across various topics and CTML principles compared to baselines.
arXiv Detail & Related papers (2026-02-23T09:12:13Z) - Beyond End-to-End Video Models: An LLM-Based Multi-Agent System for Educational Video Generation
LAVES is a hierarchical multi-agent system for generating high-quality instructional videos from educational problems. In large-scale deployments, LAVES achieves a throughput exceeding one million videos per day, delivering over a 95% reduction in cost.
arXiv Detail & Related papers (2026-02-12T10:14:36Z) - Bridging Your Imagination with Audio-Video Generation via a Unified Director
We argue that logical reasoning and imaginative thinking are both fundamental qualities of a film director. We propose UniMAGE, a unified director model that bridges user prompts with well-structured scripts.
arXiv Detail & Related papers (2025-12-29T05:56:22Z) - Addressing Situated Teaching Needs: A Multi-Agent Framework for Automated Slide Adaptation
We introduce a novel multi-agent framework designed to automate slide adaptation based on instructor specifications. An evaluation involving 16 modification requests across 8 real-world courses validates our approach. This work heralds a new paradigm in which AI agents handle the logistical burdens of instructional design, liberating educators to focus on the creative and strategic aspects of teaching.
arXiv Detail & Related papers (2025-11-24T07:22:41Z) - Code2Video: A Code-centric Paradigm for Educational Video Generation
We propose Code2Video, a code-centric agent framework for generating educational videos via Python code. The framework comprises three collaborative agents: (i) Planner, which structures lecture content into temporally coherent flows; (ii) Coder, which converts structured instructions into executable Python code while incorporating scope-guided auto-fix to enhance efficiency; and (iii) Critic, which leverages vision-language models (VLMs) with visual anchor prompts to refine spatial layout and ensure clarity. Our results demonstrate the potential of Code2Video as a scalable, interpretable, and controllable approach, achieving a 40% improvement over direct code generation.
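The Planner/Coder/Critic loop this abstract describes can be approximated with plain functions. The agent functions, the `render_text` target, and the length-based "auto-fix" heuristic below are assumptions made for illustration, not the paper's actual implementation:

```python
# Hedged sketch of a Planner -> Coder -> Critic loop over lecture text.
# render_text is a hypothetical rendering call, standing in for a real
# target such as a Manim scene method; the 40-character limit is an
# invented stand-in for visual feedback from a VLM critic.


def planner(lecture: str) -> list:
    # Structure the lecture into temporally ordered steps.
    return [f"Step {i + 1}: {part.strip()}" for i, part in enumerate(lecture.split(";"))]


def coder(step: str) -> str:
    # Convert one step into an executable-looking snippet.
    return f"render_text({step!r})"


def critic(step: str, code: str) -> str:
    # Simulated visual feedback: shorten on-screen text that would overflow.
    if len(step) > 40:
        return f"render_text({(step[:37] + '...')!r})"
    return code


program = [critic(s, coder(s)) for s in planner("define derivative; show limit example; recap rules")]
print(program)
```

In this toy form the Critic only post-edits strings; the abstract's actual Critic inspects rendered frames with a vision-language model and feeds layout fixes back to the Coder.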
arXiv Detail & Related papers (2025-10-01T17:56:48Z) - Instructional Agents: LLM Agents on Automated Course Material Generation for Teaching Faculties
We present Instructional Agents, a framework designed to automate end-to-end course material generation. The framework simulates role-based collaboration among educational agents to produce cohesive and pedagogically aligned content. It produces high-quality instructional materials while significantly reducing development time and human workload.
arXiv Detail & Related papers (2025-08-27T06:45:06Z) - Enabling Multi-Agent Systems as Learning Designers: Applying Learning Sciences to AI Instructional Design
This study shifts pedagogical expertise from the user's prompt to the LLM's internal architecture. We tested three systems for generating secondary Math and Science learning activities.
arXiv Detail & Related papers (2025-08-20T14:44:00Z) - Janus-Pro-R1: Advancing Collaborative Visual Comprehension and Generation via Reinforcement Learning
We propose to enable the collaborative co-evolution of visual comprehension and generation. We introduce a two-stage training approach: supervised fine-tuning equips the MLLM with the foundational ability to generate genuine CoT. We unlock the "Aha moment" in visual generation, advancing MLLMs from text-to-image tasks to unified image generation.
arXiv Detail & Related papers (2025-06-02T09:39:28Z) - GenDoP: Auto-regressive Camera Trajectory Generation as a Director of Photography
We introduce an auto-regressive model inspired by the expertise of Directors of Photography to generate artistic and expressive camera trajectories. Thanks to the comprehensive and diverse database, we train an auto-regressive, decoder-only Transformer for high-quality, context-aware camera movement generation. Experiments demonstrate that, compared to existing methods, GenDoP offers better controllability, finer-grained trajectory adjustments, and higher motion stability.
arXiv Detail & Related papers (2025-04-09T17:56:01Z) - ARCHED: A Human-Centered Framework for Transparent, Responsible, and Collaborative AI-Assisted Instructional Design
ARCHED is a framework that ensures human educators remain central in the design process while leveraging AI capabilities. The framework integrates specialized AI agents: one generating diverse pedagogical options and another evaluating alignment with learning objectives. Empirical evaluations demonstrate that ARCHED enhances instructional design quality while preserving educator oversight, marking a step forward in responsible AI integration in education.
arXiv Detail & Related papers (2025-03-11T22:19:46Z) - MathTutorBench: A Benchmark for Measuring Open-ended Pedagogical Capabilities of LLM Tutors
We present MathTutorBench, an open-source benchmark for holistic tutoring model evaluation. MathTutorBench contains datasets and metrics that broadly cover tutor abilities as defined by learning-sciences research in dialog-based teaching. We evaluate a wide set of closed- and open-weight models and find that subject expertise, indicated by solving ability, does not immediately translate to good teaching.
arXiv Detail & Related papers (2025-02-26T08:43:47Z)