Designing a Web Application for Simple and Collaborative Video
Annotation That Meets Teaching Routines and Educational Requirements
- URL: http://arxiv.org/abs/2105.04022v1
- Date: Sun, 9 May 2021 21:02:19 GMT
- Title: Designing a Web Application for Simple and Collaborative Video
Annotation That Meets Teaching Routines and Educational Requirements
- Authors: Daniel Klug, Elke Schlote
- Abstract summary: We develop TRAVIS GO, a web application for simple and collaborative video annotation.
TRAVIS GO allows for quick and easy use within established teaching settings.
Key didactic features include tagging and commenting on posts, sharing and exporting projects, and working in live collaboration.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video annotation and analysis is an important activity for teaching with and
about audiovisual media artifacts because it helps students to learn how to
identify textual and formal connections in media products. But school teachers
lack adequate tools for video annotation and analysis in media education that
are easy to use, integrate into established teaching organization, and support
quick collaborative work. To address these challenges, we followed a
design-based research approach and conducted qualitative interviews with
teachers to develop TRAVIS GO, a web application for simple and collaborative
video annotation. TRAVIS GO allows for quick and easy use within established
teaching settings. The web application provides basic analytical features in an
adaptable work space. Key didactic features include tagging and commenting on
posts, sharing and exporting projects, and working in live collaboration.
Teachers can create assignments according to grade level, learning subject, and
class size. Our work contributes further insights for the CSCW community about
how to implement user demands into developing educational tools.
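
The paper does not detail TRAVIS GO's internal data model, but the features it describes (timecoded posts, tagging, commenting, shareable and exportable projects, assignment parameters) map naturally onto a small schema. Below is a minimal sketch of such a schema in Python; all class and field names are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field, asdict
from typing import List

# Hypothetical data model for a TRAVIS GO-style annotation project.
# All names and fields are illustrative assumptions, not the authors' schema.

@dataclass
class Comment:
    author: str
    text: str

@dataclass
class Annotation:
    """A post anchored to a time span in the video."""
    start_sec: float                                        # in-point of the segment
    end_sec: float                                          # out-point of the segment
    author: str
    text: str
    tags: List[str] = field(default_factory=list)           # didactic tagging
    comments: List[Comment] = field(default_factory=list)   # peer comments

@dataclass
class Project:
    """A shareable assignment built around one video."""
    title: str
    video_url: str
    grade_level: str                                        # assignment parameter
    subject: str                                            # assignment parameter
    annotations: List[Annotation] = field(default_factory=list)

    def export(self) -> dict:
        """Serialize the whole project for sharing or export (e.g., as JSON)."""
        return asdict(self)
```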
Related papers
- OnDiscuss: An Epistemic Network Analysis Learning Analytics Visualization Tool for Evaluating Asynchronous Online Discussions (arXiv, 2024-08-19)
OnDiscuss is a learning analytics visualization tool for instructors that utilizes text mining algorithms and Epistemic Network Analysis (ENA).
Text mining is used to generate an initial codebook for the instructor as well as to automatically code the data.
This tool allows instructors to edit their codebook and then dynamically view the resulting ENA networks for the entire class and individual students.
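The codebook step that OnDiscuss automates can be illustrated with a minimal sketch: treat each code as a set of keyword patterns and mark a post for every code it matches. The codes and patterns below are invented for illustration; the tool's actual text mining and the downstream ENA step are more sophisticated.

```python
import re

# Minimal sketch of keyword-based codebook coding, in the spirit of the
# codebook step OnDiscuss automates. Codes and patterns are illustrative.

codebook = {                       # code -> keyword patterns (invented)
    "evidence": [r"\bdata\b", r"\bstudy\b", r"\bsource\b"],
    "claim":    [r"\bI think\b", r"\bargue\b", r"\bbelieve\b"],
    "question": [r"\?\s*$", r"\bwhy\b", r"\bhow\b"],
}

def code_post(post: str) -> dict:
    """Return a binary vector: which codes appear in one discussion post."""
    return {
        code: int(any(re.search(p, post, re.IGNORECASE) for p in patterns))
        for code, patterns in codebook.items()
    }

print(code_post("Why does the study contradict the earlier source?"))
# {'evidence': 1, 'claim': 0, 'question': 1}
```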
- Tutorly: Turning Programming Videos Into Apprenticeship Learning Environments with LLMs (arXiv, 2024-05-21)
Our work transforms programming videos into one-on-one tutoring experiences using the cognitive apprenticeship framework.
Tutorly, developed as a JupyterLab extension, allows learners to set personalized learning goals.
- Large Language Model-Driven Classroom Flipping: Empowering Student-Centric Peer Questioning with Flipped Interaction (arXiv, 2023-11-14)
This paper investigates a pedagogical approach to classroom flipping based on flipped interaction with large language models.
Flipped interaction involves using language models to prioritize generating questions instead of answers to prompts.
We propose a workflow to integrate prompt engineering with clicker and JiTT quizzes by a poll-prompt-quiz routine and a quiz-prompt-discuss routine.
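The core move of flipped interaction is inverting the prompt: the model is asked to generate questions rather than answers. A minimal sketch of how the poll-prompt-quiz routine might be wired up is below; `llm` is a hypothetical stand-in for any chat-completion call, not an API from the paper.

```python
# Sketch of a flipped-interaction prompt inside a poll-prompt-quiz routine.
# `llm` is a hypothetical stand-in for any chat-completion API call.

def llm(system: str, user: str) -> str:
    raise NotImplementedError("wire up your LLM provider here")

FLIPPED_SYSTEM = (
    "You are a teaching assistant. Do NOT answer. Instead, generate "
    "exactly three quiz questions that probe understanding of the topic "
    "the instructor mentions, ordered from easy to hard."
)

def poll_prompt_quiz(topic: str, poll_misconception: str) -> str:
    """Turn a clicker-poll result into LLM-generated quiz questions."""
    user_msg = (
        f"Topic: {topic}\n"
        f"Most common wrong answer in today's poll: {poll_misconception}\n"
        "Target the questions at that misconception."
    )
    return llm(FLIPPED_SYSTEM, user_msg)
```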
- CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes (arXiv, 2023-08-14)
Large language models (LLMs) have emerged recently and show great promise for providing on-demand help at a large scale.
We introduce CodeHelp, a novel LLM-powered tool designed with guardrails to provide on-demand assistance to programming students without directly revealing solutions.
Our findings suggest that CodeHelp is well-received by students who especially value its availability and help with resolving errors, and that for instructors it is easy to deploy and complements, rather than replaces, the support that they provide to students.
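CodeHelp's central design constraint, providing help without revealing solutions, is what the guardrails enforce. One common pattern for this is a restrictive system prompt combined with an output post-check; the sketch below illustrates that general pattern and is not CodeHelp's actual implementation.

```python
import re

# Illustrative guardrail pattern: instruct the model not to emit full
# solutions, then post-check its output. A generic sketch, not CodeHelp's
# actual implementation.

GUARDRAIL_SYSTEM = (
    "You are a programming tutor. Explain concepts and point out likely "
    "causes of errors, but never write complete, copy-pasteable solution "
    "code for the student's assignment."
)

def looks_like_full_solution(answer: str, max_code_lines: int = 5) -> bool:
    """Heuristic post-check: flag answers dominated by fenced code blocks."""
    blocks = re.findall(r"`{3}.*?`{3}", answer, flags=re.DOTALL)
    code_lines = sum(b.count("\n") for b in blocks)
    return code_lines > max_code_lines

def guarded_reply(raw_answer: str) -> str:
    if looks_like_full_solution(raw_answer):
        return ("I can't share a full solution, but here is a hint: "
                "re-check the part of your code the error message points to.")
    return raw_answer
```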
- A large language model-assisted education tool to provide feedback on open-ended responses (arXiv, 2023-07-25)
We present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate feedback on open-ended responses.
Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement.
- A Video Is Worth 4096 Tokens: Verbalize Videos To Understand Them In Zero Shot (arXiv, 2023-05-16)
We propose verbalizing long videos to generate descriptions in natural language, then performing video-understanding tasks on the generated story as opposed to the original video.
Our method, despite being zero-shot, achieves significantly better results than supervised baselines for video understanding.
To alleviate the lack of story-understanding benchmarks, we publicly release the first dataset for a crucial task in computational social science: persuasion strategy identification.
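The verbalize-then-understand pipeline is straightforward to outline: sample frames, caption them into a story, then run a language-only model on the story. In the sketch below, `caption_frame` and `qa_model` are hypothetical stand-ins for any off-the-shelf captioner and text QA model.

```python
# Sketch of the verbalize-then-understand pipeline: turn a long video into
# a textual story, then run a language-only task on the story.
# `caption_frame` and `qa_model` are hypothetical stand-ins.

def caption_frame(frame) -> str:
    raise NotImplementedError("plug in an image captioning model")

def qa_model(context: str, question: str) -> str:
    raise NotImplementedError("plug in a text QA model or LLM call")

def verbalize(frames, stride: int = 30) -> str:
    """Sample every `stride`-th frame and stitch captions into a story."""
    captions = [caption_frame(f) for f in frames[::stride]]
    return " ".join(captions)

def video_question_answer(frames, question: str) -> str:
    story = verbalize(frames)          # video -> natural-language story
    return qa_model(story, question)   # the task runs on text only
```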
- MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks (arXiv, 2023-03-29)
We propose a decoder-only model for multimodal tasks, which is surprisingly effective for jointly learning disparate vision-language tasks.
We demonstrate that joint learning of these diverse objectives is simple, effective, and maximizes the weight-sharing of the model across these tasks.
Our model achieves the state of the art on image-text and text-image retrieval, video question answering and open-vocabulary detection tasks, outperforming much larger and more extensively trained foundational models.
- Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides (arXiv, 2022-08-17)
We test the capabilities of machine learning models in multimodal understanding of educational content.
Our dataset contains aligned slides and spoken language, for 180+ hours of video and 9000+ slides, with 10 lecturers from various subjects.
We introduce PolyViLT, a multimodal transformer trained with a multi-instance learning loss that is more effective than current approaches.
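The summary does not spell out PolyViLT's multi-instance learning loss. One standard formulation for such aligned-but-noisy pairs is a MIL-NCE-style contrastive loss, in which any sentence in a bag may match a slide; the PyTorch sketch below illustrates that general idea, not PolyViLT's exact objective.

```python
import torch
import torch.nn.functional as F

# Sketch of a multi-instance contrastive (MIL-NCE-style) loss: each slide
# embedding comes with a bag of K candidate sentence embeddings, any of
# which may be the true match. Illustrates the general idea of a
# multi-instance learning loss; not PolyViLT's exact objective.

def mil_nce_loss(slide_emb, text_bags, temperature=0.07):
    """slide_emb: (B, D); text_bags: (B, K, D)."""
    B, K, D = text_bags.shape
    slide = F.normalize(slide_emb, dim=-1)
    text = F.normalize(text_bags.reshape(B * K, D), dim=-1)
    sim = slide @ text.T / temperature   # (B, B*K) slide-vs-sentence scores
    sim = sim.reshape(B, B, K)           # sim[i, j, k]: slide i vs bag j, sentence k
    idx = torch.arange(B)
    pos = torch.logsumexp(sim[idx, idx], dim=-1)         # mass on own bag
    denom = torch.logsumexp(sim.reshape(B, -1), dim=-1)  # mass on all sentences
    return (denom - pos).mean()

# Example: loss = mil_nce_loss(torch.randn(8, 256), torch.randn(8, 4, 256))
```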
- Transcript to Video: Efficient Clip Sequencing from Texts (arXiv, 2021-07-25)
We present Transcript-to-Video -- a weakly-supervised framework that uses texts as input to automatically create video sequences from an extensive collection of shots.
Specifically, we propose a Content Retrieval Module and a Temporal Coherent Module to learn visual-language representations and model shot sequencing styles.
For fast inference, we introduce an efficient search strategy for real-time video clip sequencing.
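The retrieval step can be pictured as nearest-neighbor matching between transcript sentences and shots in a joint embedding space, which is what the Content Retrieval Module learns. The sketch below assumes such embeddings exist; `embed_text` and `embed_shot` are hypothetical stand-ins.

```python
import numpy as np

# Sketch of the retrieval step: match each transcript sentence to its best
# shot by cosine similarity in a joint embedding space. `embed_text` and
# `embed_shot` are hypothetical stand-ins for learned encoders.

def embed_text(sentence: str) -> np.ndarray:
    raise NotImplementedError("plug in a learned text encoder")

def embed_shot(shot) -> np.ndarray:
    raise NotImplementedError("plug in a learned shot encoder")

def sequence_clips(transcript, shot_library):
    """Greedily pick the nearest shot for each transcript sentence."""
    shots = np.stack([embed_shot(s) for s in shot_library])
    shots /= np.linalg.norm(shots, axis=1, keepdims=True)
    sequence = []
    for sentence in transcript:
        q = embed_text(sentence)
        q = q / np.linalg.norm(q)
        sequence.append(shot_library[int(np.argmax(shots @ q))])
    return sequence
```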
- Neural Multi-Task Learning for Teacher Question Detection in Online Classrooms (arXiv, 2020-05-16)
We build an end-to-end neural framework that automatically detects questions from teachers' audio recordings.
By incorporating multi-task learning techniques, we are able to strengthen the understanding of semantic relations among different types of questions.
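A typical multi-task setup for this problem shares one utterance encoder across several classification heads so that related tasks regularize each other. The sketch below shows that pattern with illustrative layer sizes and task choices; it is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Sketch of a multi-task setup: a shared utterance encoder with one head
# per task (question detection plus question-type classification). Layer
# sizes and task choices are illustrative, not the paper's configuration.

class MultiTaskQuestionDetector(nn.Module):
    def __init__(self, feat_dim=128, hidden=256, n_question_types=4):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.is_question = nn.Linear(hidden, 2)                    # task 1
        self.question_type = nn.Linear(hidden, n_question_types)   # task 2

    def forward(self, audio_feats):          # audio_feats: (B, T, feat_dim)
        _, h = self.encoder(audio_feats)     # h: (1, B, hidden)
        h = h.squeeze(0)
        return self.is_question(h), self.question_type(h)

# Joint training uses a weighted sum of per-task cross-entropies, e.g.:
# loss = ce(det_logits, det_labels) + 0.5 * ce(type_logits, type_labels)
```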
- Object Relational Graph with Teacher-Recommended Learning for Video Captioning (arXiv, 2020-02-26)
We propose a complete video captioning system including both a novel model and an effective training strategy.
Specifically, we propose an object relational graph (ORG) based encoder, which captures more detailed interaction features to enrich visual representation.
Meanwhile, we design a teacher-recommended learning (TRL) method to make full use of the successful external language model (ELM) to integrate the abundant linguistic knowledge into the caption model.
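The teacher-recommended learning idea can be sketched as knowledge distillation: alongside the ground-truth word, the caption model is also trained toward the soft word distribution suggested by the external language model. The weighting and temperature below are illustrative, not the paper's recipe.

```python
import torch
import torch.nn.functional as F

# Sketch of the teacher-recommended learning (TRL) idea: train the caption
# model on ground-truth words (hard targets) plus the external language
# model's soft word distribution. Weighting and temperature are illustrative.

def trl_loss(student_logits, gt_ids, elm_probs, alpha=0.5, tau=2.0):
    """
    student_logits: (B, T, V) caption model logits
    gt_ids:         (B, T)    ground-truth token ids
    elm_probs:      (B, T, V) soft targets recommended by the ELM
    """
    V = student_logits.size(-1)
    hard = F.cross_entropy(student_logits.reshape(-1, V), gt_ids.reshape(-1))
    soft = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1).reshape(-1, V),
        elm_probs.reshape(-1, V),
        reduction="batchmean",
    )
    return hard + alpha * soft
```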
This list is automatically generated from the titles and abstracts of the papers on this site.