Learning to Ground Instructional Articles in Videos through Narrations
- URL: http://arxiv.org/abs/2306.03802v1
- Date: Tue, 6 Jun 2023 15:45:53 GMT
- Title: Learning to Ground Instructional Articles in Videos through Narrations
- Authors: Effrosyni Mavroudi, Triantafyllos Afouras, Lorenzo Torresani
- Abstract summary: We present an approach for localizing steps of procedural activities in narrated how-to videos.
We source the step descriptions from a language knowledge base (wikiHow) containing instructional articles.
Our model learns to temporally ground the steps of procedural articles in how-to videos by matching three modalities.
- Score: 50.3463147014498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we present an approach for localizing steps of procedural
activities in narrated how-to videos. To deal with the scarcity of labeled data
at scale, we source the step descriptions from a language knowledge base
(wikiHow) containing instructional articles for a large variety of procedural
tasks. Without any form of manual supervision, our model learns to temporally
ground the steps of procedural articles in how-to videos by matching three
modalities: frames, narrations, and step descriptions. Specifically, our method
aligns steps to video by fusing information from two distinct pathways: i)
direct alignment of step descriptions to frames, ii) indirect alignment
obtained by composing steps-to-narrations with narrations-to-video
correspondences. Notably, our approach performs global temporal grounding of
all steps in an article at once by exploiting order information, and is trained
with step pseudo-labels which are iteratively refined and aggressively
filtered. In order to validate our model we introduce a new evaluation
benchmark -- HT-Step -- obtained by manually annotating a 124-hour subset of
HowTo100M (a test server is accessible at
https://eval.ai/web/challenges/challenge-page/2082) with steps sourced
from wikiHow articles. Experiments on this benchmark as well as zero-shot
evaluations on CrossTask demonstrate that our multi-modality alignment yields
dramatic gains over several baselines and prior works. Finally, we show that
our inner module for matching narration-to-video outperforms by a large margin
the state of the art on the HTM-Align narration-video alignment benchmark.
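As a rough illustration of the two-pathway fusion described above, the following is a minimal sketch, not the authors' released code: it assumes that steps, narrations, and frames have already been embedded into a shared space, forms the indirect pathway by composing a steps-to-narrations similarity matrix with a narrations-to-frames alignment matrix, and fuses the two pathways with a simple weighted sum. All array names and the fusion weight alpha are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_step_alignments(step_emb, narr_emb, frame_emb, narr_to_frame, alpha=0.5):
    """Fuse direct and indirect step-to-frame alignment scores.

    step_emb:      (S, D) embeddings of wikiHow step descriptions
    narr_emb:      (N, D) embeddings of ASR narration sentences
    frame_emb:     (T, D) embeddings of video frames
    narr_to_frame: (N, T) narration-to-frame alignment scores, e.g. from a
                   narration grounding module
    alpha:         illustrative weight balancing the two pathways
    Returns an (S, T) matrix of fused step-to-frame alignment scores.
    """
    # Direct pathway: compare step descriptions with frames.
    direct = step_emb @ frame_emb.T                          # (S, T)

    # Indirect pathway: steps -> narrations -> frames.
    step_to_narr = softmax(step_emb @ narr_emb.T, axis=-1)   # (S, N)
    indirect = step_to_narr @ narr_to_frame                  # (S, T)

    # Fuse the two pathways into one step-to-frame alignment matrix.
    return alpha * direct + (1.0 - alpha) * indirect


# Toy usage with random features.
rng = np.random.default_rng(0)
S, N, T, D = 5, 20, 100, 64
scores = fuse_step_alignments(rng.normal(size=(S, D)),
                              rng.normal(size=(N, D)),
                              rng.normal(size=(T, D)),
                              softmax(rng.normal(size=(N, T))))
print(scores.shape)  # (5, 100)
```

At inference time each step could then be grounded by, for example, taking the best-scoring temporal window of its row; the actual model additionally exploits the ordering of steps within an article and refines pseudo-labels during training, which this toy example omits.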
Related papers
- Learning to Localize Actions in Instructional Videos with LLM-Based Multi-Pathway Text-Video Alignment [53.12952107996463]
This work proposes a novel training framework for learning to localize temporal boundaries of procedure steps in training videos.
Motivated by the strong capabilities of Large Language Models (LLMs) in procedure understanding and text summarization, we first apply an LLM to filter out task-irrelevant information and summarize task-related procedure steps from narrations.
To further generate reliable pseudo-matching between the LLM-steps and the video for training, we propose the Multi-Pathway Text-Video Alignment (MPTVA) strategy.
arXiv Detail & Related papers (2024-09-22T18:40:55Z)
- Multi-Sentence Grounding for Long-term Instructional Video [63.27905419718045]
We aim to establish an automatic, scalable pipeline for denoising a large-scale instructional dataset.
We construct a high-quality video-text dataset with multiple descriptive steps supervision, named HowToStep.
arXiv Detail & Related papers (2023-12-21T17:28:09Z)
- StepFormer: Self-supervised Step Discovery and Localization in Instructional Videos [47.03252542488226]
We introduce StepFormer, a self-supervised model that discovers and localizes instruction steps in a video.
We train our system on a large dataset of instructional videos, using their automatically-generated subtitles as the only source of supervision.
Our model outperforms all previous unsupervised and weakly-supervised approaches on step detection and localization.
arXiv Detail & Related papers (2023-04-26T03:37:28Z)
- Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations [22.723309913388196]
We learn video representation that encodes both action steps and their temporal ordering, based on a large-scale dataset of web instructional videos and their narrations.
Our method jointly learns a video representation to encode individual step concepts, and a deep probabilistic model to capture both temporal dependencies and immense individual variations in the step ordering.
arXiv Detail & Related papers (2023-03-31T07:02:26Z)
- Learning and Verification of Task Structure in Instructional Videos [85.511888642497]
We introduce a new pre-trained video model, VideoTaskformer, focused on representing the semantics and structure of instructional videos.
Compared to prior work which learns step representations locally, our approach involves learning them globally.
We introduce two new benchmarks for detecting mistakes in instructional videos, to verify if there is an anomalous step and if steps are executed in the right order.
arXiv Detail & Related papers (2023-03-23T17:59:54Z)
- TL;DW? Summarizing Instructional Videos with Task Relevance & Cross-Modal Saliency [133.75876535332003]
We focus on summarizing instructional videos, an under-explored area of video summarization.
Existing video summarization datasets rely on manual frame-level annotations.
We propose an instructional video summarization network that combines a context-aware temporal video encoder and a segment scoring transformer.
arXiv Detail & Related papers (2022-08-14T04:07:40Z)
- Learning To Recognize Procedural Activities with Distant Supervision [96.58436002052466]
We consider the problem of classifying fine-grained, multi-step activities from long videos spanning up to several minutes.
Our method uses a language model to match noisy, automatically-transcribed speech from the video to step descriptions in the knowledge base (see the sketch after this list).
arXiv Detail & Related papers (2022-01-26T15:06:28Z)
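The last entry above, which builds distant supervision by matching transcribed speech to knowledge-base steps, can be pictured with a similarly compact sketch. This is a hypothetical illustration, not that paper's actual pipeline: it assumes L2-normalized sentence embeddings for narrations and step descriptions and simply assigns each narration to its nearest step whenever the cosine similarity clears a threshold.

```python
import numpy as np

def match_narrations_to_steps(narr_emb, step_emb, threshold=0.5):
    """Assign each narration to its most similar step description.

    narr_emb:  (N, D) L2-normalized embeddings of ASR narration sentences
    step_emb:  (K, D) L2-normalized embeddings of step descriptions
    threshold: illustrative minimum cosine similarity for accepting a match
    Returns an array of length N holding the matched step index per narration,
    or -1 when no step is similar enough.
    """
    sims = narr_emb @ step_emb.T     # (N, K) cosine similarities
    best = sims.argmax(axis=1)       # nearest step per narration
    best_sim = sims.max(axis=1)
    return np.where(best_sim >= threshold, best, -1)
```

Narrations that receive a step label can then serve as pseudo-annotated training segments for a video model, which is the core of the distant-supervision recipe.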