Multi Sentence Description of Complex Manipulation Action Videos
- URL: http://arxiv.org/abs/2311.07285v1
- Date: Mon, 13 Nov 2023 12:27:06 GMT
- Title: Multi Sentence Description of Complex Manipulation Action Videos
- Authors: Fatemeh Ziaeetabar, Reza Safabakhsh, Saeedeh Momtazi, Minija Tamosiunaite and Florentin Wörgötter
- Abstract summary: Existing approaches for automatic video descriptions are mostly focused on single sentence generation at a fixed level of detail.
We propose one hybrid statistical and one end-to-end framework to address this problem.
- Score: 3.7486111821201287
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic video description requires the generation of natural language
statements about the actions, events, and objects in the video. An important
human trait, when we describe a video, is that we are able to do this with
variable levels of detail. Different from this, existing approaches for
automatic video descriptions are mostly focused on single sentence generation
at a fixed level of detail. Instead, here we address the video description of
manipulation actions, where different levels of detail are required to convey
information about the hierarchical structure of these actions, which is also
relevant for modern approaches to robot learning. We propose one hybrid
statistical and one end-to-end framework to address this problem. The hybrid
method needs much less training data because it statistically models
uncertainties within the video clips, whereas the end-to-end method, which is
more data-heavy, directly connects the visual encoder to the language decoder
without any intermediate (statistical) processing step. Both frameworks use
LSTM stacks to allow for different levels of description granularity, so that
videos can be described by simple single-sentence or complex multi-sentence
descriptions. In addition, quantitative results demonstrate that these methods
produce more realistic descriptions than competing approaches.
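As a rough, non-authoritative illustration of the end-to-end variant described in the abstract (a visual encoder connected directly to an LSTM-based language decoder, with stacked levels for coarse and fine descriptions), the sketch below shows one way such a model could be wired. All layer sizes, module names, and the specific two-level stacking are assumptions for illustration; the paper's actual architecture may differ.

```python
# Minimal PyTorch sketch (assumption: PyTorch; the paper does not specify a
# framework). Layer sizes, module names, and the two-level stacking scheme are
# illustrative guesses, not the authors' exact architecture.
import torch
import torch.nn as nn

class MultiLevelDescriber(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512, vocab_size=10000, emb=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.enc_proj = nn.Linear(feat_dim, hidden)  # project visual-encoder features
        # Coarse decoder: single-sentence description conditioned on visual context.
        self.coarse_lstm = nn.LSTM(emb + hidden, hidden, batch_first=True)
        # Fine decoder: multi-sentence description, additionally conditioned on
        # the coarse decoder's summary state (the "stacking" of levels).
        self.fine_lstm = nn.LSTM(emb + 2 * hidden, hidden, batch_first=True)
        self.coarse_out = nn.Linear(hidden, vocab_size)
        self.fine_out = nn.Linear(hidden, vocab_size)

    def forward(self, clip_feats, coarse_tokens, fine_tokens):
        # clip_feats: (B, T, feat_dim) frame/clip features from a pretrained visual encoder
        ctx = self.enc_proj(clip_feats).mean(dim=1)          # (B, hidden) pooled context
        c_emb = self.embed(coarse_tokens)                    # (B, Lc, emb)
        c_in = torch.cat(
            [c_emb, ctx.unsqueeze(1).expand(-1, c_emb.size(1), -1)], dim=-1)
        c_hid, _ = self.coarse_lstm(c_in)
        f_emb = self.embed(fine_tokens)                      # (B, Lf, emb)
        coarse_summary = c_hid[:, -1, :]                     # last coarse hidden state
        f_in = torch.cat(
            [f_emb,
             ctx.unsqueeze(1).expand(-1, f_emb.size(1), -1),
             coarse_summary.unsqueeze(1).expand(-1, f_emb.size(1), -1)], dim=-1)
        f_hid, _ = self.fine_lstm(f_in)
        # Token logits for the single-sentence and multi-sentence descriptions.
        return self.coarse_out(c_hid), self.fine_out(f_hid)
```

In such a setup, both heads would typically be trained with token-level cross-entropy against reference sentences, and either level can be decoded alone at inference time, which is what allows a simple single-sentence or a detailed multi-sentence output.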
Related papers
- Enhancing Multi-Modal Video Sentiment Classification Through Semi-Supervised Clustering [0.0]
We aim to improve video sentiment classification by focusing on the video itself, the accompanying text, and the acoustic features.
We are developing a method that utilizes clustering-based semi-supervised pre-training to extract meaningful representations from the data.
arXiv Detail & Related papers (2025-01-11T08:04:39Z)
- Whats in a Video: Factorized Autoregressive Decoding for Online Dense Video Captioning [71.94122309290537]
We propose an efficient, online approach to generate dense captions for videos.
Our model uses a novel autoregressive factorized decoding architecture.
Our approach shows excellent performance compared to both offline and online methods, and uses 20% less compute.
arXiv Detail & Related papers (2024-11-22T02:46:44Z)
- Storyboard guided Alignment for Fine-grained Video Action Recognition [32.02631248389487]
Fine-grained video action recognition can be conceptualized as a video-text matching problem.
We propose a multi-granularity framework based on two observations: (i) videos with different global semantics may share similar atomic actions or appearances, and (ii) atomic actions within a video can be momentary, slow, or even non-directly related to the global video semantics.
arXiv Detail & Related papers (2024-10-18T07:40:41Z)
- Artemis: Towards Referential Understanding in Complex Videos [61.756640718014154]
We present Artemis, an MLLM that pushes video-based referential understanding to a finer level.
Artemis receives a natural-language question with a bounding box in any video frame and describes the referred target in the entire video.
We train Artemis on the newly established VideoRef45K dataset with 45K video-QA pairs and design a computationally efficient, three-stage training procedure.
arXiv Detail & Related papers (2024-06-01T01:43:56Z)
- SPOT! Revisiting Video-Language Models for Event Understanding [31.49859545456809]
We introduce SPOT Prober to benchmark existing video-language models' capacity to distinguish event-level discrepancies.
We evaluate the existing video-language models with these positive and negative captions and find they fail to distinguish most of the manipulated events.
Based on our findings, we propose to plug in these manipulated event captions as hard negative samples and find them effective in enhancing models for event understanding.
arXiv Detail & Related papers (2023-11-21T18:43:07Z)
- HierVL: Learning Hierarchical Video-Language Embeddings [108.77600799637172]
HierVL is a novel hierarchical video-language embedding that simultaneously accounts for both long-term and short-term associations.
We introduce a hierarchical contrastive training objective that encourages text-visual alignment at both the clip level and the video level (a sketch of such a two-level objective appears after this list).
Our hierarchical scheme yields a clip representation that outperforms its single-level counterpart, as well as a long-term video representation that achieves state-of-the-art results.
arXiv Detail & Related papers (2023-01-05T21:53:19Z)
- Towards Fast Adaptation of Pretrained Contrastive Models for Multi-channel Video-Language Retrieval [70.30052749168013]
Multi-channel video-language retrieval requires models to understand information from different channels.
Contrastive multimodal models are shown to be highly effective at aligning entities in images/videos and text.
There is no clear way to quickly adapt these two lines of work to multi-channel video-language retrieval with limited data and resources.
arXiv Detail & Related papers (2022-06-05T01:43:52Z)
- All in One: Exploring Unified Video-Language Pre-training [44.22059872694995]
We introduce an end-to-end video-language model, namely the all-in-one Transformer, that embeds raw video and textual signals into joint representations.
The code and pretrained model have been released in https://github.com/showlab/all-in-one.
arXiv Detail & Related papers (2022-03-14T17:06:30Z)
- Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning [36.85533835408882]
This work presents a multimodal video generation framework that benefits from text and images provided jointly or separately.
We propose a new video token trained with self-learning and an improved mask-prediction algorithm for sampling video tokens.
Our framework can incorporate various visual modalities, such as segmentation masks, drawings, and partially occluded images.
arXiv Detail & Related papers (2022-03-04T21:09:13Z)
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts [111.23364631136339]
Video-and-language pre-training has shown promising improvements on various downstream tasks.
We propose Align and Prompt: an efficient and effective video-and-language pre-training framework with better cross-modal alignment.
Our code and pre-trained models will be released.
arXiv Detail & Related papers (2021-12-17T15:55:53Z)
- Spoken Moments: Learning Joint Audio-Visual Representations from Video Descriptions [75.77044856100349]
We present the Spoken Moments dataset of 500k spoken captions each attributed to a unique short video depicting a broad range of different events.
We show that our AMM approach consistently improves our results and that models trained on our Spoken Moments dataset generalize better than those trained on other video-caption datasets.
arXiv Detail & Related papers (2021-05-10T16:30:46Z)
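For the HierVL entry above (hierarchical contrastive training at both the clip level and the video level), the snippet below sketches what such a two-level objective can look like. It is only an illustration of the general idea; the pooling scheme, temperature, loss weighting, and function names are assumptions made for this sketch and are not taken from the HierVL paper.

```python
# Illustrative two-level contrastive objective (clip level + video level).
# Temperature, pooling, and loss weighting are assumptions for this sketch;
# clips from the same video are assumed to be contiguous in the batch.
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE between matched rows of a and b, each of shape (N, D)."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                      # (N, N) similarities
    targets = torch.arange(a.size(0), device=a.device)    # diagonal = positives
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def hierarchical_contrastive_loss(clip_emb, text_emb, clips_per_video=4,
                                  video_weight=1.0):
    # Clip level: align each clip with its own caption.
    clip_loss = info_nce(clip_emb, text_emb)
    # Video level: align mean-pooled clips with mean-pooled captions as a
    # simple stand-in for a long-term summary representation.
    n_videos = clip_emb.size(0) // clips_per_video
    vid_emb = clip_emb.reshape(n_videos, clips_per_video, -1).mean(dim=1)
    vid_txt = text_emb.reshape(n_videos, clips_per_video, -1).mean(dim=1)
    return clip_loss + video_weight * info_nce(vid_emb, vid_txt)
```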