SOVC: Subject-Oriented Video Captioning
- URL: http://arxiv.org/abs/2312.13330v2
- Date: Mon, 9 Sep 2024 10:42:58 GMT
- Title: SOVC: Subject-Oriented Video Captioning
- Authors: Chang Teng, Yunchuan Ma, Guorong Li, Yuankai Qi, Laiyun Qing, Qingming Huang
- Abstract summary: We propose a new video captioning task, Subject-Oriented Video Captioning (SOVC), which aims to allow users to specify the describing target via a bounding box.
To support this task, we construct two subject-oriented video captioning datasets based on two widely used video captioning datasets.
- Score: 59.04029220586337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Describing video content according to users' needs is a long-held goal. Although existing video captioning methods have made significant progress, the generated captions may not focus on the entity that users are particularly interested in. To address this problem, we propose a new video captioning task, Subject-Oriented Video Captioning (SOVC), which aims to allow users to specify the describing target via a bounding box. To support this task, we construct two subject-oriented video captioning datasets based on two widely used video captioning datasets: MSVD and MSRVTT, by annotating subjects in each video for each caption. These datasets pave the way for describing users' interested targets. To tackle this task, we introduce a method tailored to this task, named SOVCNet. It consists of two key components: a subject-oriented sampling module that samples frames related to the subject to minimize irrelevant information; and a subject-oriented encoding module that utilizes the subject areas as hard prompts and integrates learnable soft prompts, enhancing the model's focus on the subject's activities and facilitating adaptation to the downstream generation task. Extensive experimental results demonstrate the effectiveness of our method on this new task.
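As a rough illustration of the two components described in the abstract, below is a minimal PyTorch sketch of subject-oriented frame sampling and prompt-based encoding. The module names, feature dimensions, linear scoring head, and top-k selection are illustrative assumptions, not the authors' implementation; SOVCNet's actual architecture is specified in the paper.

```python
import torch
import torch.nn as nn


class SubjectOrientedSampler(nn.Module):
    """Toy stand-in for subject-oriented sampling: score each frame against the
    subject-region feature and keep the top-k most relevant frames."""

    def __init__(self, feat_dim: int = 512, num_keep: int = 8):
        super().__init__()
        self.num_keep = num_keep
        self.score = nn.Linear(feat_dim * 2, 1)  # hypothetical joint frame/subject scoring head

    def forward(self, frame_feats: torch.Tensor, subject_feat: torch.Tensor):
        # frame_feats: (T, D) per-frame features; subject_feat: (D,) feature of the boxed subject
        T = frame_feats.size(0)
        joint = torch.cat([frame_feats, subject_feat.expand(T, -1)], dim=-1)
        scores = self.score(joint).squeeze(-1)                       # (T,)
        keep = scores.topk(min(self.num_keep, T)).indices.sort().values  # keep temporal order
        return frame_feats[keep], keep


class SubjectPromptEncoder(nn.Module):
    """Toy stand-in for subject-oriented encoding: prepend the subject-region feature
    (hard prompt) and learnable soft prompts to the sampled frame features."""

    def __init__(self, feat_dim: int = 512, num_soft: int = 4):
        super().__init__()
        self.soft_prompts = nn.Parameter(torch.randn(num_soft, feat_dim) * 0.02)

    def forward(self, sampled_frames: torch.Tensor, subject_feat: torch.Tensor):
        hard_prompt = subject_feat.unsqueeze(0)                      # (1, D)
        return torch.cat([self.soft_prompts, hard_prompt, sampled_frames], dim=0)


if __name__ == "__main__":
    frames = torch.randn(32, 512)   # e.g. features of 32 frames (placeholder values)
    subject = torch.randn(512)      # feature of the user-specified subject region
    kept, idx = SubjectOrientedSampler()(frames, subject)
    tokens = SubjectPromptEncoder()(kept, subject)                   # sequence for a caption decoder
    print(tokens.shape)             # (num_soft + 1 + num_keep, 512)
```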
Related papers
- Grounded Video Caption Generation [74.23767687855279]
We propose a new task, dataset and model for grounded video caption generation.
This task unifies captioning and object grounding in video, where the objects in the caption are grounded in the video via temporally consistent bounding boxes.
We introduce a new grounded video caption generation model, called VideoGround, and train the model on the new automatically annotated HowToGround dataset.
arXiv Detail & Related papers (2024-11-12T06:44:24Z)
- Video Enriched Retrieval Augmented Generation Using Aligned Video Captions [1.0878040851638]
"aligned visual captions" describe the visual and audio content of videos in a large corpus.
Visual captions can be adapted to specific use cases by prompting the original foundational model / captioner for particular visual details or fine tuning.
arXiv Detail & Related papers (2024-05-27T23:39:17Z)
- Learning text-to-video retrieval from image captioning [59.81537951811595]
We describe a protocol to study text-to-video retrieval training with unlabeled videos.
We assume (i) no access to labels for any videos, and (ii) access to labeled images in the form of text.
We show that automatically labeling video frames with image captioning allows text-to-video retrieval training.
arXiv Detail & Related papers (2024-04-26T15:56:08Z)
- Video Summarization: Towards Entity-Aware Captions [73.28063602552741]
We propose the task of summarizing news video directly to entity-aware captions.
We show that our approach generalizes to the existing news image captions dataset.
arXiv Detail & Related papers (2023-12-01T23:56:00Z)
- MeViS: A Large-scale Benchmark for Video Segmentation with Motion Expressions [93.35942025232943]
We propose a large-scale dataset called MeViS, which contains numerous motion expressions to indicate target objects in complex environments.
The goal of our benchmark is to provide a platform that enables the development of effective language-guided video segmentation algorithms.
arXiv Detail & Related papers (2023-08-16T17:58:34Z)
- Video Object of Interest Segmentation [27.225312139360963]
We present a new computer vision task named video object of interest segmentation (VOIS).
Given a video and a target image of interest, our objective is to simultaneously segment and track all objects in the video that are relevant to the target image.
Since no existing dataset is perfectly suitable for this new task, we specifically construct a large-scale dataset called LiveVideos.
arXiv Detail & Related papers (2022-12-06T10:21:10Z)
- Visual Subtitle Feature Enhanced Video Outline Generation [23.831220964676973]
We introduce a novel video understanding task, namely video outline generation (VOG).
To learn and evaluate VOG, we annotate a 10k+ dataset, called DuVOG.
We propose a Visual Subtitle feature Enhanced video outline generation model (VSENet).
arXiv Detail & Related papers (2022-08-24T05:26:26Z)
- O2NA: An Object-Oriented Non-Autoregressive Approach for Controllable Video Captioning [41.14313691818424]
We propose an Object-Oriented Non-Autoregressive approach (O2NA) for video captioning.
O2NA performs caption generation in three steps: 1) identify the focused objects and predict their locations in the target caption; 2) generate the related attribute words and relation words of these focused objects to form a draft caption; and 3) combine video information to refine the draft caption to a fluent final caption.
Experiments on two benchmark datasets, MSR-VTT and MSVD, demonstrate the effectiveness of O2NA.
arXiv Detail & Related papers (2021-08-05T04:17:20Z)
- Fine-grained Iterative Attention Network for Temporal Language Localization in Videos [63.94898634140878]
Temporal language localization in videos aims to ground one video segment in an untrimmed video based on a given sentence query.
We propose a Fine-grained Iterative Attention Network (FIAN) that consists of an iterative attention module for bilateral query-video information extraction.
We evaluate the proposed method on three challenging public benchmarks: ActivityNet Captions, TACoS, and Charades-STA.
arXiv Detail & Related papers (2020-08-06T04:09:03Z)