Bringing Image Scene Structure to Video via Frame-Clip Consistency of
Object Tokens
- URL: http://arxiv.org/abs/2206.06346v2
- Date: Wed, 15 Jun 2022 16:22:36 GMT
- Title: Bringing Image Scene Structure to Video via Frame-Clip Consistency of
Object Tokens
- Authors: Elad Ben-Avraham, Roei Herzig, Karttikeya Mangalam, Amir Bar, Anna
Rohrbach, Leonid Karlinsky, Trevor Darrell, Amir Globerson
- Abstract summary: StructureViT shows how leveraging the structure of a small number of images, available only during training, can improve a video model.
SViT shows strong performance improvements on multiple video understanding tasks and datasets.
- Score: 93.98605636451806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent action recognition models have achieved impressive results by
integrating objects, their locations and interactions. However, obtaining dense
structured annotations for each frame is tedious and time-consuming, making
these methods expensive to train and less scalable. At the same time, if a
small set of annotated images is available, either within or outside the domain
of interest, how could we leverage these for a video downstream task? We
propose a learning framework StructureViT (SViT for short), which demonstrates
how utilizing the structure of a small number of images only available during
training can improve a video model. SViT relies on two key insights. First, as
both images and videos contain structured information, we enrich a transformer
model with a set of \emph{object tokens} that can be used across images and
videos. Second, the scene representations of individual frames in video should
"align" with those of still images. This is achieved via a \emph{Frame-Clip
Consistency} loss, which ensures the flow of structured information between
images and videos. We explore a particular instantiation of scene structure,
namely a \emph{Hand-Object Graph}, consisting of hands and objects with their
locations as nodes, and physical relations of contact/no-contact as edges. SViT
shows strong performance improvements on multiple video understanding tasks and
datasets. Furthermore, it won in the Ego4D CVPR'22 Object State Localization
challenge. For code and pretrained models, visit the project page at
\url{https://eladb3.github.io/SViT/}
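The two key ideas in the abstract, a Hand-Object Graph of scene structure and a Frame-Clip Consistency loss aligning per-frame object tokens with the clip-level ones, can be sketched roughly as below. This is an illustrative assumption, not the authors' implementation: the names `HandObjectGraph` and `frame_clip_consistency_loss`, the token shapes, and the choice of a squared distance over L2-normalized tokens are all hypothetical.

```python
# Illustrative sketch only; names, shapes, and the exact loss are assumptions,
# not the SViT paper's actual implementation.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class HandObjectGraph:
    """Scene structure as described in the abstract: hands and objects
    (with their locations) as nodes, contact/no-contact as edges."""
    # Each node: (label, (x1, y1, x2, y2)), label being "hand" or "object".
    nodes: list = field(default_factory=list)
    # Each edge: (node_i, node_j, in_contact: bool).
    edges: list = field(default_factory=list)


def frame_clip_consistency_loss(frame_tokens: np.ndarray,
                                clip_tokens: np.ndarray) -> float:
    """Mean squared distance between L2-normalized object tokens computed
    per frame (image pathway) and the corresponding per-frame object tokens
    produced by the video (clip) pathway.

    frame_tokens, clip_tokens: shape (T, N, D) for T frames,
    N object tokens, and D embedding dimensions.
    """
    f = frame_tokens / np.linalg.norm(frame_tokens, axis=-1, keepdims=True)
    c = clip_tokens / np.linalg.norm(clip_tokens, axis=-1, keepdims=True)
    # Penalize per-token disagreement between the two pathways.
    return float(np.mean(np.sum((f - c) ** 2, axis=-1)))


# Toy usage: 8 frames, 6 object tokens, 256-dim embeddings.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 6, 256))
loss = frame_clip_consistency_loss(tokens, tokens)  # identical tokens -> 0
```

In this reading, minimizing the loss encourages the clip model's per-frame scene representations to "align" with those of still images, letting structure annotated only on images flow into the video model.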
Related papers
- Multi-entity Video Transformers for Fine-Grained Video Representation
Learning [36.31020249963468]
We re-examine the design of transformer architectures for video representation learning.
A salient aspect of our self-supervised method is the improved integration of spatial information in the temporal pipeline.
Our Multi-entity Video Transformer (MV-Former) architecture achieves state-of-the-art results on multiple fine-grained video benchmarks.
arXiv Detail & Related papers (2023-11-17T21:23:12Z)
- UnLoc: A Unified Framework for Video Localization Tasks [82.59118972890262]
UnLoc is a new approach for temporal localization in untrimmed videos.
It uses pretrained image and text towers, and feeds tokens to a video-text fusion model.
We achieve state-of-the-art results on all three localization tasks with a unified approach.
arXiv Detail & Related papers (2023-08-21T22:15:20Z)
- Fine-tuned CLIP Models are Efficient Video Learners [54.96069171726668]
Large-scale multi-modal training with image-text pairs imparts strong generalization to the CLIP model.
Video Fine-tuned CLIP (ViFi-CLIP) baseline is generally sufficient to bridge the domain gap from images to videos.
arXiv Detail & Related papers (2022-12-06T18:59:58Z)
- Structured Video Tokens @ Ego4D PNR Temporal Localization Challenge 2022 [93.98605636451806]
This report describes the SViT approach for the Ego4D Point of No Return (PNR) Temporal Localization Challenge.
We propose a learning framework which demonstrates how utilizing the structure of a small number of images only available during training can improve a video model.
SViT obtains strong performance on the challenge test set with 0.656 absolute temporal localization error.
arXiv Detail & Related papers (2022-06-15T17:36:38Z)
- HODOR: High-level Object Descriptors for Object Re-segmentation in Video
Learned from Static Images [123.65233334380251]
We propose HODOR, a novel method that effectively leverages annotated static images to understand object appearance and scene context.
As a result, HODOR achieves state-of-the-art performance on the DAVIS and YouTube-VOS benchmarks.
Without any architectural modification, HODOR can also learn from video context around single annotated video frames.
arXiv Detail & Related papers (2021-12-16T18:59:53Z)
- Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval [80.7397409377659]
We propose an end-to-end trainable model that is designed to take advantage of both large-scale image and video captioning datasets.
Our model is flexible and can be trained on both image and video text datasets, either independently or in conjunction.
We show that this approach yields state-of-the-art results on standard downstream video-retrieval benchmarks.
arXiv Detail & Related papers (2021-04-01T17:48:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.