Video-Data Pipelines for Machine Learning Applications
- URL: http://arxiv.org/abs/2110.11407v1
- Date: Fri, 15 Oct 2021 20:28:56 GMT
- Title: Video-Data Pipelines for Machine Learning Applications
- Authors: Sohini Roychowdhury, James Y. Sato
- Abstract summary: The proposed framework can be scaled to additional video-sequence data sets for ML versioned deployments.
We analyze the performance of the proposed video-data pipeline for versioned deployment and monitoring of object detection algorithms.
- Score: 0.9594432031144714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data pipelines are an essential component for end-to-end solutions that take
machine learning algorithms to production. Engineering data pipelines for
video sequences poses several challenges, including isolating key-frames that are
of high image quality and represent significant variations in the scene. Manually
isolating such key-frames can require hours of sifting through video data. In this
work, we present a data pipeline framework that automates this frame-sifting
process by controlling the fraction of frames removed based
on image quality and content type. Additionally, the frames that are retained
can be automatically tagged per sequence, thereby simplifying the process of
automated data retrieval for future ML model deployments. We analyze the
performance of the proposed video-data pipeline for versioned deployment and
monitoring of object detection algorithms that are trained on outdoor
autonomous driving video sequences. The proposed video-data pipeline can retain
between 0.1% and 20% of all input frames that are representative of
high image quality and high variations in content. Frame selection,
automated scene tagging, and model verification can be completed in
under 30 seconds for the 22 video sequences analyzed in this work. Thus, the
proposed framework can be scaled to additional video-sequence data sets for
automating ML versioned deployments.
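The abstract describes three automated steps: scoring frames for image quality and content variation, retaining a configurable fraction (0.1-20%) of the highest-scoring frames, and tagging the retained frames per sequence. The paper's implementation is not reproduced here; the sketch below only illustrates the general idea, and the quality score (variance of the Laplacian) and content-change score (mean absolute frame difference) are illustrative stand-ins rather than the authors' actual criteria.

```python
# Minimal sketch (not the paper's implementation): score each frame for
# sharpness and for change relative to the previous frame, then keep only
# the top `keep_fraction` of frames, echoing the pipeline's 0.1-20% retention.
# Requires: pip install opencv-python numpy
import cv2
import numpy as np


def sharpness(gray: np.ndarray) -> float:
    """Variance of the Laplacian, a common proxy for image sharpness."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())


def content_change(gray: np.ndarray, prev_gray: np.ndarray) -> float:
    """Mean absolute difference between consecutive frames (scene-variation proxy)."""
    return float(np.mean(cv2.absdiff(gray, prev_gray)))


def select_key_frames(video_path: str, keep_fraction: float = 0.05) -> list:
    """Return indices of frames scoring highest on sharpness + content change."""
    cap = cv2.VideoCapture(video_path)
    scores = []
    prev_gray = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        change = content_change(gray, prev_gray) if prev_gray is not None else 0.0
        # In practice the two terms would be normalized before being combined.
        scores.append(sharpness(gray) + change)
        prev_gray = gray
    cap.release()
    n_keep = max(1, int(len(scores) * keep_fraction))
    return sorted(int(i) for i in np.argsort(scores)[-n_keep:])


if __name__ == "__main__":
    # "drive_sequence.mp4" is a placeholder input path.
    kept = select_key_frames("drive_sequence.mp4", keep_fraction=0.05)
    print(f"retained {len(kept)} key-frame indices")
```

A per-sequence tag (for example, a scene or weather label from a classifier) could then be attached to the retained frame indices to support the automated data retrieval that the abstract mentions.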
Related papers
- Video Instruction Tuning With Synthetic Data [84.64519990333406]
We create a high-quality synthetic dataset specifically for video instruction-following, namely LLaVA-Video-178K.
This dataset includes key tasks such as detailed captioning, open-ended question-answering (QA), and multiple-choice QA.
By training on this dataset, in combination with existing visual instruction tuning data, we introduce LLaVA-Video, a new video LMM.
arXiv Detail & Related papers (2024-10-03T17:36:49Z)
- xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations [120.52120919834988]
xGen-VideoSyn-1 is a text-to-video (T2V) generation model capable of producing realistic scenes from textual descriptions.
VidVAE compresses video data both spatially and temporally, significantly reducing the length of visual tokens.
The DiT model incorporates spatial and temporal self-attention layers, enabling robust generalization across different timeframes and aspect ratios.
arXiv Detail & Related papers (2024-08-22T17:55:22Z)
- LAVIB: A Large-scale Video Interpolation Benchmark [58.194606275650095]
LAVIB comprises a large collection of high-resolution videos sourced from the web through an automated pipeline.
Metrics are computed for each video's motion magnitudes, luminance conditions, frame sharpness, and contrast.
In total, LAVIB includes 283K clips from 17K ultra-HD videos, covering 77.6 hours.
arXiv Detail & Related papers (2024-06-14T06:44:01Z)
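LAVIB's per-video metrics (motion magnitude, luminance, sharpness, contrast) are the same kind of statistics a frame-filtering pipeline relies on. The snippet below is only a rough sketch of how such per-clip statistics might be computed; the metric definitions (mean intensity, Laplacian variance, intensity standard deviation, mean optical-flow magnitude) are assumptions, not the benchmark's own code.

```python
# Rough sketch of per-clip statistics of the kind LAVIB reports; the exact
# metric definitions here are assumptions, not the benchmark's own code.
import cv2
import numpy as np


def clip_statistics(frames: list) -> dict:
    """frames: list of BGR uint8 arrays belonging to one clip."""
    lum, sharp, contrast, motion = [], [], [], []
    prev = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        lum.append(float(gray.mean()))                              # average luminance
        sharp.append(float(cv2.Laplacian(gray, cv2.CV_64F).var()))  # sharpness proxy
        contrast.append(float(gray.std()))                          # RMS contrast
        if prev is not None:
            # Dense optical-flow magnitude as a motion-strength estimate.
            flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            motion.append(float(np.linalg.norm(flow, axis=2).mean()))
        prev = gray
    return {
        "luminance": float(np.mean(lum)),
        "sharpness": float(np.mean(sharp)),
        "contrast": float(np.mean(contrast)),
        "motion": float(np.mean(motion)) if motion else 0.0,
    }
```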
- Streaming Video Model [90.24390609039335]
We propose to unify video understanding tasks into one streaming video architecture, referred to as Streaming Vision Transformer (S-ViT).
S-ViT first produces frame-level features with a memory-enabled temporally-aware spatial encoder to serve frame-based video tasks.
The efficiency and efficacy of S-ViT are demonstrated by its state-of-the-art accuracy in sequence-based action recognition.
arXiv Detail & Related papers (2023-03-30T08:51:49Z)
- Flexible Diffusion Modeling of Long Videos [15.220686350342385]
We introduce a generative model that can, at test time, sample any subset of video frames conditioned on any other subset.
We demonstrate improved video modeling over prior work on a number of datasets and sample temporally coherent videos over 25 minutes in length.
We additionally release a new video modeling dataset and semantically meaningful metrics based on videos generated in the CARLA self-driving car simulator.
arXiv Detail & Related papers (2022-05-23T17:51:48Z)
- Semi-supervised and Deep learning Frameworks for Video Classification and Key-frame Identification [1.2335698325757494]
We present two semi-supervised approaches that automatically classify scenes for content and filter frames for scene understanding tasks.
The proposed framework can be scaled to additional video data streams for automated training of perception-driven systems.
arXiv Detail & Related papers (2022-03-25T05:45:18Z)
- End-to-End Video Instance Segmentation with Transformers [84.17794705045333]
Video instance segmentation (VIS) is the task that requires simultaneously classifying, segmenting and tracking object instances of interest in video.
Here, we propose a new video instance segmentation framework built upon Transformers, termed VisTR, which views the VIS task as a direct end-to-end parallel sequence decoding/prediction problem.
For the first time, we demonstrate a much simpler and faster video instance segmentation framework built upon Transformers, achieving competitive accuracy.
arXiv Detail & Related papers (2020-11-30T02:03:50Z)
- Temporal Context Aggregation for Video Retrieval with Contrastive Learning [81.12514007044456]
We propose TCA, a video representation learning framework that incorporates long-range temporal information between frame-level features.
The proposed method shows a significant performance advantage (17% mAP on FIVR-200K) over state-of-the-art methods with video-level features.
arXiv Detail & Related papers (2020-08-04T05:24:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.