Moving Object Based Collision-Free Video Synopsis
- URL: http://arxiv.org/abs/2401.02419v1
- Date: Sun, 17 Sep 2023 16:49:42 GMT
- Title: Moving Object Based Collision-Free Video Synopsis
- Authors: Anton Jeran Ratnarajah, Sahani Goonetilleke, Dumindu Tissera, Kapilan
Balagopalan, Ranga Rodrigo
- Abstract summary: Video synopsis generates a shorter video by exploiting the spatial and temporal redundancies.
We propose a real-time algorithm that incrementally stitches each frame of the synopsis.
Experiments with six common test videos, indoors and outdoors, show that the proposed video synopsis algorithm produces better frame reduction rates than existing approaches.
- Score: 1.55172825097051
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Video synopsis, summarizing a video to generate a shorter video by exploiting
the spatial and temporal redundancies, is important for surveillance and
archiving. Existing trajectory-based video synopsis algorithms are unable to
work in real time because of the complexity arising from the number of object
tubes that must be included in the complex energy-minimization algorithm. We
propose a real-time algorithm that incrementally stitches each frame of the
synopsis by extracting object frames from a user-specified number of tubes in
the buffer, in contrast to global energy-minimization-based systems. This also
gives the user the flexibility to set the maximum number of objects in the
synopsis video according to his or her tracking ability, and it creates
collision-free summarized videos that are visually pleasing. Experiments with
six common test videos, indoors and outdoors with
many moving objects, show that the proposed video synopsis algorithm produces
better frame reduction rates than existing approaches.
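The incremental stitching idea described in the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: tubes are reduced to sequences of bounding boxes, and at each synopsis frame up to a user-set maximum of non-colliding tubes from the buffer are advanced greedily.

```python
from collections import deque

class Tube:
    """A hypothetical object tube: an ordered sequence of bounding boxes
    (x, y, w, h), one per frame in which the object appears."""
    def __init__(self, boxes):
        self.frames = deque(boxes)

def boxes_overlap(a, b):
    # Axis-aligned rectangle intersection test.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def stitch_synopsis(tube_buffer, max_objects):
    """Greedily build synopsis frames: at each step, advance up to
    `max_objects` tubes whose current boxes do not collide with any
    box already placed in this frame. Colliding tubes simply wait,
    so the output is collision-free by construction."""
    synopsis = []
    while any(t.frames for t in tube_buffer):
        placed = []
        for tube in tube_buffer:
            if not tube.frames or len(placed) >= max_objects:
                continue
            box = tube.frames[0]
            if all(not boxes_overlap(box, p) for p in placed):
                placed.append(box)
                tube.frames.popleft()  # consume this tube's frame
        synopsis.append(placed)
    return synopsis
```

For example, two tubes that never overlap spatially are merged into a single synopsis frame, while overlapping tubes are serialized. Progress is guaranteed because the first tube with remaining frames always places its box into the empty frame.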
Related papers
- A Low-Computational Video Synopsis Framework with a Standard Dataset [0.0]
Video synopsis is an efficient method for condensing surveillance videos.
The lack of a standard dataset for the video synopsis task hinders the comparison of different video synopsis models.
This paper introduces a video synopsis model, called FGS, with low computational cost.
arXiv Detail & Related papers (2024-09-08T22:08:36Z) - Rethinking Image-to-Video Adaptation: An Object-centric Perspective [61.833533295978484]
We propose a novel and efficient image-to-video adaptation strategy from the object-centric perspective.
Inspired by human perception, we integrate a proxy task of object discovery into image-to-video transfer learning.
arXiv Detail & Related papers (2024-07-09T13:58:10Z) - Spatio-temporal Prompting Network for Robust Video Feature Extraction [74.54597668310707]
Frame quality deterioration is one of the main challenges in the field of video understanding.
Recent approaches exploit transformer-based integration modules to obtain spatio-temporal information.
We present a neat and unified framework called Spatio-Temporal Prompting Network (STPN)
It can efficiently extract video features by adjusting the input features in the network backbone.
arXiv Detail & Related papers (2024-02-04T17:52:04Z) - DynPoint: Dynamic Neural Point For View Synthesis [45.44096876841621]
We propose DynPoint, an algorithm designed to facilitate the rapid synthesis of novel views for unconstrained monocular videos.
DynPoint concentrates on predicting the explicit 3D correspondence between neighboring frames to realize information aggregation.
Our method exhibits strong robustness in handling long-duration videos without learning a canonical representation of video content.
arXiv Detail & Related papers (2023-10-29T12:55:53Z) - TL;DW? Summarizing Instructional Videos with Task Relevance &
Cross-Modal Saliency [133.75876535332003]
We focus on summarizing instructional videos, an under-explored area of video summarization.
Existing video summarization datasets rely on manual frame-level annotations.
We propose an instructional video summarization network that combines a context-aware temporal video encoder and a segment scoring transformer.
arXiv Detail & Related papers (2022-08-14T04:07:40Z) - Video Salient Object Detection via Contrastive Features and Attention
Modules [106.33219760012048]
We propose a network with attention modules to learn contrastive features for video salient object detection.
A co-attention formulation is utilized to combine the low-level and high-level features.
We show that the proposed method requires less computation, and performs favorably against the state-of-the-art approaches.
arXiv Detail & Related papers (2021-11-03T17:40:32Z) - A Generic Object Re-identification System for Short Videos [39.662850217144964]
A Temporal Information Fusion Network (TIFN) is proposed in the object detection module.
A Cross-Layer Pointwise Siamese Network (CPSN) is proposed in the tracking module to enhance the robustness of the appearance model.
Two challenge datasets containing real-world short videos are built for video object trajectory extraction and generic object re-identification.
arXiv Detail & Related papers (2021-02-10T05:45:09Z) - Generating Masks from Boxes by Mining Spatio-Temporal Consistencies in
Videos [159.02703673838639]
We introduce a method for generating segmentation masks from per-frame bounding box annotations in videos.
We use our resulting accurate masks for weakly supervised training of video object segmentation (VOS) networks.
The additional data provides substantially better generalization performance leading to state-of-the-art results in both the VOS and more challenging tracking domain.
arXiv Detail & Related papers (2021-01-06T18:56:24Z) - An Efficient Recurrent Adversarial Framework for Unsupervised Real-Time
Video Enhancement [132.60976158877608]
We propose an efficient adversarial video enhancement framework that learns directly from unpaired video examples.
In particular, our framework introduces new recurrent cells that consist of interleaved local and global modules for implicit integration of spatial and temporal information.
The proposed design allows our recurrent cells to efficiently propagate spatio-temporal information across frames and reduces the need for high-complexity networks.
arXiv Detail & Related papers (2020-12-24T00:03:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.