STAC: Leveraging Spatio-Temporal Data Associations For Efficient
Cross-Camera Streaming and Analytics
- URL: http://arxiv.org/abs/2401.15288v1
- Date: Sat, 27 Jan 2024 04:02:52 GMT
- Title: STAC: Leveraging Spatio-Temporal Data Associations For Efficient
Cross-Camera Streaming and Analytics
- Authors: Volodymyr Vakhniuk, Ayush Sarkar, Ragini Gupta
- Abstract summary: We propose an efficient cross-camera surveillance system that provides real-time analytics and inference under constrained network environments.
We integrate STAC with frame filtering and state-of-the-art compression for streaming.
We evaluate the performance of STAC on the NVIDIA AICity Challenge 2023 dataset to measure reid accuracy and inference rate.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an efficient cross-camera surveillance system called STAC, which
leverages spatio-temporal associations between multiple cameras to provide
real-time analytics and inference under constrained network environments. STAC
is built using the proposed omni-scale feature learning people reidentification
(reid) algorithm that allows accurate detection, tracking and re-identification
of people across cameras using the spatio-temporal characteristics of video
frames. We integrate STAC with frame filtering and a state-of-the-art
streaming compression technique (namely, the FFmpeg libx264 codec) to remove redundant
information from cross-camera frames. This helps in optimizing the cost of
video transmission as well as compute/processing, while maintaining high
accuracy for real-time query inference. The introduction of AICity Challenge
2023 Data [1] by NVIDIA has allowed exploration of systems utilizing
multi-camera people tracking algorithms. We evaluate the performance of STAC
using this dataset to measure the accuracy metrics and inference rate for reid.
Additionally, we quantify the reduction in video streams achieved through frame
filtering and compression using FFmpeg compared to the raw camera streams. For
completeness, we make available our repository to reproduce the results,
available at https://github.com/VolodymyrVakhniuk/CS444_Final_Project.
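As a rough sketch of the streaming-optimization step the abstract describes, the Python snippet below drops near-duplicate frames with FFmpeg's scene-change select filter, re-encodes the stream with the libx264 codec, and matches reid embeddings across cameras by cosine similarity. The paper names frame filtering and the libx264 codec, but the scene threshold, CRF/preset values, file names, and the matching helper are illustrative assumptions, not settings from the paper.

```python
import subprocess
import numpy as np

def filter_and_compress(src: str, dst: str,
                        scene_thresh: float = 0.2, crf: int = 28) -> None:
    """Drop redundant frames, then compress with libx264 (settings assumed)."""
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            # Keep only frames whose scene-change score exceeds the threshold,
            # i.e. frames that differ enough from the previously kept frame.
            "-vf", f"select='gt(scene,{scene_thresh})'",
            "-vsync", "vfr",       # drop timestamps of the removed frames
            "-c:v", "libx264",     # codec named in the abstract
            "-crf", str(crf),      # quality/size trade-off (assumed value)
            "-preset", "veryfast",
            dst,
        ],
        check=True,
    )

def match_identity(query: np.ndarray, gallery: np.ndarray) -> int:
    """Return the gallery index whose embedding is closest (cosine) to the query.

    Stand-in for cross-camera reid matching; in STAC, the omni-scale reid
    network would produce these embeddings.
    """
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return int(np.argmax(g @ q))
```

In a deployment along these lines, each camera would ship its filtered, compressed stream to the analytics server, which extracts embeddings and associates identities across views.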
Related papers
- Accelerated Event-Based Feature Detection and Compression for
Surveillance Video Systems [1.5390526524075634]
We propose a novel system which conveys temporal redundancy within a sparse decompressed representation.
We leverage a video representation framework called ADDER to transcode framed videos to sparse, asynchronous intensity samples.
Our work paves the way for upcoming neuromorphic sensors and is amenable to future applications with spiking neural networks.
arXiv Detail & Related papers (2023-12-13T15:30:29Z)
- Learn to Compress (LtC): Efficient Learning-based Streaming Video Analytics [3.2872586139884623]
LtC is a collaborative framework between the video source and the analytics server that efficiently learns to reduce the video streams within an analytics pipeline.
LtC uses 28-35% less bandwidth and has up to 45% shorter response delay compared to recently published state-of-the-art streaming frameworks.
arXiv Detail & Related papers (2023-07-22T21:36:03Z)
- Spatiotemporal Attention-based Semantic Compression for Real-time Video Recognition [117.98023585449808]
We propose a spatiotemporal attention-based autoencoder (STAE) architecture to evaluate the importance of frames and of pixels within each frame.
We develop a lightweight decoder that leverages a combined 3D-2D CNN to reconstruct missing information.
Experimental results show that ViT_STAE can compress the HMDB51 video dataset by 104x with only 5% accuracy loss.
arXiv Detail & Related papers (2023-05-22T07:47:27Z)
- GPU-accelerated SIFT-aided source identification of stabilized videos [63.084540168532065]
We exploit the parallelization capabilities of Graphics Processing Units (GPUs) in the framework of stabilized-frame inversion.
We propose to exploit SIFT features to estimate the camera momentum and to identify less-stabilized temporal segments.
Experiments confirm the effectiveness of the proposed approach in reducing the required computational time and improving the source identification accuracy.
arXiv Detail & Related papers (2022-07-29T07:01:31Z)
- FrameHopper: Selective Processing of Video Frames in Detection-driven Real-Time Video Analytics [2.5119455331413376]
Detection-driven real-time video analytics require continuous detection of objects contained in the video frames.
Running these detectors on every frame on resource-constrained edge devices is computationally intensive.
We propose an offline reinforcement learning (RL)-based algorithm to determine how many subsequent frames can safely be skipped after each processed frame (see the sketch after this list).
arXiv Detail & Related papers (2022-03-22T07:05:57Z)
- Implicit Motion Handling for Video Camouflaged Object Detection [60.98467179649398]
We propose a new video camouflaged object detection (VCOD) framework.
It can exploit both short-term and long-term temporal consistency to detect camouflaged objects from video frames.
arXiv Detail & Related papers (2022-03-14T17:55:41Z)
- Personal Privacy Protection via Irrelevant Faces Tracking and Pixelation in Video Live Streaming [61.145467627057194]
We develop a new method called Face Pixelation in Video Live Streaming (FPVLS) to provide automatic personal privacy filtering.
For fast and accurate pixelation of irrelevant people's faces, FPVLS is organized in a frame-to-video structure of two core stages.
On the video live-streaming dataset we collected, FPVLS achieves satisfying accuracy and real-time efficiency while containing the over-pixelation problem.
arXiv Detail & Related papers (2021-01-04T16:18:26Z)
- Temporal Context Aggregation for Video Retrieval with Contrastive Learning [81.12514007044456]
We propose TCA, a video representation learning framework that incorporates long-range temporal information between frame-level features.
The proposed method shows a significant performance advantage (17% mAP on FIVR-200K) over state-of-the-art methods with video-level features.
arXiv Detail & Related papers (2020-08-04T05:24:20Z)
- Single Shot Video Object Detector [215.06904478667337]
Single Shot Video Object Detector (SSVD) is a new architecture that integrates feature aggregation into a one-stage detector for object detection in videos.
For $448 \times 448$ input, SSVD achieves 79.2% mAP on the ImageNet VID dataset.
arXiv Detail & Related papers (2020-07-07T15:36:26Z)
- CONVINCE: Collaborative Cross-Camera Video Analytics at the Edge [1.5469452301122173]
This paper introduces CONVINCE, a new approach that treats cameras as a collective entity and enables a collaborative video analytics pipeline among them.
Our results demonstrate that CONVINCE achieves an object identification accuracy of $\sim$91% while transmitting only $\sim$25% of all the recorded frames.
arXiv Detail & Related papers (2020-02-05T23:55:45Z)
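The FrameHopper entry above mentions choosing skip-lengths with offline RL; the sketch below shows one way a greedy policy over a pre-trained Q-table could pick a skip-length. The state encoding, action set, and Q-table here are invented for illustration and are not the paper's actual design.

```python
import numpy as np

# Hypothetical design: states discretize how much the detected object count
# changed since the last processed frame; actions are candidate skip-lengths.
N_STATES = 5
SKIP_CHOICES = [0, 1, 2, 4, 8]

# Placeholder for a Q-table that offline RL training would have produced.
q_table = np.zeros((N_STATES, len(SKIP_CHOICES)))

def skip_length(prev_count: int, curr_count: int) -> int:
    """Greedily pick how many upcoming frames to skip before detecting again."""
    state = min(abs(curr_count - prev_count), N_STATES - 1)
    return SKIP_CHOICES[int(np.argmax(q_table[state]))]
```

With the all-zero placeholder table this policy always returns 0 (skip nothing); a trained table would favor longer skips when consecutive detections barely change.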