A motion-based compression algorithm for resource-constrained video camera traps
- URL: http://arxiv.org/abs/2405.14419v3
- Date: Wed, 02 Oct 2024 00:07:34 GMT
- Title: A motion-based compression algorithm for resource-constrained video camera traps
- Authors: Malika Nisal Ratnayake, Lex Gallon, Adel N. Toosi, Alan Dorin
- Abstract summary: We introduce a new motion analysis-based video compression algorithm specifically designed for camera traps.
The algorithm identifies and stores only image regions depicting motion relevant to pollination monitoring.
Our experiments demonstrate the algorithm's capability to preserve critical information for insect behaviour analysis.
- Score: 4.349838917565205
- License:
- Abstract: Field-captured video facilitates detailed studies of spatio-temporal aspects of animal locomotion, decision-making and environmental interactions including predator-prey relationships and habitat utilisation. But even though data capture is cheap with mass-produced hardware, storage, processing and transmission overheads provide a hurdle to acquisition of high resolution video from field-situated edge computing devices. Efficient compression algorithms are therefore essential if monitoring is to be conducted on single-board computers in situations where such hurdles must be overcome. Animal motion tracking in the field has unique characteristics that necessitate the use of novel video compression techniques, which may be underexplored or unsuitable in other contexts. In this article, we therefore introduce a new motion analysis-based video compression algorithm specifically designed for camera traps. We implemented and tested this algorithm using a case study of insect-pollinator motion tracking on three popular edge computing platforms. The algorithm identifies and stores only image regions depicting motion relevant to pollination monitoring, reducing overall data size by an average of 87% across diverse test datasets. Our experiments demonstrate the algorithm's capability to preserve critical information for insect behaviour analysis through both manual observation and automatic analysis of the compressed footage. The method presented in this paper enhances the applicability of low-powered computer vision edge devices to remote, in situ animal motion monitoring, and improves the efficiency of playback during behavioural analyses. Our new software, EcoMotionZip, is available Open Access.
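To illustrate the idea described in the abstract (store only the image regions that depict motion and discard the static background), the sketch below shows a minimal frame-differencing approach in Python with OpenCV. This is not the EcoMotionZip implementation; the thresholds and the `extract_motion_regions` and `compress` helpers are hypothetical names and values chosen for illustration only.
```python
# Minimal sketch (not the EcoMotionZip algorithm): keep only image regions
# where frame-to-frame motion is detected, discarding static background.
import cv2

MIN_AREA = 100          # hypothetical: ignore motion blobs smaller than this (pixels)
DIFF_THRESHOLD = 25     # hypothetical: per-pixel intensity change treated as motion

def extract_motion_regions(prev_gray, gray):
    """Return bounding boxes of regions that changed between two grayscale frames."""
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)  # merge nearby motion pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= MIN_AREA]

def compress(video_path):
    """Yield (frame_index, bounding_box, cropped_patch) for moving regions only."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None
    index = 0
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        index += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in extract_motion_regions(prev_gray, gray):
            yield index, (x, y, w, h), frame[y:y + h, x:x + w]
        prev_gray = gray
    cap.release()
```
Storing only such cropped patches and their frame coordinates, rather than full frames, is the general mechanism by which a motion-based approach can achieve the kind of large data reduction reported in the abstract while preserving the regions needed for behaviour analysis.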
Related papers
- Practical Video Object Detection via Feature Selection and Aggregation [18.15061460125668]
Video object detection (VOD) must contend with high across-frame variation in object appearance and diverse deterioration in some frames.
Most contemporary aggregation methods are tailored to two-stage detectors and suffer from high computational costs.
This study presents a very simple yet potent feature selection and aggregation strategy, gaining significant accuracy at marginal computational expense.
arXiv Detail & Related papers (2024-07-29T02:12:11Z)
- Video Segmentation Learning Using Cascade Residual Convolutional Neural Network [0.0]
We propose a novel deep learning video segmentation approach that incorporates residual information into the foreground detection learning process.
Experiments conducted on the Change Detection 2014 dataset and on the private PetrobrasROUTES dataset from Petrobras support the effectiveness of the proposed approach.
arXiv Detail & Related papers (2022-12-20T16:56:54Z)
- Speeding Up Action Recognition Using Dynamic Accumulation of Residuals in Compressed Domain [2.062593640149623]
Temporal redundancy and the sheer size of raw videos are two of the most common problems for video processing algorithms.
This paper presents an approach that uses residual data available directly in compressed videos, obtained through a lightweight partial decoding procedure.
Applying neural networks exclusively to accumulated residuals in the compressed domain accelerates processing, while classification results remain highly competitive with raw-video approaches.
arXiv Detail & Related papers (2022-09-29T13:08:49Z)
- Differentiable Frequency-based Disentanglement for Aerial Video Action Recognition [56.91538445510214]
We present a learning algorithm for human activity recognition in videos.
Our approach is designed for UAV videos, which are mainly acquired from obliquely placed dynamic cameras.
We conduct extensive experiments on the UAV Human dataset and the NEC Drone dataset.
arXiv Detail & Related papers (2022-09-15T22:16:52Z)
- Implicit Motion Handling for Video Camouflaged Object Detection [60.98467179649398]
We propose a new video camouflaged object detection (VCOD) framework.
It can exploit both short-term and long-term temporal consistency to detect camouflaged objects from video frames.
arXiv Detail & Related papers (2022-03-14T17:55:41Z)
- Argus++: Robust Real-time Activity Detection for Unconstrained Video Streams with Overlapping Cube Proposals [85.76513755331318]
Argus++ is a robust real-time activity detection system for analyzing unconstrained video streams.
The overall system is optimized for real-time processing on standalone consumer-level hardware.
arXiv Detail & Related papers (2022-01-14T03:35:22Z)
- Video Salient Object Detection via Contrastive Features and Attention Modules [106.33219760012048]
We propose a network with attention modules to learn contrastive features for video salient object detection.
A co-attention formulation is utilized to combine the low-level and high-level features.
We show that the proposed method requires less computation, and performs favorably against the state-of-the-art approaches.
arXiv Detail & Related papers (2021-11-03T17:40:32Z)
- Render In-between: Motion Guided Video Synthesis for Action Interpolation [53.43607872972194]
We propose a motion-guided frame-upsampling framework that is capable of producing realistic human motion and appearance.
A novel motion model is trained to infer the non-linear skeletal motion between frames by leveraging a large-scale motion-capture dataset.
Our pipeline requires only low-frame-rate videos and unpaired human motion data; no high-frame-rate videos are needed for training.
arXiv Detail & Related papers (2021-11-01T15:32:51Z)
- rSVDdpd: A Robust Scalable Video Surveillance Background Modelling Algorithm [13.535770763481905]
We present a new video surveillance background modelling algorithm based on a new robust singular value decomposition technique rSVDdpd.
We also demonstrate the superiority of our proposed algorithm on a benchmark dataset and a new real-life video surveillance dataset in the presence of camera tampering.
arXiv Detail & Related papers (2021-09-22T12:20:44Z)
- Robust Privacy-Preserving Motion Detection and Object Tracking in Encrypted Streaming Video [39.453548972987015]
We propose an efficient and robust privacy-preserving motion detection and multiple object tracking scheme for encrypted surveillance video bitstreams.
Our scheme achieves the best detection and tracking performance compared with existing works in the encrypted and compressed domain.
Our scheme can be effectively used in complex surveillance scenarios with different challenges, such as camera movement/jitter, dynamic background, and shadows.
arXiv Detail & Related papers (2021-08-30T11:58:19Z)
- Self-Conditioned Probabilistic Learning of Video Rescaling [70.10092286301997]
We propose a self-conditioned probabilistic framework for video rescaling to learn the paired downscaling and upscaling procedures simultaneously.
We decrease the entropy of the information lost in downscaling by maximizing its probability conditioned on strong spatio-temporal prior information.
We extend the framework to a lossy video compression system, in which a gradient estimator for non-differentiable industrial lossy codecs is proposed.
arXiv Detail & Related papers (2021-07-24T15:57:15Z)