Motion-based video compression for resource-constrained camera traps
- URL: http://arxiv.org/abs/2405.14419v2
- Date: Thu, 13 Jun 2024 23:48:05 GMT
- Title: Motion-based video compression for resource-constrained camera traps
- Authors: Malika Nisal Ratnayake, Lex Gallon, Adel N. Toosi, Alan Dorin
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Field-captured video allows for detailed studies of spatiotemporal aspects of animal locomotion, decision-making, and environmental interactions. However, despite the affordability of data capture with mass-produced hardware, storage, processing, and transmission overheads pose a significant hurdle to acquiring high-resolution video from field-deployed camera traps. Therefore, efficient compression algorithms are crucial for monitoring with camera traps that have limited access to power, storage, and bandwidth. In this article, we introduce a new motion analysis-based video compression algorithm designed to run on camera trap devices. We implemented and tested this algorithm using a case study of insect-pollinator motion tracking. The algorithm identifies and stores only image regions depicting motion relevant to pollination monitoring, reducing the overall data size by an average of 84% across a diverse set of test datasets while retaining the information necessary for relevant behavioural analysis. The methods outlined in this paper facilitate the broader application of computer vision-enabled, low-powered camera trap devices for remote, in-situ video-based animal motion monitoring.
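The core idea described in the abstract, keeping only image regions that contain motion and discarding the static background, can be sketched with simple frame differencing. The sketch below is our own illustration under assumed parameters (tile size, threshold, and the function name `motion_regions` are not from the paper), not the authors' implementation:

```python
import numpy as np

def motion_regions(prev_frame, curr_frame, threshold=25, block=16):
    """Return (row, col) origins of block-sized tiles whose mean pixel
    change exceeds the threshold. Only these tiles would be stored or
    transmitted; static background tiles are dropped."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    h, w = diff.shape
    tiles = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            if diff[y:y + block, x:x + block].mean() > threshold:
                tiles.append((y, x))
    return tiles

# Toy example: a 64x64 static scene where a bright "insect" appears in
# one region between consecutive frames.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = prev.copy()
curr[20:28, 40:48] = 255          # motion confined to a single tile
moving = motion_regions(prev, curr)  # only tile (16, 32) is flagged
```

Storing the flagged tiles with their coordinates preserves the moving subject while the remaining 15 of 16 tiles are discarded; dropping static background in this spirit is how a compression scheme of this kind can reach large reductions such as the 84% average reported in the abstract.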
Related papers
- Image Conductor: Precision Control for Interactive Video Synthesis [90.2353794019393]
Filmmaking and animation production often require sophisticated techniques for coordinating camera transitions and object movements.
Image Conductor is a method for precise control of camera transitions and object movements to generate video assets from a single image.
arXiv Detail & Related papers (2024-06-21T17:55:05Z)
- Enabling Cross-Camera Collaboration for Video Analytics on Distributed Smart Cameras [7.609628915907225]
We present Argus, a distributed video analytics system with cross-camera collaboration on smart cameras.
We identify multi-camera, multi-target tracking as the primary task of multi-camera video analytics and develop a novel technique that avoids redundant, processing-heavy tasks.
Argus reduces the number of object identifications and end-to-end latency by up to 7.13x and 2.19x, respectively, compared to the state of the art.
arXiv Detail & Related papers (2024-01-25T12:27:03Z)
- Scalable and Real-time Multi-Camera Vehicle Detection, Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-04-15T12:47:01Z)
- Implicit Motion Handling for Video Camouflaged Object Detection [60.98467179649398]
We propose a new video camouflaged object detection (VCOD) framework.
It can exploit both short-term and long-term temporal consistency to detect camouflaged objects from video frames.
arXiv Detail & Related papers (2022-03-14T17:55:41Z)
- Argus++: Robust Real-time Activity Detection for Unconstrained Video Streams with Overlapping Cube Proposals [85.76513755331318]
Argus++ is a robust real-time activity detection system for analyzing unconstrained video streams.
The overall system is optimized for real-time processing on standalone consumer-level hardware.
arXiv Detail & Related papers (2022-01-14T03:35:22Z)
- Video Salient Object Detection via Contrastive Features and Attention Modules [106.33219760012048]
We propose a network with attention modules to learn contrastive features for video salient object detection.
A co-attention formulation is utilized to combine the low-level and high-level features.
We show that the proposed method requires less computation, and performs favorably against the state-of-the-art approaches.
arXiv Detail & Related papers (2021-11-03T17:40:32Z)
- Moving Object Detection for Event-based Vision using Graph Spectral Clustering [6.354824287948164]
Moving object detection has been a central topic in computer vision due to its wide range of applications.
We present an unsupervised Graph Spectral Clustering technique for Moving Object Detection in Event-based data.
We additionally show how the optimum number of moving objects can be automatically determined.
arXiv Detail & Related papers (2021-09-30T10:19:22Z)
- Moving Object Detection for Event-based Vision using k-means Clustering [0.0]
Moving object detection is a crucial task in computer vision.
Event-based cameras are bio-inspired sensors that mimic the working of the human eye.
In this paper, we investigate the application of the k-means clustering technique in detecting moving objects in event-based data.
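The clustering step this summary describes can be illustrated with a plain k-means over the spatial coordinates of events, each detected cluster corresponding to one moving object. This is a generic NumPy sketch on hypothetical synthetic data, not the paper's code or dataset:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: alternate nearest-centre assignment and centre update."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # distance of every event to every centre; assign to the nearest
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centre to the mean of its assigned events
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Synthetic "event" stream: pixel coordinates of events fired by two
# well-separated moving objects (assumed data for illustration only).
rng = np.random.default_rng(1)
events = np.vstack([rng.normal([10.0, 10.0], 1.0, (100, 2)),
                    rng.normal([50.0, 40.0], 1.0, (100, 2))])
labels, centers = kmeans(events, k=2)  # each object forms one cluster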
arXiv Detail & Related papers (2021-09-04T14:43:14Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects in data acquired with an event-based camera.
The method performs on par with or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
- Robust and efficient post-processing for video object detection [9.669942356088377]
This work introduces a novel post-processing pipeline that overcomes some of the limitations of previous post-processing methods.
Our method improves the results of state-of-the-art video-specific detectors, especially regarding fast-moving objects.
Applied to efficient still-image detectors such as YOLO, it provides results comparable to those of much more computationally intensive detectors.
arXiv Detail & Related papers (2020-09-23T10:47:24Z)
- CONVINCE: Collaborative Cross-Camera Video Analytics at the Edge [1.5469452301122173]
This paper introduces CONVINCE, a new approach that treats cameras as a collective entity and enables a collaborative video analytics pipeline among them.
Our results demonstrate that CONVINCE achieves an object identification accuracy of ~91% while transmitting only about 25% of all recorded frames.
arXiv Detail & Related papers (2020-02-05T23:55:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.