From Lab to Field: Real-World Evaluation of an AI-Driven Smart Video Solution to Enhance Community Safety
- URL: http://arxiv.org/abs/2312.02078v2
- Date: Wed, 4 Sep 2024 00:06:20 GMT
- Title: From Lab to Field: Real-World Evaluation of an AI-Driven Smart Video Solution to Enhance Community Safety
- Authors: Shanle Yao, Babak Rahimi Ardabili, Armin Danesh Pazho, Ghazal Alinezhad Noghre, Christopher Neff, Lauren Bourque, Hamed Tabkhi
- Abstract summary: This article adopts and evaluates an AI-enabled Smart Video Solution (SVS) designed to enhance safety in the real world.
The system integrates with existing infrastructure camera networks, leveraging recent advancements in AI for easy adoption.
The article evaluates the end-to-end latency from the moment an AI algorithm detects anomalous behavior in real time at the camera level to the time stakeholders receive a notification.
- Score: 1.7904189757601403
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article adopts and evaluates an AI-enabled Smart Video Solution (SVS) designed to enhance safety in the real world. The system integrates with existing infrastructure camera networks, leveraging recent advancements in AI for easy adoption. Prioritizing privacy and ethical standards, pose-based data is used for downstream AI tasks such as anomaly detection. A cloud-based infrastructure and a mobile app are deployed, enabling real-time alerts within communities. The SVS employs innovative data representation and visualization techniques, such as the Occupancy Indicator, Statistical Anomaly Detection, Bird's Eye View, and Heatmaps, to understand pedestrian behaviors and enhance public safety. Evaluation of the SVS demonstrates its capacity to convert complex computer vision outputs into actionable insights for stakeholders, community partners, law enforcement, urban planners, and social scientists. This article presents a comprehensive real-world deployment and evaluation of the SVS, implemented in a community college environment across 16 cameras. The system integrates AI-driven visual processing, supported by statistical analysis, database management, cloud communication, and user notifications. Additionally, the article evaluates the end-to-end latency from the moment an AI algorithm detects anomalous behavior in real time at the camera level to the time stakeholders receive a notification. The results demonstrate the system's robustness, effectively managing 16 CCTV cameras with a consistent throughput of 16.5 frames per second (FPS) over a 21-hour period and an average end-to-end latency of 26.76 seconds between anomaly detection and alert issuance.
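As a point of reference for the latency and throughput figures above, the measurement itself reduces to timestamp bookkeeping: record when an anomaly is flagged at the camera level and when the corresponding alert reaches the stakeholder's device. The sketch below illustrates that bookkeeping; the event fields and function names are hypothetical and are not taken from the SVS codebase.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class AnomalyEvent:
    # Hypothetical record of one alert; field names are illustrative,
    # not taken from the SVS implementation.
    camera_id: int
    detected_at: float   # seconds, when the AI flags the anomaly at the camera
    notified_at: float   # seconds, when the stakeholder's app receives the alert

def end_to_end_latency(events):
    """Average detection-to-notification latency in seconds."""
    return mean(e.notified_at - e.detected_at for e in events)

def throughput_fps(frames_processed, duration_hours):
    """Aggregate frames per second over an observation window."""
    return frames_processed / (duration_hours * 3600)

if __name__ == "__main__":
    events = [AnomalyEvent(3, 100.0, 126.5), AnomalyEvent(7, 500.0, 527.0)]
    print(f"mean end-to-end latency: {end_to_end_latency(events):.2f} s")
    # e.g. 16 cameras sustaining ~16.5 FPS for 21 hours:
    print(f"throughput: {throughput_fps(16.5 * 21 * 3600, 21):.1f} FPS")
```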
Related papers
- YOLORe-IDNet: An Efficient Multi-Camera System for Person-Tracking [2.5761958263376745]
We propose a person-tracking system that combines correlation filters and Intersection Over Union (IOU) constraints for robust tracking.
The proposed system quickly identifies and tracks suspects in real time across multiple cameras.
It is computationally efficient and achieves a high F1-Score of 79% and an IOU of 59%, comparable to existing state-of-the-art algorithms.
arXiv Detail & Related papers (2023-09-23T14:11:13Z)
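For context on the metrics quoted in the entry above, Intersection Over Union measures the overlap between two boxes and F1 combines precision and recall. The snippet below is a generic sketch of both, not code from YOLORe-IDNet.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def f1_score(tp, fp, fn):
    """F1 from raw true-positive / false-positive / false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: a track is typically kept only while box overlap stays above a threshold.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # ~0.143
print(f1_score(tp=79, fp=21, fn=21))          # 0.79
```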
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time series signals and identify three features that can represent 5 fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
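The gesture-recognition entry above works from short windows of multi-channel capacitive time series. The sketch below shows one way to summarise a 500 ms window of five-finger signals; the 100 Hz sample rate and the three features (mean, standard deviation, peak-to-peak) are illustrative assumptions, not the features identified in the paper.

```python
import numpy as np

def window_features(signals, sample_rate_hz=100, window_ms=500):
    """Summarise a multi-channel capacitive signal over one window.

    `signals` is shaped (channels, samples); here 5 channels, one per finger.
    The three features per channel (mean, std, peak-to-peak) are illustrative.
    """
    n = int(sample_rate_hz * window_ms / 1000)
    window = signals[:, -n:]                        # most recent 500 ms
    return np.stack([window.mean(axis=1),
                     window.std(axis=1),
                     np.ptp(window, axis=1)], axis=1)  # shape (5, 3)

# Example with synthetic data: 5 fingers, 1 s of samples at 100 Hz.
rng = np.random.default_rng(0)
feats = window_features(rng.normal(size=(5, 100)))
print(feats.shape)  # (5, 3)
```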
- Real-World Community-in-the-Loop Smart Video Surveillance -- A Case Study at a Community College [2.4956060473718407]
This paper presents a case study for designing and deploying smart video surveillance systems based on a real-world testbed at a community college.
We focus on a smart camera-based system that can identify suspicious/abnormal activities and alert the stakeholders and residents immediately.
The system can run eight cameras simultaneously at 32.41 Frames Per Second (FPS).
arXiv Detail & Related papers (2023-03-22T22:16:17Z)
- Understanding Policy and Technical Aspects of AI-Enabled Smart Video Surveillance to Address Public Safety [2.2427353485837545]
This paper identifies the privacy concerns and requirements that need to be addressed when designing AI-enabled smart video surveillance.
We propose the first end-to-end AI-enabled privacy-preserving smart video surveillance system that holistically combines computer vision analytics, statistical data analytics, cloud-native services, and end-user applications.
arXiv Detail & Related papers (2023-02-08T19:54:35Z)
- Scalable Vehicle Re-Identification via Self-Supervision [66.2562538902156]
Vehicle Re-Identification is one of the key elements in city-scale vehicle analytics systems.
Most state-of-the-art solutions for vehicle re-id focus on improving accuracy on existing re-id benchmarks and often ignore computational complexity.
We propose a simple yet effective hybrid solution empowered by self-supervised training which only uses a single network during inference time.
arXiv Detail & Related papers (2022-05-16T12:14:42Z)
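The re-id entry above runs a single network at inference time; the usual downstream step is to embed each vehicle crop and rank a gallery by similarity. The sketch below shows that ranking step with stand-in embedding vectors in place of the paper's self-supervised network.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_gallery(query_emb, gallery):
    """Rank gallery vehicle embeddings by similarity to the query.

    `gallery` maps a vehicle/image id to its embedding vector. In the paper a
    single network produces these embeddings at inference time; here they are
    just NumPy vectors standing in for that network's output.
    """
    scores = {vid: cosine_similarity(query_emb, emb) for vid, emb in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example with random 128-d embeddings.
rng = np.random.default_rng(1)
gallery = {f"vehicle_{i}": rng.normal(size=128) for i in range(5)}
query = gallery["vehicle_2"] + 0.05 * rng.normal(size=128)  # near-duplicate view
print(rank_gallery(query, gallery)[0][0])  # most similar: "vehicle_2"
```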
- Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of each client's participation in training, namely the FedFreq aggregation rule.
arXiv Detail & Related papers (2022-01-03T16:49:33Z)
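The abstract above only states that FedFreq weights clients by how often they have participated in training; the exact rule is not given. The sketch below is one plausible reading, weighting each client's update proportionally to its participation count.

```python
import numpy as np

def fedfreq_aggregate(client_updates, participation_counts):
    """Aggregate client model updates, weighting by participation frequency.

    Proportional weighting by participation count is an assumption for
    illustration, not necessarily the paper's exact FedFreq rule.
    `client_updates` maps client id -> parameter vector.
    """
    total = sum(participation_counts[c] for c in client_updates)
    weights = {c: participation_counts[c] / total for c in client_updates}
    return sum(weights[c] * client_updates[c] for c in client_updates)

# Toy round: three UAV clients with different participation histories.
updates = {c: np.full(4, v) for c, v in [("uav_a", 1.0), ("uav_b", 2.0), ("uav_c", 4.0)]}
counts = {"uav_a": 1, "uav_b": 3, "uav_c": 6}
print(fedfreq_aggregate(updates, counts))  # weighted average of the updates
```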
- Feeling of Presence Maximization: mmWave-Enabled Virtual Reality Meets Deep Reinforcement Learning [76.46530937296066]
This paper investigates the problem of providing ultra-reliable and energy-efficient virtual reality (VR) experiences for wireless mobile users.
To ensure reliable ultra-high-definition (UHD) video frame delivery to mobile users, a coordinated multipoint (CoMP) transmission technique and millimeter wave (mmWave) communications are exploited.
arXiv Detail & Related papers (2021-06-03T08:35:10Z)
- Computer Vision-based Social Distancing Surveillance Solution with Optional Automated Camera Calibration for Large Scale Deployment [0.0]
We describe a computer vision-based AI-assisted solution to aid compliance with social distancing norms.
The solution consists of modules to detect and track people and to identify distance violations.
arXiv Detail & Related papers (2021-04-22T06:43:02Z)
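The violation-identification module described above can be illustrated with a pairwise distance check over ground-plane positions. In the sketch below the 2 m threshold and the coordinates are illustrative; mapping pixels to metres is what the paper's optional automated camera calibration would provide.

```python
from itertools import combinations
import math

def distance_violations(positions, min_distance_m=2.0):
    """Return pairs of people standing closer than the allowed distance.

    `positions` maps a track id to (x, y) ground-plane coordinates in metres;
    obtaining these from pixels requires camera calibration. The 2 m threshold
    is illustrative.
    """
    violations = []
    for (id_a, pa), (id_b, pb) in combinations(positions.items(), 2):
        if math.dist(pa, pb) < min_distance_m:
            violations.append((id_a, id_b))
    return violations

people = {1: (0.0, 0.0), 2: (1.2, 0.5), 3: (6.0, 6.0)}
print(distance_violations(people))  # [(1, 2)]
```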
- AEGIS: A real-time multimodal augmented reality computer vision based system to assist facial expression recognition for individuals with autism spectrum disorder [93.0013343535411]
This paper presents the development of a multimodal augmented reality (AR) system which combines computer vision and deep convolutional neural networks (CNNs).
The proposed system, which we call AEGIS, is an assistive technology deployable on a variety of user devices including tablets, smartphones, video conference systems, or smartglasses.
We leverage both spatial and temporal information in order to provide an accurate expression prediction, which is then converted into its corresponding visualization and drawn on top of the original video frame.
arXiv Detail & Related papers (2020-10-22T17:20:38Z)
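The last step described in the AEGIS entry above, converting a prediction into a visualization drawn on the original video frame, can be sketched with standard OpenCV primitives. The label, box, and styling below are illustrative, not AEGIS's actual overlay.

```python
import numpy as np
import cv2  # OpenCV; install with `pip install opencv-python`

def draw_expression(frame, box, label):
    """Draw a face box and its predicted expression label on a video frame.

    `box` is (x1, y1, x2, y2) in pixels and `label` is the classifier output,
    e.g. "happy". The styling is illustrative.
    """
    x1, y1, x2, y2 = box
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.putText(frame, label, (x1, max(0, y1 - 8)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame

frame = np.zeros((240, 320, 3), dtype=np.uint8)   # stand-in video frame
draw_expression(frame, (80, 60, 180, 170), "happy")
```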
- DeepSOCIAL: Social Distancing Monitoring and Infection Risk Assessment in COVID-19 Pandemic [1.027974860479791]
Social distancing is a recommended solution by the World Health Organisation (WHO) to minimise the spread of COVID-19 in public places.
We develop a hybrid Computer Vision and YOLOv4-based Deep Neural Network model for automated people detection in the crowd using common CCTV cameras.
The developed model is a generic and accurate people detection and tracking solution that can be applied in many other fields.
arXiv Detail & Related papers (2020-08-26T16:56:57Z)
- Identity-Aware Attribute Recognition via Real-Time Distributed Inference in Mobile Edge Clouds [53.07042574352251]
We design novel models for pedestrian attribute recognition with re-ID in an MEC-enabled camera monitoring system.
We propose a novel inference framework with a set of distributed modules, by jointly considering the attribute recognition and person re-ID.
We then devise a learning-based algorithm for distributing the modules of the proposed inference framework.
arXiv Detail & Related papers (2020-08-12T12:03:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.