Real Time Deep Learning Weapon Detection Techniques for Mitigating Lone Wolf Attacks
- URL: http://arxiv.org/abs/2405.14148v1
- Date: Thu, 23 May 2024 03:48:26 GMT
- Title: Real Time Deep Learning Weapon Detection Techniques for Mitigating Lone Wolf Attacks
- Authors: Kambhatla Akhila, Khaled R Ahmed
- Abstract summary: This research focuses on the YOLO (You Only Look Once) family and the Faster R-CNN family for model training and validation.
YOLOv5 models achieve the highest score of 78% with an inference speed of 8.1 ms.
However, Faster R-CNN models achieve the highest AP of 89%.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Firearm shootings and stabbing attacks are intense and result in severe trauma and threats to public safety. Technology is needed to prevent lone-wolf attacks without human supervision. Designing an automatic weapon detection system using deep learning is therefore an optimized solution to localize and detect the presence of weapon objects using neural networks. This research covers both unified (one-stage) and two-stage object detectors, whose resultant models not only detect the presence of weapons but also classify them into their respective weapon classes, including handgun, knife, revolver, and rifle, along with person detection. This research focuses on the YOLO (You Only Look Once) family and the Faster R-CNN family for model training and validation. Pruning and ensembling techniques were applied to YOLOv5 to enhance its speed and performance. YOLOv5 models achieve the highest score of 78% with an inference speed of 8.1 ms. However, Faster R-CNN models achieve the highest AP of 89%.
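The AP figures quoted in the abstract rest on Intersection over Union (IoU), the standard overlap measure between a predicted box and a ground-truth box. As a minimal illustrative sketch (not the paper's code), the IoU of two axis-aligned boxes in (x1, y1, x2, y2) format can be computed as:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is counted as a true positive when its IoU with an unmatched ground-truth box meets a chosen threshold (commonly 0.5); averaging precision over recall levels under such a threshold yields the AP scores reported above.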
Related papers
- Undermining Image and Text Classification Algorithms Using Adversarial Attacks [0.0]
Our study addresses the gap by training various machine learning models and using GANs and SMOTE to generate additional data points aimed at attacking text classification models.
Our experiments reveal a significant vulnerability in classification models. Specifically, we observe a 20% decrease in accuracy for the top-performing text classification models post-attack, along with a 30% decrease in facial recognition accuracy.
arXiv Detail & Related papers (2024-11-03T18:44:28Z) - Real-Time Weapon Detection Using YOLOv8 for Enhanced Safety [0.0]
The model was trained on a comprehensive dataset containing thousands of images depicting various types of firearms and edged weapons.
We evaluated the model's performance using key metrics such as precision, recall, F1-score, and mean Average Precision (mAP) across multiple Intersection over Union (IoU) thresholds.
arXiv Detail & Related papers (2024-10-23T10:35:51Z) - Distributed Intelligent Video Surveillance for Early Armed Robbery Detection based on Deep Learning [0.0]
Low employment rates in Latin America have contributed to a substantial rise in crime, prompting the emergence of new criminal tactics.
Recent research has approached the problem by embedding weapon detectors in surveillance cameras.
These systems are prone to false positives if no counterpart confirms the event.
We present a distributed IoT system that integrates a computer vision pipeline and object detection capabilities into multiple end-devices.
arXiv Detail & Related papers (2024-10-13T05:20:35Z) - Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy [65.80757820884476]
We expose a critical yet underexplored vulnerability in the deployment of unlearning systems.
We present a threat model where an attacker can degrade model accuracy by submitting adversarial unlearning requests for data not present in the training set.
We evaluate various verification mechanisms to detect the legitimacy of unlearning requests and reveal the challenges in verification.
arXiv Detail & Related papers (2024-10-12T16:47:04Z) - SOAR: Self-supervision Optimized UAV Action Recognition with Efficient Object-Aware Pretraining [65.9024395309316]
We introduce a novel self-supervised pretraining algorithm for aerial footage captured by Unmanned Aerial Vehicles (UAVs).
We incorporate human object knowledge throughout the pretraining process to enhance UAV video pretraining efficiency and downstream action recognition performance.
arXiv Detail & Related papers (2024-09-26T21:15:22Z) - Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks [51.51023951695014]
Existing model stealing defenses add deceptive perturbations to the victim's posterior probabilities to mislead the attackers.
This paper proposes Isolation and Induction (InI), a novel and effective training framework for model stealing defenses.
In contrast to adding perturbations over model predictions that harm the benign accuracy, we train models to produce uninformative outputs against stealing queries.
arXiv Detail & Related papers (2023-08-02T05:54:01Z) - Publishing Efficient On-device Models Increases Adversarial Vulnerability [58.6975494957865]
In this paper, we study the security considerations of publishing on-device variants of large-scale models.
We first show that an adversary can exploit on-device models to make attacking the large models easier.
We then show that the vulnerability increases as the similarity between a full-scale model and its efficient variant increases.
arXiv Detail & Related papers (2022-12-28T05:05:58Z) - AccelAT: A Framework for Accelerating the Adversarial Training of Deep Neural Networks through Accuracy Gradient [12.118084418840152]
Adversarial training is exploited to develop a robust Deep Neural Network (DNN) model against maliciously altered data.
This paper aims at accelerating the adversarial training to enable fast development of robust DNN models against adversarial attacks.
arXiv Detail & Related papers (2022-10-13T10:31:51Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - Robust Adversarial Attacks Detection based on Explainable Deep Reinforcement Learning For UAV Guidance and Planning [4.640835690336653]
Adversarial attacks on Uncrewed Aerial Vehicle (UAV) agents operating in public are increasing.
Deep Learning (DL) approaches to control and guide these UAVs can be beneficial in terms of performance but can add concerns regarding the safety of those techniques and their vulnerability against adversarial attacks.
This paper proposes an innovative approach based on the explainability of DL methods to build an efficient detector that will protect these DL schemes and the UAVs adopting them from attacks.
arXiv Detail & Related papers (2022-06-06T15:16:10Z) - Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
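Several of the papers above (e.g. the YOLOv8 weapon-detection entry) evaluate detectors with precision, recall, and F1 at a fixed IoU threshold. A self-contained sketch of that evaluation, assuming a simple greedy matching of confidence-ordered predictions to ground-truth boxes (a common convention, not any one paper's exact protocol):

```python
def _iou(a, b):
    """Intersection over Union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def detection_metrics(preds, gts, iou_thresh=0.5):
    """Precision, recall, and F1 for one image at a single IoU threshold.

    preds: predicted boxes, assumed sorted by confidence (descending).
    gts:   ground-truth boxes. Each ground truth matches at most once.
    """
    matched = set()
    tp = 0
    for p in preds:
        best_i, best_v = -1, iou_thresh
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = _iou(p, g)
            if v >= best_v:
                best_i, best_v = i, v
        if best_i >= 0:          # true positive: claims this ground truth
            matched.add(best_i)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Sweeping `iou_thresh` over several values and averaging AP across them is what the "mAP across multiple IoU thresholds" phrasing above refers to.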
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.