Real-Time Weapon Detection Using YOLOv8 for Enhanced Safety
- URL: http://arxiv.org/abs/2410.19862v1
- Date: Wed, 23 Oct 2024 10:35:51 GMT
- Title: Real-Time Weapon Detection Using YOLOv8 for Enhanced Safety
- Authors: Ayush Thakur, Akshat Shrivastav, Rohan Sharma, Triyank Kumar, Kabir Puri
- Abstract summary: The model was trained on a comprehensive dataset containing thousands of images depicting various types of firearms and edged weapons.
We evaluated the model's performance using key metrics such as precision, recall, F1-score, and mean Average Precision (mAP) across multiple Intersection over Union (IoU) thresholds.
- Abstract: This research paper presents the development of an AI model utilizing YOLOv8 for real-time weapon detection, aimed at enhancing safety in public spaces such as schools, airports, and public transportation systems. As incidents of violence continue to rise globally, there is an urgent need for effective surveillance technologies that can quickly identify potential threats. Our approach focuses on leveraging advanced deep learning techniques to create a highly accurate and efficient system capable of detecting weapons in real-time video streams. The model was trained on a comprehensive dataset containing thousands of images depicting various types of firearms and edged weapons, ensuring a robust learning process. We evaluated the model's performance using key metrics such as precision, recall, F1-score, and mean Average Precision (mAP) across multiple Intersection over Union (IoU) thresholds, revealing a significant capability to differentiate between weapon and non-weapon classes with minimal error. Furthermore, we assessed the system's operational efficiency, demonstrating that it can process frames at high speeds suitable for real-time applications. The findings indicate that our YOLOv8-based weapon detection model not only contributes to the existing body of knowledge in computer vision but also addresses critical societal needs for improved safety measures in vulnerable environments. By harnessing the power of artificial intelligence, this research lays the groundwork for developing practical solutions that can be deployed in security settings, ultimately enhancing the protective capabilities of law enforcement and public safety agencies.
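The abstract reports precision, recall, F1-score, and mAP across multiple IoU thresholds. As a minimal sketch of what those detection metrics mean (this is not the paper's code; the box format, the greedy matching strategy, and the toy detections below are illustrative assumptions), the standard definitions can be written in plain Python:

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_recall_f1(preds, gts, iou_thresh=0.5):
    """Score detections against ground truth at one IoU threshold.

    Each prediction is greedily matched to at most one unmatched ground-truth
    box whose IoU meets the threshold; matched pairs count as true positives.
    """
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, iou_thresh
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp   # unmatched predictions
    fn = len(gts) - tp     # missed ground-truth boxes
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: two ground-truth weapons, one accurate detection and one false alarm.
gts = [(10, 10, 50, 50), (60, 60, 100, 100)]
preds = [(12, 12, 48, 48), (200, 200, 240, 240)]
print(precision_recall_f1(preds, gts, iou_thresh=0.5))  # → (0.5, 0.5, 0.5)
```

Averaging the per-threshold precision over a range of IoU thresholds (e.g. 0.50 to 0.95) is the idea behind the mAP figures the abstract refers to.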
Related papers
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z)
- Safety at Scale: A Comprehensive Survey of Large Model Safety [299.801463557549]
We present a comprehensive taxonomy of safety threats to large models, including adversarial attacks, data poisoning, backdoor attacks, jailbreak and prompt injection attacks, energy-latency attacks, data and model extraction attacks, and emerging agent-specific threats.
We identify and discuss the open challenges in large model safety, emphasizing the need for comprehensive safety evaluations, scalable and effective defense mechanisms, and sustainable data practices.
arXiv Detail & Related papers (2025-02-02T05:14:22Z)
- Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones.
Their risks of causing physical threats and harm in real-world applications remain unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
arXiv Detail & Related papers (2024-11-04T17:41:25Z)
- Real Time Deep Learning Weapon Detection Techniques for Mitigating Lone Wolf Attacks [0.0]
This research focuses on the YOLO (You Only Look Once) family and the Faster R-CNN family for model training and validation.
The YOLO models achieve the highest score of 78% with an inference speed of 8.1 ms.
However, the Faster R-CNN models achieve the highest AP of 89%.
arXiv Detail & Related papers (2024-05-23T03:48:26Z)
- Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics [54.57914943017522]
We highlight the critical issues of robustness and safety associated with integrating large language models (LLMs) and vision-language models (VLMs) into robotics applications.
arXiv Detail & Related papers (2024-02-15T22:01:45Z)
- Transfer Learning-based Real-time Handgun Detection [0.0]
This study employs convolutional neural networks and transfer learning to develop a real-time computer vision system for automatic handgun detection.
The proposed system achieves a precision rate of 84.74%, demonstrating promising performance comparable to related works.
arXiv Detail & Related papers (2023-11-22T18:09:42Z)
- GCNIDS: Graph Convolutional Network-Based Intrusion Detection System for CAN Bus [0.0]
We present an innovative approach to intruder detection within the CAN bus, leveraging Graph Convolutional Network (GCN) techniques.
Our experimental findings substantiate that the proposed GCN-based method surpasses existing IDSs in terms of accuracy, precision, and recall.
Our proposed approach holds significant potential in fortifying the security and safety of modern vehicles.
arXiv Detail & Related papers (2023-09-18T21:42:09Z)
- Video Violence Recognition and Localization using a Semi-Supervised Hard-Attention Model [0.0]
Violence monitoring and surveillance systems could keep communities safe and save lives.
Advancing the current state-of-the-art deep learning approaches to video violence recognition toward higher accuracy and performance could enable surveillance systems to be more reliable and scalable.
The main contribution of the proposed deep reinforcement learning method is to achieve state-of-the-art accuracy on RWF, Hockey, and Movies datasets.
arXiv Detail & Related papers (2022-02-04T16:15:26Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.