Knife and Threat Detectors
- URL: http://arxiv.org/abs/2004.03366v2
- Date: Wed, 8 Apr 2020 14:36:29 GMT
- Title: Knife and Threat Detectors
- Authors: David A. Noever, Sam E. Miller Noever
- Abstract summary: We present three complementary methods for scoring automated threat identification using multiple knife image datasets.
To alert an observer to the knife-wielding threat, we test and deploy a classifier built around MobileNet.
A final model built on the PoseNet architecture assigns anatomical waypoints or skeletal features to narrow the threat characteristics.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Despite rapid advances in image-based machine learning, the threat
identification of a knife-wielding attacker has not garnered substantial
academic attention. This research gap is difficult to explain given the high
knife assault rate (>100,000 annually) and the increasing availability of
public video surveillance to analyze and forensically document. We present
three complementary methods for scoring automated threat identification using
multiple knife image datasets, each with the goal of narrowing down possible
assault intentions while minimizing false positives and risky false negatives.
To alert an observer to the knife-wielding threat, we test and deploy a
classifier built around MobileNet as a sparse, pruned neural network with a
small memory requirement (<2.2 megabytes) and 95% test accuracy. Second, we
train a detection algorithm (MaskRCNN) to segment the hand from the knife in a
single image and assign a confidence to their relative locations. This
segmentation provides not only localization with bounding boxes but also the
relative positions needed to infer overhand threats. A final model built on
the PoseNet architecture assigns anatomical waypoints or skeletal features to
narrow the threat characteristics and reduce misunderstood intentions. We
further identify and supplement existing data gaps that might blind a deployed
knife threat detector, such as by collecting innocuous hand and fist images as
important negative training sets. When automated on commodity hardware and
software, one original research contribution is this systematic survey of
timely and readily available image-based alerts that can task and prioritize
crime-prevention countermeasures before a tragic outcome.
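As a concrete illustration, the sketch below shows what the first method (a small MobileNet classifier separating knife images from benign hand/fist negatives) might look like. This is not the authors' released code: the dataset layout, the 224x224 input size, and the width multiplier alpha are illustrative assumptions.

```python
# Minimal sketch of a MobileNet-based knife/no-knife classifier in the spirit
# of the paper's first method.  NOT the authors' code: the folder layout
# ("data/train/{knife,hand}"), input size, and alpha are assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)

def build_knife_classifier(alpha: float = 0.25) -> tf.keras.Model:
    """Small MobileNet backbone; a low alpha keeps the model a few megabytes."""
    base = tf.keras.applications.MobileNet(
        input_shape=IMG_SIZE + (3,),
        alpha=alpha,               # width multiplier shrinks the network
        include_top=False,
        weights="imagenet",
        pooling="avg",
    )
    base.trainable = False         # transfer learning: freeze the backbone

    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)  # MobileNet expects [-1, 1]
    x = base(x, training=False)
    x = tf.keras.layers.Dropout(0.2)(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)      # knife vs. benign hand/fist
    return tf.keras.Model(inputs, outputs)

if __name__ == "__main__":
    # Hypothetical folder layout: data/train/knife/*.jpg and data/train/hand/*.jpg
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary"
    )
    model = build_knife_classifier()
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=5)
```

Pruning and quantization (for example with the TensorFlow Model Optimization toolkit) would still be needed to approach the sub-2.2 MB footprint the abstract reports; the MaskRCNN segmentation and PoseNet keypoint stages are separate models not sketched here.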
Related papers
- Neural Fingerprints for Adversarial Attack Detection [2.7309692684728613]
A well known vulnerability of deep learning models is their susceptibility to adversarial examples.
Many algorithms have been proposed to address this problem, falling generally into one of two categories.
We argue that in a white-box setting, where the attacker knows the configuration and weights of the network and the detector, they can overcome the detector.
This problem is common in security applications where even a very good model is not sufficient to ensure safety.
arXiv Detail & Related papers (2024-11-07T08:43:42Z) - Undermining Image and Text Classification Algorithms Using Adversarial Attacks [0.0]
Our study addresses the gap by training various machine learning models and using GANs and SMOTE to generate additional data points aimed at attacking text classification models.
Our experiments reveal a significant vulnerability in classification models. Specifically, we observe a 20% decrease in accuracy for the top-performing text classification models post-attack, along with a 30% decrease in facial recognition accuracy.
arXiv Detail & Related papers (2024-11-03T18:44:28Z) - Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy [65.80757820884476]
We expose a critical yet underexplored vulnerability in the deployment of unlearning systems.
We present a threat model where an attacker can degrade model accuracy by submitting adversarial unlearning requests for data not present in the training set.
We evaluate various verification mechanisms to detect the legitimacy of unlearning requests and reveal the challenges in verification.
arXiv Detail & Related papers (2024-10-12T16:47:04Z) - Detecting Adversarial Attacks in Semantic Segmentation via Uncertainty Estimation: A Deep Analysis [12.133306321357999]
We propose an uncertainty-based method for detecting adversarial attacks on neural networks for semantic segmentation.
We conduct a detailed analysis of uncertainty-based detection of adversarial attacks and various state-of-the-art neural networks.
Our numerical experiments show the effectiveness of the proposed uncertainty-based detection method.
arXiv Detail & Related papers (2024-08-19T14:13:30Z) - CCTV-Gun: Benchmarking Handgun Detection in CCTV Images [59.24281591714385]
Gun violence is a critical security problem, and it is imperative for the computer vision community to develop effective gun detection algorithms.
Detecting guns in real-world CCTV images remains a challenging and under-explored task.
We present a benchmark, called CCTV-Gun, which addresses the challenges of detecting handguns in real-world CCTV images.
arXiv Detail & Related papers (2023-03-19T16:17:35Z) - Zero Day Threat Detection Using Graph and Flow Based Security Telemetry [3.3029515721630855]
Zero Day Threats (ZDT) are novel methods used by malicious actors to attack and exploit information technology (IT) networks or infrastructure.
In this paper, we introduce a deep learning based approach to Zero Day Threat detection that can generalize, scale, and effectively identify threats in near real-time.
arXiv Detail & Related papers (2022-05-04T19:30:48Z) - Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z) - Hidden Backdoor Attack against Semantic Segmentation Models [60.0327238844584]
The backdoor attack intends to embed hidden backdoors in deep neural networks (DNNs) by poisoning training data.
We propose a novel attack paradigm, the fine-grained attack, where we treat the target label from the object-level instead of the image-level.
Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of training data.
arXiv Detail & Related papers (2021-03-06T05:50:29Z) - Detection of Adversarial Supports in Few-shot Classifiers Using Feature
Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We use feature-preserving autoencoder filtering and the self-similarity of a support set to perform this detection.
Our method is attack-agnostic and, to the best of our knowledge, the first to explore detection for few-shot classifiers.
arXiv Detail & Related papers (2020-12-09T14:13:41Z) - Miss the Point: Targeted Adversarial Attack on Multiple Landmark
Detection [29.83857022733448]
This paper is the first to study how fragile a CNN-based model on multiple landmark detection is to adversarial perturbations.
We propose a novel Adaptive Targeted Iterative FGSM attack against the state-of-the-art models in multiple landmark detection.
arXiv Detail & Related papers (2020-07-10T07:58:35Z) - Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell$-bounded attacks.
arXiv Detail & Related papers (2020-07-07T18:40:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.