Ammunition Component Classification Using Deep Learning
- URL: http://arxiv.org/abs/2208.12863v1
- Date: Fri, 26 Aug 2022 20:42:39 GMT
- Title: Ammunition Component Classification Using Deep Learning
- Authors: Hadi Ghahremannezhad, Chengjun Liu, Hang Shi
- Abstract summary: Ammo scrap containing energetics is considered to be potentially dangerous and should be separated before the recycling process.
We have gathered a dataset of ammunition components with the goal of applying artificial intelligence for classifying safe and unsafe scrap pieces automatically.
- Score: 0.8808993671472349
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ammunition scrap inspection is an essential step in the process of recycling
ammunition metal scrap. Most ammunition is composed of a number of components,
including case, primer, powder, and projectile. Ammo scrap containing
energetics is considered to be potentially dangerous and should be separated
before the recycling process. Manually inspecting each piece of scrap is
tedious and time-consuming. We have gathered a dataset of ammunition components
with the goal of applying artificial intelligence for classifying safe and
unsafe scrap pieces automatically. First, two training datasets are manually
created from visual and x-ray images of ammo. Second, the x-ray dataset is
augmented using the spatial transforms of histogram equalization, averaging,
sharpening, power law, and Gaussian blurring in order to compensate for the
lack of sufficient training data. Lastly, the representative YOLOv4 object
detection method is applied to detect the ammo components and classify the
scrap pieces as safe or unsafe. The trained models are
tested against unseen data in order to evaluate the performance of the applied
method. The experiments demonstrate the feasibility of ammo component detection
and classification using deep learning. The datasets and the pre-trained models
are available at https://github.com/hadi-ghnd/Scrap-Classification.
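As a concrete illustration of the augmentation step, the sketch below applies the five spatial transforms named in the abstract (histogram equalization, averaging, sharpening, power law, and Gaussian blurring) to a grayscale x-ray image with OpenCV. This is not the authors' released code; the kernel sizes, gamma value, and file names are illustrative assumptions.

```python
import cv2
import numpy as np

def augment_xray(gray):
    """Return the five augmented variants of a single-channel x-ray image."""
    out = {}
    # Histogram equalization: spread the intensity distribution.
    out["hist_eq"] = cv2.equalizeHist(gray)
    # Averaging: 5x5 mean filter (kernel size is an assumption).
    out["average"] = cv2.blur(gray, (5, 5))
    # Sharpening via unsharp masking: weight the original up, subtract a blur.
    soft = cv2.GaussianBlur(gray, (0, 0), sigmaX=3)
    out["sharpen"] = cv2.addWeighted(gray, 1.5, soft, -0.5, 0)
    # Power-law (gamma) transform; gamma = 0.5 brightens dark regions.
    norm = gray.astype(np.float32) / 255.0
    out["power_law"] = np.uint8(np.clip(norm ** 0.5, 0.0, 1.0) * 255)
    # Gaussian blurring with a 5x5 kernel.
    out["gaussian"] = cv2.GaussianBlur(gray, (5, 5), 0)
    return out

if __name__ == "__main__":
    img = cv2.imread("xray_sample.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    for name, aug in augment_xray(img).items():
        cv2.imwrite(f"aug_{name}.png", aug)
```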
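For the detection and classification step, the following is a minimal sketch of YOLOv4 inference through OpenCV's DNN module that maps detected component classes to a per-piece safe/unsafe verdict. The config, weights, names file, image path, and the class names treated as unsafe are placeholders assumed for illustration, not the actual file or class names released with the dataset.

```python
import cv2

# Assumed class names: energetic components (primer, powder) mark a piece unsafe.
UNSAFE_CLASSES = {"primer", "powder"}

# Darknet-format files; these names are placeholders, not the repo's filenames.
net = cv2.dnn.readNetFromDarknet("yolov4-ammo.cfg", "yolov4-ammo.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

with open("ammo.names") as f:
    class_names = [line.strip() for line in f if line.strip()]

image = cv2.imread("scrap_piece.jpg")  # hypothetical test image
class_ids, scores, boxes = model.detect(image, confThreshold=0.25, nmsThreshold=0.45)

verdict = "safe"
for cid, score, box in zip(class_ids, scores, boxes):
    label = class_names[int(cid)]
    if label in UNSAFE_CLASSES:
        verdict = "unsafe"
    print(label, float(score), box)
print("piece classified as:", verdict)
```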
Related papers
- Designing A Sustainable Marine Debris Clean-up Framework without Human Labels [0.0]
Marine debris poses a significant ecological threat to birds, fish, and other animal life.
Traditional methods for assessing debris accumulation involve labor-intensive and costly manual surveys.
This study introduces a framework that utilizes aerial imagery captured by drones to conduct remote trash surveys.
arXiv Detail & Related papers (2024-05-23T17:28:23Z)
- Visual Context-Aware Person Fall Detection [52.49277799455569]
We present a segmentation pipeline to semi-automatically separate individuals and objects in images.
Background objects such as beds, chairs, or wheelchairs can challenge fall detection systems, leading to false positive alarms.
We demonstrate that object-specific contextual transformations during training effectively mitigate this challenge.
arXiv Detail & Related papers (2024-04-11T19:06:36Z)
- An End-to-End Framework For Universal Lesion Detection With Missing Annotations [24.902835211573628]
We present a novel end-to-end framework for mining unlabeled lesions while simultaneously training the detector.
Our framework follows the teacher-student paradigm. High-confidence predictions are combined with partially-labeled ground truth for training the student model.
arXiv Detail & Related papers (2023-03-27T09:16:10Z)
- Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning [60.64535309016623]
We propose the Incremental-DETR that does incremental few-shot object detection via fine-tuning and self-supervised learning on the DETR object detector.
To alleviate severe over-fitting with few novel class data, we first fine-tune the class-specific components of DETR with self-supervision.
We further introduce an incremental few-shot fine-tuning strategy with knowledge distillation on the class-specific components of DETR to encourage the network to detect novel classes without catastrophic forgetting.
arXiv Detail & Related papers (2022-05-09T05:08:08Z)
- Proper Reuse of Image Classification Features Improves Object Detection [4.240984948137734]
A common practice in transfer learning is to initialize the downstream model weights by pre-training on a data-abundant upstream task.
Recent works show this is not strictly necessary under longer training regimes and provide recipes for training the backbone from scratch.
We show that an extreme form of knowledge preservation -- freezing the classifier-initialized backbone -- consistently improves many different detection models.
arXiv Detail & Related papers (2022-04-01T14:44:47Z)
- Machine Unlearning: Learning, Polluting, and Unlearning for Spam Email [0.9176056742068814]
Several spam email detection methods exist, each of which employs a different algorithm to detect undesired spam emails.
Many attackers exploit the model by polluting the data on which it is trained.
Retraining is impractical in most cases, as the model has already been trained on a massive amount of data.
Unlearning is fast, easy to implement, easy to use, and effective.
arXiv Detail & Related papers (2021-11-26T12:13:11Z)
- Accumulative Poisoning Attacks on Real-time Data [56.96241557830253]
We show that a well-designed but straightforward attacking strategy can dramatically amplify the poisoning effects.
arXiv Detail & Related papers (2021-06-18T08:29:53Z)
- Data Augmentation for Object Detection via Differentiable Neural Rendering [71.00447761415388]
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to tackle this problem include semi-supervised learning that interpolates labeled data from unlabeled data.
We introduce an offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
arXiv Detail & Related papers (2021-03-04T06:31:06Z)
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
- Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell_2$-bounded attacks.
arXiv Detail & Related papers (2020-07-07T18:40:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.