Supervised Machine Learning for Effective Missile Launch Based on Beyond
Visual Range Air Combat Simulations
- URL: http://arxiv.org/abs/2207.04188v1
- Date: Sat, 9 Jul 2022 04:06:00 GMT
- Title: Supervised Machine Learning for Effective Missile Launch Based on Beyond
Visual Range Air Combat Simulations
- Authors: Joao P. A. Dantas, Andre N. Costa, Felipe L. L. Medeiros, Diego
Geraldo, Marcos R. O. A. Maximo and Takashi Yoneyama
- Abstract summary: We use resampling techniques to improve the predictive model, analyzing accuracy, precision, recall, and f1-score.
The models with the best f1-score achieved values of 0.379 and 0.465 without and with the resampling technique, respectively, an increase of 22.69%.
It is possible to develop decision support tools based on machine learning models, which may improve the flight quality in BVR air combat.
- Score: 0.19573380763700707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work compares supervised machine learning methods using reliable data
from constructive simulations to estimate the most effective moment for
launching missiles during air combat. We employed resampling techniques to
improve the predictive model, analyzing accuracy, precision, recall, and
f1-score. Indeed, we could identify the remarkable performance of the models
based on decision trees and the significant sensitivity of other algorithms to
resampling techniques. The models with the best f1-score achieved values of
0.379 and 0.465 without and with the resampling technique, respectively, which
is an increase of 22.69%. Thus, if desirable, resampling techniques can improve
the model's recall and f1-score with a slight decline in accuracy and
precision. Therefore, using data obtained from constructive simulations,
it is possible to develop decision support tools based on machine learning
models, which may improve the flight quality in BVR air combat, increasing the
effectiveness of offensive missions to hit a particular target.
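As a rough, hedged illustration of the pipeline the abstract describes, the sketch below compares accuracy, precision, recall, and f1-score with and without oversampling on a synthetic imbalanced dataset. The synthetic data, the DecisionTreeClassifier, and imbalanced-learn's RandomOverSampler are illustrative assumptions; the paper does not tie itself to these exact components.

```python
# Minimal sketch (not the paper's pipeline): compare classification metrics
# with and without oversampling on an imbalanced toy dataset standing in for
# the launch-effectiveness data produced by the constructive simulations.
from imblearn.over_sampling import RandomOverSampler  # assumed resampling choice
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)  # heavily imbalanced toy data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def evaluate(X_fit, y_fit):
    model = DecisionTreeClassifier(random_state=0).fit(X_fit, y_fit)
    pred = model.predict(X_te)
    return {"accuracy": accuracy_score(y_te, pred),
            "precision": precision_score(y_te, pred),
            "recall": recall_score(y_te, pred),
            "f1": f1_score(y_te, pred)}

baseline = evaluate(X_tr, y_tr)
X_res, y_res = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)
resampled = evaluate(X_res, y_res)
print("no resampling:  ", baseline)
print("with resampling:", resampled)

# Relative f1 gain quoted in the abstract: (0.465 - 0.379) / 0.379 ≈ 0.2269, i.e. ~22.69%.
```

The pattern reported in the abstract, higher recall and f1-score after resampling at a small cost in accuracy and precision, is exactly the trade-off such a comparison is meant to surface.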
Related papers
- Enhancing Object Detection Accuracy in Autonomous Vehicles Using Synthetic Data [0.8267034114134277]
Performance of machine learning models depends on the nature and size of the training data sets.
High-quality, diverse, relevant and representative training data is essential to build accurate and reliable machine learning models.
It is hypothesised that well-designed synthetic data can improve the performance of a machine learning algorithm.
arXiv Detail & Related papers (2024-11-23T16:38:02Z)
- Flying Quadrotors in Tight Formations using Learning-based Model Predictive Control [30.715469693232492]
In this work, we propose a framework that combines the benefits of first-principles modeling and data-driven approaches.
We show that incorporating the model into a novel learning-based model predictive control framework results in substantial performance improvements.
Our framework also achieves exceptional sample efficiency, using only a total of 46 seconds of flight data for training.
arXiv Detail & Related papers (2024-10-13T05:03:16Z)
- A Cost-Aware Approach to Adversarial Robustness in Neural Networks [1.622320874892682]
We propose using accelerated failure time models to measure the effect of hardware choice, batch size, number of epochs, and test-set accuracy.
We evaluate several GPU types and use the Tree Parzen Estimator to maximize model robustness and minimize model run-time simultaneously.
arXiv Detail & Related papers (2024-09-11T20:43:59Z)
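Purely as an illustration of Tree Parzen Estimator-driven multi-objective tuning, and not a reproduction of the hardware study above, the following Optuna sketch trades a robustness metric against run time. The search space, the stand-in objective values, and the assumption that a recent Optuna release's TPESampler handles two-objective studies are all illustrative choices.

```python
# Hypothetical sketch: TPE-driven search that trades off robustness against run time.
import time
import optuna

def objective(trial):
    batch_size = trial.suggest_categorical("batch_size", [32, 64, 128, 256])
    epochs = trial.suggest_int("epochs", 5, 50)
    start = time.time()
    # Placeholder for training plus adversarial evaluation of the model.
    robust_accuracy = 0.5 + 0.001 * epochs - 0.0001 * batch_size  # stand-in metric
    run_time = time.time() - start + 0.01 * epochs * batch_size   # stand-in cost
    return robust_accuracy, run_time

study = optuna.create_study(directions=["maximize", "minimize"],
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)
print(study.best_trials)  # Pareto-optimal robustness/run-time trade-offs
```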
- Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness [52.9493817508055]
We propose Pre-trained Model Guided Adversarial Fine-Tuning (PMG-AFT) to enhance the model's zero-shot adversarial robustness.
Our approach consistently improves clean accuracy by an average of 8.72%.
arXiv Detail & Related papers (2024-01-09T04:33:03Z)
- Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, relying on direct iterative updates for the target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy-guided defense framework, LAST (Learn from the Past).
arXiv Detail & Related papers (2023-10-19T13:13:41Z)
- Utilizing Explainable AI for improving the Performance of Neural Networks [6.670483888835783]
We propose a retraining pipeline that consistently improves the model predictions starting from XAI.
In order to benchmark our method, we evaluate it on both real-life and public datasets.
Experiments using the SHAP-based retraining approach achieve 4% higher accuracy than the standard equal-weight retraining for people counting tasks.
arXiv Detail & Related papers (2022-10-07T09:39:20Z)
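The snippet above does not spell out the retraining pipeline, so the following is only a guess at one way SHAP attributions could drive retraining: per-sample attribution magnitudes become sample weights for the next fit. The weighting rule, the gradient-boosting model, and the synthetic data are assumptions, not the paper's method.

```python
# Speculative sketch: use SHAP attributions to derive sample weights for retraining.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)        # per-sample, per-feature attributions
evidence = np.abs(shap_values).sum(axis=1)    # total attribution magnitude per sample
weights = 1.0 / (evidence + 1e-6)             # assumed rule: emphasize weakly explained samples
weights /= weights.mean()

model_retrained = GradientBoostingClassifier(random_state=0).fit(X, y, sample_weight=weights)
```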
- Robust Trajectory Prediction against Adversarial Attacks [84.10405251683713]
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving systems.
These methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions.
In this work, we identify two key ingredients to defend trajectory prediction models against adversarial attacks.
arXiv Detail & Related papers (2022-07-29T22:35:05Z)
- Data-Efficient Modeling for Precise Power Consumption Estimation of Quadrotor Operations Using Ensemble Learning [3.722516004544342]
Electric Vertical Take-Off and Landing (eVTOL) aircraft are considered the major aircraft type in emerging urban air mobility.
In this study, a framework for power consumption modeling of eVTOL aircraft was established.
We employed an ensemble learning method, namely stacking, to develop a data-driven model using flight records of three different types of quadrotors.
arXiv Detail & Related papers (2022-05-23T02:16:43Z)
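As a generic sketch of the stacking idea mentioned above, and under the assumption of synthetic stand-in data rather than the study's quadrotor flight records, a scikit-learn version might look like this:

```python
# Minimal stacking sketch: combine several regressors through a meta-learner,
# standing in for a power-consumption model trained on flight-record features.
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=1000, n_features=8, noise=5.0, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0))],
    final_estimator=Ridge(),
)
scores = cross_val_score(stack, X, y, cv=5, scoring="r2")
print(scores.mean())
```

By default, StackingRegressor trains the Ridge meta-learner on out-of-fold predictions of the base learners, which is the usual guard against leakage in stacking.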
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model under test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a 'slow start, fast decay' learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
arXiv Detail & Related papers (2020-12-25T20:50:15Z)
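The schedule itself is not specified above, so the PyTorch sketch below is just one plausible reading of 'slow start, fast decay': a short linear warm-up followed by exponential decay. The warm-up length, decay rate, and placeholder model are illustrative assumptions.

```python
# Illustrative "slow start, fast decay" learning-rate schedule via LambdaLR.
import torch

model = torch.nn.Linear(10, 2)                      # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

warmup_epochs, decay_rate = 5, 0.5                  # assumed hyperparameters

def lr_factor(epoch):
    if epoch < warmup_epochs:                       # slow start: linear warm-up
        return (epoch + 1) / warmup_epochs
    return decay_rate ** (epoch - warmup_epochs)    # fast decay: exponential drop

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)

for epoch in range(20):
    # ... adversarial fine-tuning steps would go here ...
    optimizer.step()
    scheduler.step()
```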
- Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning [134.15174177472807]
We introduce adversarial training into self-supervision, to provide general-purpose robust pre-trained models for the first time.
We conduct extensive experiments to demonstrate that the proposed framework achieves large performance margins.
arXiv Detail & Related papers (2020-03-28T18:28:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.