Threat Detection for General Social Engineering Attack Using Machine
Learning Techniques
- URL: http://arxiv.org/abs/2203.07933v2
- Date: Thu, 17 Mar 2022 00:01:19 GMT
- Title: Threat Detection for General Social Engineering Attack Using Machine
Learning Techniques
- Authors: Zuoguang Wang, Yimo Ren, Hongsong Zhu, Limin Sun
- Abstract summary: This paper explores threat detection for general Social Engineering (SE) attacks using Machine Learning (ML) techniques.
The experimental results and analyses show that: 1) ML techniques are feasible for detecting general SE attacks and some ML models are quite effective; ML-based SE threat detection is complementary to KG-based approaches.
- Score: 7.553860996595933
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper explores threat detection for general Social Engineering (SE)
attacks using Machine Learning (ML) techniques, rather than focusing on, or being
limited to, a specific SE attack type, e.g. email phishing. Firstly, this paper
processes and obtains more SE threat data from the previously built Knowledge Graph
(KG), then extracts different threat features and generates new datasets
corresponding to three different feature combinations. Finally, 9 types of ML
models are created and trained on the three datasets, respectively, and
their performance is compared and analyzed across 27 threat detectors and 270
experiments. The experimental results and analyses show that: 1) ML
techniques are feasible for detecting general SE attacks and some ML models
are quite effective; ML-based SE threat detection is complementary to
KG-based approaches; 2) the generated datasets are usable and the SE domain
ontology proposed in previous work can dissect SE attacks and deliver the SE
threat features, allowing it to be used as a data model for future research.
In addition, further conclusions and analyses about the characteristics of different
ML detectors and the datasets are discussed.
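To make the evaluation setup above concrete, a minimal sketch of training several classifier families on datasets built from different feature combinations and comparing them might look like the following; the model choices, feature columns, and data are placeholders, not the paper's actual 9 model types or KG-derived threat features.

```python
# Hypothetical sketch: train several classifier families on datasets built from
# different feature combinations and compare them, in the spirit of the paper's
# models x datasets evaluation. Feature names and data are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Placeholder threat data: rows = SE attack / benign scenarios, columns = threat
# features extracted from a knowledge graph (e.g. actor, method, medium encodings).
X_full = rng.random((500, 12))
y = rng.integers(0, 2, 500)          # 1 = SE threat, 0 = benign

# Three feature combinations -> three datasets (column groups are illustrative).
datasets = {
    "combo_A": X_full[:, :4],
    "combo_B": X_full[:, :8],
    "combo_C": X_full,
}

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "dtree": DecisionTreeClassifier(),
    "rforest": RandomForestClassifier(n_estimators=100),
    "svm": SVC(),
    "knn": KNeighborsClassifier(),
    "nbayes": GaussianNB(),
}

# Each (model, dataset) pair is one "threat detector"; repeated CV folds play the
# role of the paper's repeated experiments.
for dname, X in datasets.items():
    for mname, model in models.items():
        scores = cross_val_score(model, X, y, cv=10, scoring="f1")
        print(f"{mname} on {dname}: F1 = {scores.mean():.3f} +/- {scores.std():.3f}")
```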
Related papers
- Investigating Adversarial Attacks in Software Analytics via Machine Learning Explainability [11.16693333878553]
This study investigates the relationship between ML explainability and adversarial attacks to measure the robustness of ML models in software analytics tasks.
Our experiments, involving six datasets, three ML explainability techniques, and seven ML models, demonstrate that ML explainability can be used to conduct successful adversarial attacks on ML models in software analytics tasks.
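As a rough, hypothetical illustration of that idea (the paper's actual explainers, models, and software-analytics data are not reproduced here), one could use a global importance ranking to decide which features to perturb in an evasion attempt:

```python
# Illustrative sketch only: use an explainability signal (permutation importance)
# to choose which features to perturb when crafting evasion samples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Explainability step: rank features by how much shuffling them hurts accuracy.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:3]

# Attack step: push the most influential features of each test point by a fixed
# offset and count how many predictions flip.
X_adv = X_te.copy()
X_adv[:, top] += 2.0
flipped = (model.predict(X_te) != model.predict(X_adv)).mean()
print(f"Prediction flip rate after perturbing top-{len(top)} features: {flipped:.2%}")
```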
arXiv Detail & Related papers (2024-08-07T23:21:55Z)
- Impacts of Data Preprocessing and Hyperparameter Optimization on the Performance of Machine Learning Models Applied to Intrusion Detection Systems [0.8388591755871736]
Intrusion Detection Systems (IDS) have been continuously improved.
Many of them incorporate machine learning (ML) techniques to identify threats.
This article presents a study that fills this gap by examining how data preprocessing and hyperparameter optimization affect the performance of ML-based IDS.
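A minimal sketch of the kind of experiment such a study runs, pairing a preprocessing step with hyperparameter search for an ML-based detector, is shown below; the data, model, and parameter grid are illustrative assumptions, not the article's setup.

```python
# Hypothetical sketch: couple a preprocessing step with hyperparameter search and
# measure the effect on an intrusion-detection classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Imbalanced, IDS-like toy data (most traffic is benign).
X, y = make_classification(n_samples=2000, n_features=30, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),              # preprocessing under study
    ("clf", RandomForestClassifier(random_state=0)),
])
grid = {
    "clf__n_estimators": [100, 300],
    "clf__max_depth": [None, 10, 20],
}
search = GridSearchCV(pipe, grid, scoring="f1", cv=5).fit(X_tr, y_tr)
print("best params:", search.best_params_)
print("test F1:", search.score(X_te, y_te))
```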
arXiv Detail & Related papers (2024-07-15T14:30:25Z)
- Attack Tree Analysis for Adversarial Evasion Attacks [1.0442919217572477]
It is necessary to analyze the risk of ML-specific attacks in introducing ML base systems.
In this study, we propose a quantitative evaluation method for analyzing the risk of evasion attacks using attack trees.
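As an illustrative sketch of quantitative attack-tree evaluation (the scoring rule below is a common textbook choice, not necessarily the paper's metric), leaves can carry success probabilities that AND/OR nodes combine:

```python
# Minimal attack-tree risk sketch: leaves carry success probabilities, OR nodes
# take the max (attacker picks the easiest path), AND nodes multiply
# (all steps are needed).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    kind: str = "LEAF"          # "LEAF", "AND", or "OR"
    prob: float = 0.0           # used for leaves only
    children: List["Node"] = field(default_factory=list)

    def risk(self) -> float:
        if self.kind == "LEAF":
            return self.prob
        child_risks = [c.risk() for c in self.children]
        if self.kind == "AND":
            out = 1.0
            for r in child_risks:
                out *= r
            return out
        return max(child_risks)  # OR node

# Toy evasion-attack tree: the attacker needs model access AND a working
# perturbation; access comes via either a query API or a surrogate model.
tree = Node("evade ML detector", "AND", children=[
    Node("obtain model access", "OR", children=[
        Node("query prediction API", prob=0.8),
        Node("train surrogate model", prob=0.5),
    ]),
    Node("craft adversarial input", prob=0.6),
])
print(f"estimated attack success probability: {tree.risk():.2f}")
```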
arXiv Detail & Related papers (2023-12-28T11:02:37Z)
- The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessments of MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
arXiv Detail & Related papers (2023-03-13T21:21:49Z)
- Benchmarking Machine Learning Robustness in Covid-19 Genome Sequence Classification [109.81283748940696]
We introduce several ways to perturb SARS-CoV-2 genome sequences to mimic the error profiles of common sequencing platforms such as Illumina and PacBio.
We show that some simulation-based approaches are more robust (and accurate) than others for specific embedding methods against certain adversarial attacks on the input sequences.
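A toy sketch of such sequence perturbation, with random substitutions and indels at platform-dependent rates, might look like the following; the actual Illumina/PacBio error profiles used in the paper are more detailed, and the rates below are illustrative only.

```python
# Rough sketch: perturb a genome sequence with random substitutions and short
# indels, mimicking sequencing-error profiles at a coarse level.
import random

BASES = "ACGT"

def perturb(seq: str, sub_rate: float, indel_rate: float, rng: random.Random) -> str:
    out = []
    for base in seq:
        r = rng.random()
        if r < sub_rate:                       # substitution error
            out.append(rng.choice([b for b in BASES if b != base]))
        elif r < sub_rate + indel_rate / 2:    # deletion
            continue
        else:
            out.append(base)
            if rng.random() < indel_rate / 2:  # insertion after this base
                out.append(rng.choice(BASES))
    return "".join(out)

rng = random.Random(0)
reference = "".join(rng.choice(BASES) for _ in range(60))
print("original     :", reference)
# Illustrative rates only: short-read platforms lean toward substitutions,
# long-read platforms toward indels.
print("illumina-like:", perturb(reference, sub_rate=0.01, indel_rate=0.001, rng=rng))
print("pacbio-like  :", perturb(reference, sub_rate=0.01, indel_rate=0.05, rng=rng))
```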
arXiv Detail & Related papers (2022-07-18T19:16:56Z)
- Threat Assessment in Machine Learning based Systems [12.031113181911627]
We conduct an empirical study of threats reported against Machine Learning-based systems.
The study is based on 89 real-world ML attack scenarios from MITRE's ATLAS database, the AI Incident Database, and the literature.
Results show that convolutional neural networks were one of the most targeted models among the attack scenarios.
arXiv Detail & Related papers (2022-06-30T20:19:50Z)
- CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models [61.68061613161187]
This paper presents CARLA-GeAR, a tool for the automatic generation of synthetic datasets for evaluating the robustness of neural models against physical adversarial patches.
The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving.
The paper presents an experimental study to evaluate the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world.
arXiv Detail & Related papers (2022-06-09T09:17:38Z)
- NetSentry: A Deep Learning Approach to Detecting Incipient Large-scale Network Attacks [9.194664029847019]
We show how to use Machine Learning for Network Intrusion Detection (NID) in a principled way.
We propose NetSentry, perhaps the first NIDS of its kind, which builds on Bi-ALSTM, an original ensemble of sequential neural models.
We demonstrate F1 score gains above 33% over the state of the art, as well as up to 3 times higher rates of detecting attacks such as XSS and web brute-force.
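As a very loose sketch of the sequential-model idea (the actual Bi-ALSTM ensemble, features, and training setup are the paper's own; shapes and hyperparameters below are invented), a bidirectional LSTM flow classifier in PyTorch could look like:

```python
# Loose sketch of a bidirectional LSTM flow classifier, gesturing at the
# sequential-model idea; not the paper's Bi-ALSTM ensemble.
import torch
import torch.nn as nn

class BiLSTMDetector(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # classify from the last time step

model = BiLSTMDetector(n_features=20)
flows = torch.randn(8, 50, 20)             # 8 flows, 50 packets, 20 features each
logits = model(flows)
print(logits.shape)                        # torch.Size([8, 2])
```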
arXiv Detail & Related papers (2022-02-20T17:41:02Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on ML-Doctor, a modular, reusable software framework that enables ML model owners to assess the risks of deploying their models.
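For a flavor of one of the four attack families, a minimal confidence-threshold membership inference sketch is shown below; ML-Doctor's implementations are far more thorough, and the data, model, and threshold here are illustrative assumptions.

```python
# Minimal membership-inference heuristic: points the model is very confident
# about are guessed to be training members; overfitting makes this gap exploitable.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

# The target model is trained only on the "member" half.
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_member, y_member)

def max_confidence(model, X):
    return model.predict_proba(X).max(axis=1)

threshold = 0.9  # illustrative cut-off
member_hits = (max_confidence(target, X_member) >= threshold).mean()
nonmember_hits = (max_confidence(target, X_nonmember) >= threshold).mean()
print(f"guessed 'member' for {member_hits:.2%} of members, "
      f"{nonmember_hits:.2%} of non-members")
```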
arXiv Detail & Related papers (2021-02-04T11:35:13Z)