Phishing Attacks Detection -- A Machine Learning-Based Approach
- URL: http://arxiv.org/abs/2201.10752v1
- Date: Wed, 26 Jan 2022 05:08:27 GMT
- Title: Phishing Attacks Detection -- A Machine Learning-Based Approach
- Authors: Fatima Salahdine, Zakaria El Mrabet, Naima Kaabouch
- Abstract summary: Phishing attacks are one of the most common social engineering attacks targeting users' emails to fraudulently steal confidential and sensitive information.
In this paper, we propose a phishing attack detection technique based on machine learning.
We collected and analyzed more than 4000 phishing emails targeting the email service of the University of North Dakota.
- Score: 0.6445605125467573
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Phishing attacks are one of the most common social engineering attacks
targeting users' emails to fraudulently steal confidential and sensitive
information. They can be used as a part of more massive attacks launched to
gain a foothold in corporate or government networks. Over the last decade, a
number of anti-phishing techniques have been proposed to detect and mitigate
these attacks. However, they are still inefficient and inaccurate. Thus, there
is a great need for efficient and accurate detection techniques to cope with
these attacks. In this paper, we propose a phishing attack detection technique
based on machine learning. We collected and analyzed more than 4000 phishing
emails targeting the email service of the University of North Dakota. We
modeled these attacks by selecting 10 relevant features and building a large
dataset. This dataset was used to train, validate, and test the machine
learning algorithms. For performance evaluation, four metrics have been used,
namely probability of detection, probability of misdetection, probability of
false alarm, and accuracy. The experimental results show that better detection
can be achieved using an artificial neural network.
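The four metrics named above can be computed from a binary confusion matrix. A minimal sketch follows; the formulas are the standard definitions and the counts are hypothetical examples, not figures from the paper:

```python
# Sketch: computing the four evaluation metrics named in the abstract from a
# binary confusion matrix, with phishing as the positive class. The exact
# definitions used in the paper are assumed to be the standard ones.

def detection_metrics(tp, fn, fp, tn):
    """tp/fn/fp/tn: counts from testing a classifier on labeled emails."""
    p_detection = tp / (tp + fn)            # phishing emails correctly flagged
    p_misdetection = fn / (tp + fn)         # phishing emails that slipped through
    p_false_alarm = fp / (fp + tn)          # legitimate emails wrongly flagged
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return p_detection, p_misdetection, p_false_alarm, accuracy

# Hypothetical counts, for illustration only
pd, pmd, pfa, acc = detection_metrics(tp=380, fn=20, fp=10, tn=590)
print(pd, pmd, pfa, acc)  # 0.95 0.05 0.0166... 0.97
```

Note that probability of detection and probability of misdetection are complementary (they sum to 1), so in practice a classifier is tuned by trading off detection probability against false-alarm probability.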
Related papers
- Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy [65.80757820884476]
We expose a critical yet underexplored vulnerability in the deployment of unlearning systems.
We present a threat model where an attacker can degrade model accuracy by submitting adversarial unlearning requests for data not present in the training set.
We evaluate various verification mechanisms to detect the legitimacy of unlearning requests and reveal the challenges in verification.
arXiv Detail & Related papers (2024-10-12T16:47:04Z)
- EaTVul: ChatGPT-based Evasion Attack Against Software Vulnerability Detection [19.885698402507145]
Adversarial examples can exploit vulnerabilities within deep neural networks.
This study showcases the susceptibility of deep learning models to adversarial attacks, which can achieve 100% attack success rate.
arXiv Detail & Related papers (2024-07-27T09:04:54Z)
- A Sophisticated Framework for the Accurate Detection of Phishing Websites [0.0]
Phishing is an increasingly sophisticated form of cyberattack that is inflicting huge financial damage to corporations throughout the globe.
This paper proposes a comprehensive methodology for detecting phishing websites.
A combination of feature selection, greedy algorithm, cross-validation, and deep learning methods have been utilized to construct a sophisticated stacking ensemble.
arXiv Detail & Related papers (2024-03-13T14:26:25Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Profiler: Profile-Based Model to Detect Phishing Emails [15.109679047753355]
We propose a multidimensional risk assessment of emails to reduce the feasibility of an attacker adapting their email and avoiding detection.
We develop a risk assessment framework that includes three models which analyse an email's (1) threat level, (2) cognitive manipulation, and (3) email type.
Our Profiler can be used in conjunction with ML approaches, to reduce their misclassifications or as a labeller for large email data sets in the training stage.
arXiv Detail & Related papers (2022-08-18T10:01:55Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- An Adversarial Attack Analysis on Malicious Advertisement URL Detection Framework [22.259444589459513]
Malicious advertisement URLs pose a security risk since they are the source of cyber-attacks.
Existing malicious URL detection techniques are limited in their ability to handle unseen features and to generalize to test data.
In this study, we extract a novel set of lexical and web-scrapped features and employ machine learning technique to set up system for fraudulent advertisement URLs detection.
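Lexical URL features of the kind mentioned above can be sketched as follows; the specific features here are hypothetical illustrations, not the paper's actual feature set:

```python
from urllib.parse import urlparse

# Sketch: lexical feature extraction for URL classification. The features
# below are hypothetical examples of lexical URL features, not the novel
# feature set described in the paper.
def lexical_features(url: str) -> dict:
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "host_length": len(host),
        "num_dots": host.count("."),             # many subdomains can be suspicious
        "num_hyphens": host.count("-"),
        "num_digits": sum(ch.isdigit() for ch in url),
        "has_at_symbol": "@" in url,             # '@' can hide the real host
        "uses_https": parsed.scheme == "https",
        "path_depth": len([p for p in parsed.path.split("/") if p]),
    }

feats = lexical_features("http://ad-click.example.com/promo/win?id=123")
```

A feature dictionary like this would then be vectorized and fed to an ordinary supervised classifier.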
arXiv Detail & Related papers (2022-04-27T20:06:22Z)
- RAIDER: Reinforcement-aided Spear Phishing Detector [13.341666826984554]
Spear Phishing is a harmful cyber-attack facing business and individuals worldwide.
ML-based solutions may suffer from zero-day attacks: unseen attacks unaccounted for in the training data.
We propose RAIDER: Reinforcement AIded Spear Phishing DEtectoR.
arXiv Detail & Related papers (2021-05-17T02:42:54Z)
- A Targeted Attack on Black-Box Neural Machine Translation with Parallel Data Poisoning [60.826628282900955]
We show that targeted attacks on black-box NMT systems are feasible, based on poisoning a small fraction of their parallel training data.
We show that this attack can be realised practically via targeted corruption of web documents crawled to form the system's training data.
Our results are alarming: even on the state-of-the-art systems trained with massive parallel data, the attacks are still successful (over 50% success rate) under surprisingly low poisoning budgets.
arXiv Detail & Related papers (2020-11-02T01:52:46Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the fPAD task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
- Weight Poisoning Attacks on Pre-trained Models [103.19413805873585]
We show that it is possible to construct "weight poisoning" attacks where pre-trained weights are injected with vulnerabilities that expose "backdoors" after fine-tuning.
Our experiments on sentiment classification, toxicity detection, and spam detection show that this attack is widely applicable and poses a serious threat.
arXiv Detail & Related papers (2020-04-14T16:51:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.