Detecting Cloud-Based Phishing Attacks by Combining Deep Learning Models
- URL: http://arxiv.org/abs/2204.02446v1
- Date: Tue, 5 Apr 2022 18:47:57 GMT
- Title: Detecting Cloud-Based Phishing Attacks by Combining Deep Learning Models
- Authors: Medha Atre, Birendra Jha, Ashwini Rao
- Abstract summary: Web-based phishing attacks nowadays exploit popular cloud web hosting services and apps such as Google Sites and Typeform for hosting their attacks.
Here we investigate the effectiveness of deep learning models in detecting this class of cloud-based phishing attacks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Web-based phishing attacks nowadays exploit popular cloud web hosting
services and apps such as Google Sites and Typeform for hosting their attacks.
Since these attacks originate from reputable domains and IP addresses of the
cloud services, traditional phishing detection methods such as IP reputation
monitoring and blacklisting are not very effective. Here we investigate the
effectiveness of deep learning models in detecting this class of cloud-based
phishing attacks. Specifically, we evaluate deep learning models for three
phishing detection methods: an LSTM model for URL analysis, a YOLOv2 model for logo analysis, and a triplet network model for visual similarity analysis. We train
the models using well-known datasets and test their performance on phishing
attacks in the wild. Our results qualitatively explain why the models succeed
or fail. Furthermore, our results highlight how combining results from the
individual models can improve the effectiveness of detecting cloud-based
phishing attacks.
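The abstract's central claim is that combining the outputs of the individual detectors improves overall effectiveness. A minimal, hypothetical sketch of such decision-level fusion is below; the scores, threshold, and majority-vote rule are illustrative assumptions, not the paper's actual combination method.

```python
# Hedged sketch of decision-level fusion for three hypothetical phishing
# detectors (URL, logo, visual similarity). Scores and the 0.5 threshold
# are made-up values for illustration only.

def combine_detectors(url_score, logo_score, visual_score, threshold=0.5):
    """Flag a page as phishing if a majority of detectors fire."""
    votes = sum(score >= threshold
                for score in (url_score, logo_score, visual_score))
    return votes >= 2

# A page whose URL looks benign (e.g. hosted on a reputable cloud domain)
# can still be flagged when the logo and visual-similarity models agree.
print(combine_detectors(0.2, 0.9, 0.8))  # two detectors fire -> True
print(combine_detectors(0.2, 0.1, 0.9))  # only one fires -> False
```

This kind of fusion is one way a benign-looking cloud-hosted URL can be caught by the content-based models even when URL analysis alone fails.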
Related papers
- Web Phishing Net (WPN): A scalable machine learning approach for real-time phishing campaign detection [0.0]
Phishing is the most prevalent type of cyber-attack today and is recognized as the leading source of data breaches.
In this paper, we propose an unsupervised learning approach that is both fast and scalable.
It is able to detect entire campaigns at a time with a high detection rate while preserving user privacy.
arXiv Detail & Related papers (2025-02-17T15:06:56Z)
- Adapting to Cyber Threats: A Phishing Evolution Network (PEN) Framework for Phishing Generation and Analyzing Evolution Patterns using Large Language Models [10.58220151364159]
Phishing remains a pervasive cyber threat, as attackers craft deceptive emails to lure victims into revealing sensitive information.
While Artificial Intelligence (AI) has become a key component in defending against phishing attacks, these approaches face critical limitations.
We propose the Phishing Evolution Network (PEN), a framework leveraging large language models (LLMs) and adversarial training mechanisms to continuously generate high-quality, realistic, and diverse phishing samples.
arXiv Detail & Related papers (2024-11-18T09:03:51Z)
- PhishGuard: A Multi-Layered Ensemble Model for Optimal Phishing Website Detection [0.0]
Phishing attacks are a growing cybersecurity threat, leveraging deceptive techniques to steal sensitive information through malicious websites.
This paper introduces PhishGuard, an optimal custom ensemble model designed to improve phishing site detection.
The model combines multiple machine learning classifiers, including Random Forest, Gradient Boosting, CatBoost, and XGBoost, to enhance detection accuracy.
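An ensemble of classifiers like the one described above is often combined by averaging per-model phishing probabilities (soft voting). The sketch below illustrates that generic idea with made-up probabilities; it is not PhishGuard's actual weighting scheme.

```python
# Illustrative soft-voting combination in the spirit of a classifier
# ensemble (e.g. Random Forest, Gradient Boosting, CatBoost, XGBoost).
# The per-model probabilities below are invented for illustration.

def soft_vote(probs):
    """Average per-classifier phishing probabilities; flag if >= 0.5."""
    avg = sum(probs) / len(probs)
    return avg, avg >= 0.5

avg, is_phish = soft_vote([0.7, 0.6, 0.4, 0.8])
print(round(avg, 3), is_phish)  # 0.625 True
```

Soft voting lets a confident minority classifier pull the ensemble's score, whereas hard (majority) voting discards each model's confidence.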
arXiv Detail & Related papers (2024-09-29T23:15:57Z)
- Evaluating the Effectiveness and Robustness of Visual Similarity-based Phishing Detection Models [10.334870703744498]
Phishing attacks elaborately replicate the visual appearance of legitimate websites to deceive victims.
Visual similarity-based detection systems have emerged as an effective countermeasure, but their effectiveness and robustness in real-world scenarios have been underexplored.
We comprehensively evaluate the effectiveness and robustness of popular visual similarity-based anti-phishing models using a large-scale dataset of 451k real-world phishing websites.
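Visual similarity models of this kind, including the triplet network mentioned in the main paper, are typically trained with a triplet loss that pulls an anchor embedding toward a same-brand positive and pushes it away from a different-brand negative. A minimal sketch with toy 2-D embeddings (the vectors and margin are illustrative assumptions):

```python
# Minimal sketch of the triplet loss behind visual-similarity phishing
# detection: L(a, p, n) = max(d(a, p) - d(a, n) + margin, 0).
# The embeddings here are toy values, not real screenshot features.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

anchor   = [0.0, 0.0]   # embedding of a page screenshot
positive = [0.1, 0.0]   # same brand, close by
negative = [2.0, 0.0]   # different brand, far away
print(triplet_loss(anchor, positive, negative))  # 0.0: already separated
```

At inference time, a candidate page whose embedding lies within the margin of a known brand's embedding, while its domain does not belong to that brand, is a phishing suspect.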
arXiv Detail & Related papers (2024-05-30T01:28:36Z)
- SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models [74.58014281829946]
We analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection on public models.
Our evaluation empirically shows the performance of these attacks/defenses can vary significantly on public models compared to self-trained models.
arXiv Detail & Related papers (2023-10-19T11:49:22Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Backdoor Attacks on Crowd Counting [63.90533357815404]
Crowd counting is a regression task that estimates the number of people in a scene image.
In this paper, we investigate the vulnerability of deep learning based crowd counting models to backdoor attacks.
arXiv Detail & Related papers (2022-07-12T16:17:01Z)
- DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection [26.593268413299228]
Federated Learning (FL) allows multiple clients to collaboratively train a Neural Network (NN) model on their private data without revealing the data.
DeepSight is a novel model filtering approach for mitigating backdoor attacks.
We show that it can mitigate state-of-the-art backdoor attacks with a negligible impact on the model's performance on benign data.
arXiv Detail & Related papers (2022-01-03T17:10:07Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacking aims to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which improves on the current state-of-the-art method.
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.