Detecting Cloud-Based Phishing Attacks by Combining Deep Learning Models
- URL: http://arxiv.org/abs/2204.02446v1
- Date: Tue, 5 Apr 2022 18:47:57 GMT
- Title: Detecting Cloud-Based Phishing Attacks by Combining Deep Learning Models
- Authors: Medha Atre, Birendra Jha, Ashwini Rao
- Abstract summary: Web-based phishing attacks nowadays exploit popular cloud web hosting services and apps such as Google Sites and Typeform for hosting their attacks.
Here we investigate the effectiveness of deep learning models in detecting this class of cloud-based phishing attacks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Web-based phishing attacks nowadays exploit popular cloud web hosting
services and apps such as Google Sites and Typeform for hosting their attacks.
Since these attacks originate from reputable domains and IP addresses of the
cloud services, traditional phishing detection methods such as IP reputation
monitoring and blacklisting are not very effective. Here we investigate the
effectiveness of deep learning models in detecting this class of cloud-based
phishing attacks. Specifically, we evaluate deep learning models for three
phishing detection methods: an LSTM model for URL analysis, a YOLOv2 model
for logo analysis, and a triplet network model for visual similarity
analysis. We train
the models using well-known datasets and test their performance on phishing
attacks in the wild. Our results qualitatively explain why the models succeed
or fail. Furthermore, our results highlight how combining results from the
individual models can improve the effectiveness of detecting cloud-based
phishing attacks.
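The abstract names the three detectors but gives no implementation detail. As a rough illustration of the first, below is a minimal character-level LSTM URL classifier in PyTorch; the vocabulary size, layer dimensions, and example URL are assumptions for illustration, not values from the paper.

    # Minimal sketch: character-level LSTM URL classifier (PyTorch).
    # Hyperparameters are illustrative assumptions, not the paper's values.
    import torch
    import torch.nn as nn

    class UrlLstm(nn.Module):
        def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, 1)  # one logit: phishing vs. benign

        def forward(self, x):  # x: (batch, url_len) integer character ids
            _, (h, _) = self.lstm(self.embed(x))
            return self.head(h[-1]).squeeze(-1)

    def encode(url, max_len=200):
        # Map a URL to fixed-length ASCII ids, zero-padded on the right.
        ids = [min(ord(c), 127) for c in url[:max_len]]
        return torch.tensor(ids + [0] * (max_len - len(ids))).unsqueeze(0)

    model = UrlLstm()
    p_phish = torch.sigmoid(model(encode("https://sites.google.com/view/account-verify")))

In the spirit of the abstract's last point, the URL score could then be fused with the logo and visual-similarity scores, e.g. by averaging or by a learned meta-classifier; the abstract does not specify the exact combination rule.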
Related papers
- PhishGuard: A Multi-Layered Ensemble Model for Optimal Phishing Website Detection [0.0]
Phishing attacks are a growing cybersecurity threat, leveraging deceptive techniques to steal sensitive information through malicious websites.
This paper introduces PhishGuard, an optimal custom ensemble model designed to improve phishing site detection.
The model combines multiple machine learning classifiers, including Random Forest, Gradient Boosting, CatBoost, and XGBoost, to enhance detection accuracy.
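PhishGuard's combination scheme is custom and not detailed in the summary; a generic soft-voting baseline over the four named classifiers (feature extraction omitted, library availability assumed) might look like this:

    # Generic soft-voting ensemble over the four classifiers named above;
    # PhishGuard's actual custom combination may differ.
    from sklearn.ensemble import (RandomForestClassifier,
                                  GradientBoostingClassifier, VotingClassifier)
    from xgboost import XGBClassifier
    from catboost import CatBoostClassifier

    ensemble = VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=200)),
            ("gb", GradientBoostingClassifier()),
            ("xgb", XGBClassifier(eval_metric="logloss")),
            ("cat", CatBoostClassifier(verbose=0)),
        ],
        voting="soft",  # average the predicted probabilities
    )
    # ensemble.fit(X_train, y_train) on URL/content features, then
    # ensemble.predict_proba(X_test) for phishing scores.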
arXiv Detail & Related papers (2024-09-29T23:15:57Z)
- From ML to LLM: Evaluating the Robustness of Phishing Webpage Detection Models against Adversarial Attacks [0.8050163120218178]
Phishing attacks attempt to deceive users in order to steal sensitive information.
Current phishing webpage detection solutions are vulnerable to adversarial attacks.
We develop a tool that generates adversarial phishing webpages by embedding diverse phishing features into legitimate webpages.
arXiv Detail & Related papers (2024-07-29T18:21:34Z)
- Evaluating the Effectiveness and Robustness of Visual Similarity-based Phishing Detection Models [10.334870703744498]
We comprehensively scrutinize and evaluate state-of-the-art visual similarity-based anti-phishing models.
Our analysis reveals that while certain models maintain high accuracy, others perform notably worse on real-world data than on curated datasets.
To the best of our knowledge, this work represents the first large-scale, systematic evaluation of visual similarity-based models for phishing detection in real-world settings.
arXiv Detail & Related papers (2024-05-30T01:28:36Z)
- SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models [74.58014281829946]
We analyze the effectiveness of several representative attacks/defenses, including model stealing attacks, membership inference attacks, and backdoor detection on public models.
Our evaluation empirically shows the performance of these attacks/defenses can vary significantly on public models compared to self-trained models.
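For concreteness, one of the simplest membership inference baselines thresholds the model's prediction confidence; this is a standard heuristic, not necessarily the exact attack SecurityNet evaluates:

    import numpy as np

    def infer_membership(model, samples, threshold=0.9):
        # Flag samples as likely training members when the model's top
        # class probability is very high; models tend to be overconfident
        # on data they were trained on. The threshold is an assumption.
        confidence = np.max(model.predict_proba(samples), axis=1)
        return confidence > threshold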
arXiv Detail & Related papers (2023-10-19T11:49:22Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can cause the model to fail to detect any object stamped with our trigger patterns.
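Poison-only backdoors of this kind stamp a small trigger patch onto a fraction of the training images; for the untargeted object-detection variant, the matching box annotations are also dropped so the model learns to ignore stamped objects. A minimal stamping sketch, with patch size and placement as assumptions:

    import numpy as np

    def stamp_trigger(image, patch_size=16):
        # Overwrite the bottom-right corner of an HxWxC uint8 image with
        # a solid white square; size and placement are illustrative.
        poisoned = image.copy()
        poisoned[-patch_size:, -patch_size:, :] = 255
        return poisoned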
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Backdoor Attacks on Crowd Counting [63.90533357815404]
Crowd counting is a regression task that estimates the number of people in a scene image.
In this paper, we investigate the vulnerability of deep learning based crowd counting models to backdoor attacks.
arXiv Detail & Related papers (2022-07-12T16:17:01Z)
- DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection [26.593268413299228]
Federated Learning (FL) allows multiple clients to collaboratively train a Neural Network (NN) model on their private data without revealing the data.
DeepSight is a novel model filtering approach for mitigating backdoor attacks.
We show that it can mitigate state-of-the-art backdoor attacks with a negligible impact on the model's performance on benign data.
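DeepSight's full inspection pipeline is more involved than the summary suggests; as a simplified flavor of update filtering (an illustrative heuristic, not DeepSight's actual algorithm), one can drop client updates that deviate too far from the element-wise median update:

    import numpy as np

    def filter_updates(updates, min_similarity=0.2):
        # Keep flattened client parameter-delta vectors whose cosine
        # similarity to the median update exceeds a threshold; the
        # threshold is an assumption.
        median = np.median(np.stack(updates), axis=0)
        def cos(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
        return [u for u in updates if cos(u, median) > min_similarity]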
arXiv Detail & Related papers (2022-01-03T17:10:07Z)
- Detecting Phishing Sites -- An Overview [0.0]
Phishing is one of the most severe cyber-attacks, and researchers continue to search for effective solutions.
To minimize the damage it causes, phishing must be detected as early as possible.
There are various phishing detection techniques based on white-lists, black-lists, content analysis, URL analysis, visual similarity, and machine learning.
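The list-based techniques reduce to a domain lookup; a minimal sketch with placeholder list entries:

    from urllib.parse import urlparse

    BLACKLIST = {"evil.example.com"}   # placeholder entries
    WHITELIST = {"bank.example.com"}

    def classify(url):
        host = urlparse(url).hostname or ""
        if host in BLACKLIST:
            return "phishing"
        if host in WHITELIST:
            return "benign"
        return "unknown"  # defer to content/visual/ML-based checks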
arXiv Detail & Related papers (2021-03-23T19:16:03Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacking aims to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
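Trigger reverse-engineering typically optimizes, per candidate label, a mask and pattern that force misclassification toward that label, then flags labels whose recovered trigger is anomalously small. A heavily simplified PyTorch sketch follows; the loss weight and step count are assumptions, and the paper's label-count-independent, universal measure is not reproduced here:

    import torch
    import torch.nn.functional as F

    def reverse_engineer_trigger(model, images, target, steps=100, lam=0.01):
        # Optimize a mask+pattern that pushes `images` toward class
        # `target`; an unusually small final mask suggests a backdoor.
        mask = torch.zeros_like(images[:1]).requires_grad_(True)
        pattern = torch.rand_like(images[:1]).requires_grad_(True)
        opt = torch.optim.Adam([mask, pattern], lr=0.1)
        labels = torch.full((len(images),), target)
        for _ in range(steps):
            m = torch.sigmoid(mask)                    # keep mask in [0, 1]
            stamped = (1 - m) * images + m * pattern
            loss = F.cross_entropy(model(stamped), labels) + lam * m.abs().sum()
            opt.zero_grad(); loss.backward(); opt.step()
        return torch.sigmoid(mask).detach()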
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
- Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)