From ML to LLM: Evaluating the Robustness of Phishing Webpage Detection Models against Adversarial Attacks
- URL: http://arxiv.org/abs/2407.20361v3
- Date: Sat, 15 Mar 2025 11:39:42 GMT
- Title: From ML to LLM: Evaluating the Robustness of Phishing Webpage Detection Models against Adversarial Attacks
- Authors: Aditya Kulkarni, Vivek Balachandran, Dinil Mon Divakaran, Tamal Das
- Abstract summary: Phishing attacks attempt to deceive users in order to steal sensitive information. Current detection solutions remain vulnerable to adversarial attacks. We develop a tool that generates adversarial phishing webpages by embedding diverse phishing features into legitimate webpages.
- Score: 0.8050163120218178
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Phishing attacks attempt to deceive users in order to steal sensitive information, posing a significant cybersecurity threat. Advances in machine learning (ML) and deep learning (DL) have led to the development of numerous phishing webpage detection solutions, but these models remain vulnerable to adversarial attacks. Evaluating their robustness against adversarial phishing webpages is essential. Existing tools contain datasets of pre-designed phishing webpages for a limited number of brands and lack diversity in phishing features. To address these challenges, we develop PhishOracle, a tool that generates adversarial phishing webpages by embedding diverse phishing features into legitimate webpages. We evaluate the robustness of three existing task-specific models -- Stack model, VisualPhishNet, and Phishpedia -- against PhishOracle-generated adversarial phishing webpages and observe a significant drop in their detection rates. In contrast, a multimodal large language model (MLLM)-based phishing detector demonstrates stronger robustness against these adversarial attacks but is still prone to evasion. Our findings highlight the vulnerability of phishing detection models to adversarial attacks, emphasizing the need for more robust detection approaches. Furthermore, we conduct a user study to evaluate whether PhishOracle-generated adversarial phishing webpages can deceive users. The results show that many of these phishing webpages evade not only existing detection models but also users. We also develop the PhishOracle web app, which allows users to input a legitimate URL, select relevant phishing features, and generate a corresponding phishing webpage. All resources will be made publicly available on GitHub.
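As a rough illustration of the evaluation setup described in the abstract (not PhishOracle's actual implementation), the sketch below treats a detector as a black-box function over HTML and measures how its detection rate changes when hypothetical phishing-feature injectors are embedded into legitimate pages; all names are illustrative stand-ins.

```python
# Minimal sketch, assuming a black-box detector and hypothetical feature
# injectors; this is NOT PhishOracle's actual code.
from typing import Callable, List, Tuple

Detector = Callable[[str], bool]   # HTML -> True if the page is flagged as phishing
Injector = Callable[[str], str]    # HTML -> HTML with one phishing feature embedded

def make_adversarial(legit_html: str, injectors: List[Injector]) -> str:
    """Embed each selected phishing feature into a legitimate webpage."""
    for inject in injectors:
        legit_html = inject(legit_html)
    return legit_html

def detection_rate(detector: Detector, pages: List[str]) -> float:
    """Fraction of pages the detector flags as phishing."""
    return sum(detector(p) for p in pages) / len(pages)

def evaluate_robustness(detector: Detector,
                        baseline_phishing: List[str],
                        legit_pages: List[str],
                        injectors: List[Injector]) -> Tuple[float, float]:
    """Compare detection on ordinary phishing pages vs. adversarial pages
    generated from legitimate ones; a large gap indicates low robustness."""
    adversarial = [make_adversarial(p, injectors) for p in legit_pages]
    return (detection_rate(detector, baseline_phishing),
            detection_rate(detector, adversarial))
```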
Related papers
- Web Phishing Net (WPN): A scalable machine learning approach for real-time phishing campaign detection [0.0]
Phishing is the most prevalent type of cyber-attack today and is recognized as the leading source of data breaches.
In this paper, we propose an unsupervised learning approach that is both fast and scalable.
It can detect entire campaigns at a time with a high detection rate while preserving user privacy.
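The abstract gives no implementation detail, but a minimal sketch of how unsupervised campaign grouping could look, using simple lexical URL features and DBSCAN from scikit-learn as stand-ins rather than the actual WPN pipeline:

```python
# Illustrative only: cluster URLs into candidate campaigns using a few
# simple lexical features and DBSCAN; WPN's real features and algorithm may differ.
from urllib.parse import urlparse
import numpy as np
from sklearn.cluster import DBSCAN

def url_features(url: str) -> list:
    p = urlparse(url)
    return [len(url), len(p.netloc), p.netloc.count("."),
            p.path.count("/"), int("-" in p.netloc), int(p.scheme == "https")]

def cluster_campaigns(urls: list) -> dict:
    X = np.array([url_features(u) for u in urls], dtype=float)
    labels = DBSCAN(eps=2.0, min_samples=3).fit_predict(X)
    campaigns = {}
    for url, label in zip(urls, labels):
        campaigns.setdefault(int(label), []).append(url)  # label -1 = unclustered noise
    return campaigns
```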
arXiv Detail & Related papers (2025-02-17T15:06:56Z) - Adapting to Cyber Threats: A Phishing Evolution Network (PEN) Framework for Phishing Generation and Analyzing Evolution Patterns using Large Language Models [10.58220151364159]
Phishing remains a pervasive cyber threat, as attackers craft deceptive emails to lure victims into revealing sensitive information.
While Artificial Intelligence (AI) has become a key component in defending against phishing attacks, these approaches face critical limitations.
We propose the Phishing Evolution Network (PEN), a framework leveraging large language models (LLMs) and adversarial training mechanisms to continuously generate high-quality, realistic, and diverse phishing samples.
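At a high level, PEN pairs an LLM generator with adversarial training of the detector; the following sketch captures that loop with a stand-in `generate_phishing_sample` callable and a scikit-learn classifier, and is not the authors' implementation.

```python
# Sketch of an LLM-in-the-loop augmentation/retraining cycle (illustrative).
# `generate_phishing_sample` and `featurize` are hypothetical stand-ins.
from typing import Callable, List
from sklearn.linear_model import LogisticRegression

def adversarial_training_round(generate_phishing_sample: Callable[[str], str],
                               featurize: Callable[[str], List[float]],
                               seed_prompts: List[str],
                               texts: List[str],
                               labels: List[int]):
    # 1) Generate new, diverse phishing samples from seed prompts.
    new_samples = [generate_phishing_sample(p) for p in seed_prompts]
    # 2) Add them to the training pool as positives (label 1 = phishing).
    texts = texts + new_samples
    labels = labels + [1] * len(new_samples)
    # 3) Retrain the detector on the augmented pool.
    X = [featurize(t) for t in texts]
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    return clf, texts, labels
```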
arXiv Detail & Related papers (2024-11-18T09:03:51Z) - MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
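A condensed sketch of the masking-and-reconstruction idea (not the actual MASKDROID architecture): mask part of the node features, encode the graph with a simple GCN-style layer, and train with a reconstruction loss alongside the classification loss; layer sizes and the masking scheme here are assumptions.

```python
# Illustrative PyTorch sketch of masked graph representation learning;
# this is not MASKDROID's code.
import torch
import torch.nn as nn

class MaskedGraphEncoder(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.gcn = nn.Linear(in_dim, hid_dim)        # applied after A_hat @ X
        self.decoder = nn.Linear(hid_dim, in_dim)    # reconstructs node features
        self.classifier = nn.Linear(hid_dim, 2)      # benign vs. malware

    def forward(self, x, a_hat, mask_ratio: float = 0.3):
        # Randomly mask a fraction of node features.
        mask = (torch.rand(x.size(0), 1) < mask_ratio).float()
        x_masked = x * (1.0 - mask)
        h = torch.relu(self.gcn(a_hat @ x_masked))   # simple GCN-style propagation
        x_rec = self.decoder(h)
        logits = self.classifier(h.mean(dim=0))      # graph-level prediction
        # Reconstructing masked nodes encourages stable, semantics-aware features.
        rec_loss = ((x_rec - x) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
        return logits, rec_loss
```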
arXiv Detail & Related papers (2024-09-29T07:22:47Z) - NoPhish: Efficient Chrome Extension for Phishing Detection Using Machine Learning Techniques [0.0]
"NoPhish" shall identify a phishing webpage based on several Machine Learning techniques.
We have used the training dataset from "PhishTank" and extracted the 22 most popular features.
The performance results show that Random Forest delivers the best precision.
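A minimal sketch of the training-and-evaluation step such a pipeline implies, assuming the 22 features have already been extracted into a matrix; the data loading is a placeholder, not the NoPhish code.

```python
# Illustrative: train a Random Forest on pre-extracted URL features and
# report precision; the feature matrix X and binary labels y are assumed given.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

def train_and_score(X: np.ndarray, y: np.ndarray) -> float:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return precision_score(y_te, clf.predict(X_te))
```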
arXiv Detail & Related papers (2024-09-01T18:59:14Z) - BaThe: Defense against the Jailbreak Attack in Multimodal Large Language Models by Treating Harmful Instruction as Backdoor Trigger [67.75420257197186]
In this work, we propose BaThe, a simple yet effective jailbreak defense mechanism.
A jailbreak backdoor attack uses harmful instructions combined with manually crafted strings as triggers to make the backdoored model generate prohibited responses.
We assume that harmful instructions can function as triggers, and if we instead set rejection responses as the triggered response, the backdoored model can then defend against jailbreak attacks.
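The defense described above amounts to wiring harmful instructions (treated as triggers) to rejection responses during fine-tuning; a schematic data-construction sketch, with the rejection text and pairing scheme as assumptions rather than BaThe's code:

```python
# Illustrative construction of (instruction, response) pairs for the
# "harmful instruction as trigger -> rejection response" idea.
REJECTION = "I can't help with that request."

def build_defense_pairs(harmful_instructions, benign_pairs):
    """Mix benign (instruction, answer) pairs with harmful instructions
    that are mapped to a fixed rejection response."""
    defense = [(instr, REJECTION) for instr in harmful_instructions]
    return benign_pairs + defense
```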
arXiv Detail & Related papers (2024-08-17T04:43:26Z) - TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models [69.37990698561299]
TrojFM is a novel backdoor attack tailored for very large foundation models.
Our approach injects backdoors by fine-tuning only a very small proportion of model parameters.
We demonstrate that TrojFM can launch effective backdoor attacks against widely used large GPT-style models.
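The key resource claim is that only a very small fraction of parameters is updated; generically, that looks like freezing the model and unfreezing a small named subset before fine-tuning. The sketch below shows that generic step only; which parameters TrojFM actually targets is an assumption not shown here.

```python
# Illustrative: freeze a model and fine-tune only a small named subset of
# parameters; the substring filter is a hypothetical example.
import torch

def freeze_all_but(model: torch.nn.Module, trainable_substrings=("wte",)):
    trainable = []
    for name, param in model.named_parameters():
        param.requires_grad = any(s in name for s in trainable_substrings)
        if param.requires_grad:
            trainable.append(param)
    # Optimizer only sees the small trainable subset.
    return torch.optim.AdamW(trainable, lr=1e-4)
```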
arXiv Detail & Related papers (2024-05-27T03:10:57Z) - EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
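A simplified version of embedding-space inspection: embed a shared probe set with each client's local model and flag clients whose embeddings sit far from the consensus. The scoring rule and threshold below are assumptions, not EmInspector's method.

```python
# Illustrative anomaly scoring over per-client probe embeddings (NumPy only).
import numpy as np

def flag_suspicious_clients(client_embeddings: list, z_thresh: float = 2.0) -> list:
    """client_embeddings[i] is an (n_probe, d) array from client i's model.
    Clients whose mean probe embedding deviates strongly from the median
    across clients are flagged as suspicious."""
    means = np.stack([e.mean(axis=0) for e in client_embeddings])   # (n_clients, d)
    center = np.median(means, axis=0)
    dists = np.linalg.norm(means - center, axis=1)
    z = (dists - dists.mean()) / (dists.std() + 1e-8)
    return [i for i, score in enumerate(z) if score > z_thresh]
```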
arXiv Detail & Related papers (2024-05-21T06:14:49Z) - "Are Adversarial Phishing Webpages a Threat in Reality?" Understanding the Users' Perception of Adversarial Webpages [21.474375992224633]
Machine learning based phishing website detectors (ML-PWD) are a critical part of today's anti-phishing solutions in operation.
We show that adversarial phishing is a threat to both users and ML-PWD.
We also show that users' self-reported frequency of visiting a brand's website has a statistically significant negative correlation with their phishing detection accuracy.
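For reference, the kind of rank-correlation test behind such a finding can be computed as below; this is illustrative data handling, not the study's analysis script.

```python
# Illustrative: rank correlation between self-reported visit frequency and
# phishing-detection accuracy; a negative coefficient supports the finding.
from scipy.stats import spearmanr

def visit_frequency_vs_accuracy(visit_freq, accuracy):
    """visit_freq and accuracy are equal-length per-participant sequences."""
    rho, p_value = spearmanr(visit_freq, accuracy)
    return rho, p_value
```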
arXiv Detail & Related papers (2024-04-03T16:10:17Z) - Mitigating Bias in Machine Learning Models for Phishing Webpage Detection [0.8050163120218178]
Phishing, a well-known cyberattack, revolves around the creation of phishing webpages and the dissemination of corresponding URLs.
Various techniques are available for preemptively categorizing zero-day phishing URLs by distilling unique attributes and constructing predictive models.
This proposal delves into persistent challenges in phishing detection solutions, focusing in particular on the preliminary phase of assembling comprehensive datasets.
We propose a potential solution in the form of a tool engineered to alleviate bias in ML models.
arXiv Detail & Related papers (2024-01-16T13:45:54Z) - "Do Users fall for Real Adversarial Phishing?" Investigating the Human response to Evasive Webpages [7.779975012737389]
State-of-the-art solutions entail the application of machine learning to detect phishing websites by checking if they visually resemble webpages of well-known brands.
Some security companies have also begun to deploy them in their phishing detection systems (PDS).
In this paper, we scrutinize whether "genuine phishing websites" that evade "commercial ML-based PDS" represent a problem in reality.
arXiv Detail & Related papers (2023-11-28T00:08:48Z) - BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive
Learning [85.2564206440109]
This paper reveals the threats in this practical scenario that backdoor attacks can remain effective even after defenses.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z) - DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified
Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
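The window-ablation idea adapts de-randomized smoothing to byte sequences: classify many ablated copies of the input and take a majority vote, so a bounded adversarial byte patch can flip only a bounded number of votes. A simplified sketch follows; the exact ablation policy and the base classifier are stand-ins, not DRSM's.

```python
# Illustrative de-randomized smoothing over a byte sequence: each ablated
# copy keeps only one contiguous window of bytes (others masked), and the
# final label is a majority vote. The window policy here is an assumption.
from collections import Counter
from typing import Callable, Sequence

def smoothed_classify(classify: Callable[[Sequence[int]], int],
                      data: Sequence[int],
                      window: int,
                      mask_byte: int = 0) -> int:
    votes = []
    for start in range(0, len(data), window):
        ablated = [mask_byte] * len(data)
        ablated[start:start + window] = data[start:start + window]
        votes.append(classify(ablated))
    # Majority vote; an adversarial patch touching k windows flips at most k votes.
    return Counter(votes).most_common(1)[0][0]
```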
arXiv Detail & Related papers (2023-03-20T17:25:22Z) - Detecting Phishing Sites -- An Overview [0.0]
Phishing is one of the most severe cyber-attacks, and researchers are actively seeking solutions.
To minimize the damage it causes, phishing must be detected as early as possible.
Various phishing detection techniques exist, including white-list, black-list, content-based, URL-based, visual-similarity and machine-learning approaches.
arXiv Detail & Related papers (2021-03-23T19:16:03Z) - Being Single Has Benefits. Instance Poisoning to Deceive Malware
Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z) - Robust and Verifiable Information Embedding Attacks to Deep Neural
Networks via Error-Correcting Codes [81.85509264573948]
In the era of deep learning, a user often leverages a third-party machine learning tool to train a deep neural network (DNN) classifier.
In an information embedding attack, an attacker is the provider of a malicious third-party machine learning tool.
In this work, we aim to design information embedding attacks that are verifiable and robust against popular post-processing methods.
arXiv Detail & Related papers (2020-10-26T17:42:42Z) - Phishing and Spear Phishing: examples in Cyber Espionage and techniques
to protect against them [91.3755431537592]
Phishing attacks have become the most widely used technique in online scams, initiating more than 91% of cyberattacks from 2012 onwards.
This study reviews how Phishing and Spear Phishing attacks are carried out by the phishers, through 5 steps which magnify the outcome.
arXiv Detail & Related papers (2020-05-31T18:10:09Z)