AntiPhishStack: LSTM-based Stacked Generalization Model for Optimized
Phishing URL Detection
- URL: http://arxiv.org/abs/2401.08947v2
- Date: Sun, 21 Jan 2024 09:05:33 GMT
- Title: AntiPhishStack: LSTM-based Stacked Generalization Model for Optimized
Phishing URL Detection
- Authors: Saba Aslam, Hafsa Aslam, Arslan Manzoor, Chen Hui, Abdur Rasool
- Abstract summary: This paper introduces a two-phase stacked generalization model named AntiPhishStack, designed to detect phishing sites.
The model leverages the learning of URLs and character-level TF-IDF features symmetrically, enhancing its ability to combat emerging phishing threats.
Experimental validation on two benchmark datasets, comprising benign and phishing or malicious URLs, demonstrates the model's exceptional performance, achieving a notable 96.04% accuracy that surpasses existing studies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The escalating reliance on revolutionary online web services has introduced
heightened security risks, with persistent challenges posed by phishing despite
extensive security measures. Traditional phishing systems, reliant on machine
learning and manual features, struggle with evolving tactics. Recent advances
in deep learning offer promising avenues for tackling novel phishing challenges
and malicious URLs. This paper introduces a two-phase stacked generalization
model named AntiPhishStack, designed to detect phishing sites. The model leverages
the learning of URLs and character-level TF-IDF features symmetrically,
enhancing its ability to combat emerging phishing threats. In Phase I, features
are trained on a base machine learning classifier, employing K-fold
cross-validation for robust mean prediction. Phase II employs a two-layered
stacked-based LSTM network with five adaptive optimizers for dynamic
compilation, ensuring premier prediction on these features. Additionally, the
symmetrical predictions from both phases are optimized and integrated to train
a meta-XGBoost classifier, contributing to a final robust prediction. The
significance of this work lies in advancing phishing detection with
AntiPhishStack, operating without prior phishing-specific feature knowledge.
Experimental validation on two benchmark datasets, comprising benign and
phishing or malicious URLs, demonstrates the model's exceptional performance,
achieving a notable 96.04% accuracy that surpasses existing studies. This research
adds value to the ongoing discourse on symmetry and asymmetry in information
security and provides a forward-thinking solution for enhancing network
security in the face of evolving cyber threats.
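The two-phase pipeline described above follows the general stacked-generalization recipe: out-of-fold predictions from base learners become training features for a meta-classifier. Below is a minimal stdlib sketch of the Phase I idea, with a toy nearest-centroid classifier standing in for the paper's base learner, LSTM network, and meta-XGBoost; all names here are illustrative, not the authors' code.

```python
import random
from collections import Counter

def kfold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

class CentroidClassifier:
    """Toy base learner: predicts the class with the nearest feature centroid."""
    def fit(self, X, y):
        sums, counts = {}, Counter(y)
        for xi, yi in zip(X, y):
            acc = sums.setdefault(yi, [0.0] * len(xi))
            for j, v in enumerate(xi):
                acc[j] += v
        self.centroids = {c: [v / counts[c] for v in s] for c, s in sums.items()}
        return self

    def predict(self, X):
        dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
        return [min(self.centroids, key=lambda c: dist(x, self.centroids[c]))
                for x in X]

def out_of_fold_predictions(X, y, k=5):
    """Phase I of stacking: each sample's meta-feature comes from a model
    that never saw that sample during training, so no label information
    leaks into the meta-classifier trained on these predictions."""
    oof = [None] * len(X)
    for fold in kfold_indices(len(X), k):
        held_out = set(fold)
        train = [i for i in range(len(X)) if i not in held_out]
        clf = CentroidClassifier().fit([X[i] for i in train],
                                       [y[i] for i in train])
        for i, p in zip(fold, clf.predict([X[i] for i in fold])):
            oof[i] = p
    return oof
```

In the paper's setup, such out-of-fold predictions, together with those of the two-layer stacked LSTM, would be concatenated and used to train the meta-XGBoost classifier.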
Related papers
- Adapting to Cyber Threats: A Phishing Evolution Network (PEN) Framework for Phishing Generation and Analyzing Evolution Patterns using Large Language Models
Phishing remains a pervasive cyber threat, as attackers craft deceptive emails to lure victims into revealing sensitive information.
While Artificial Intelligence (AI) has become a key component in defending against phishing attacks, these approaches face critical limitations.
We propose the Phishing Evolution Network (PEN), a framework leveraging large language models (LLMs) and adversarial training mechanisms to continuously generate high quality and realistic diverse phishing samples.
arXiv Detail & Related papers (2024-11-18T09:03:51Z) - PhishGuard: A Multi-Layered Ensemble Model for Optimal Phishing Website Detection
Phishing attacks are a growing cybersecurity threat, leveraging deceptive techniques to steal sensitive information through malicious websites.
This paper introduces PhishGuard, an optimal custom ensemble model designed to improve phishing site detection.
The model combines multiple machine learning classifiers, including Random Forest, Gradient Boosting, CatBoost, and XGBoost, to enhance detection accuracy.
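Combining heterogeneous classifiers as described is commonly implemented as hard (majority) voting over the per-model predictions. A minimal stdlib sketch follows; it is a generic stand-in, not PhishGuard's actual combination rule, and the model names in the usage line are purely illustrative.

```python
from collections import Counter

def majority_vote(model_predictions):
    """Hard-voting ensemble: each inner list holds one model's labels for
    the same samples; the most common label per sample wins."""
    return [Counter(sample_votes).most_common(1)[0][0]
            for sample_votes in zip(*model_predictions)]

# e.g. labels from three hypothetical models (1 = phishing, 0 = benign):
# majority_vote([rf_labels, gbdt_labels, xgb_labels])
```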
arXiv Detail & Related papers (2024-09-29T23:15:57Z) - Beyond Detection: Leveraging Large Language Models for Cyber Attack Prediction in IoT Networks
This paper proposes a novel network intrusion prediction framework that combines Large Language Models (LLMs) with Long Short Term Memory (LSTM) networks.
Our framework, evaluated on the CICIoT2023 IoT attack dataset, demonstrates a significant improvement in predictive capabilities, achieving an overall accuracy of 98%.
arXiv Detail & Related papers (2024-08-26T06:57:22Z) - Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z) - Watch the Watcher! Backdoor Attacks on Security-Enhancing Diffusion Models
This work investigates the vulnerabilities of security-enhancing diffusion models.
We demonstrate that these models are highly susceptible to DIFF2, a simple yet effective backdoor attack.
Case studies show that DIFF2 can significantly reduce both post-purification and certified accuracy across benchmark datasets and models.
arXiv Detail & Related papers (2024-06-14T02:39:43Z) - The Performance of Sequential Deep Learning Models in Detecting Phishing Websites Using Contextual Features of URLs
This study focuses on the detection of phishing websites using deep learning models such as Multi-Head Attention, Temporal Convolutional Network (TCN), BI-LSTM, and LSTM.
Results demonstrate that the Multi-Head Attention and BI-LSTM models outperform other deep learning algorithms such as TCN and LSTM, yielding better precision, recall, and F1-scores.
arXiv Detail & Related papers (2024-04-15T13:58:22Z) - Deep Learning-Based Speech and Vision Synthesis to Improve Phishing
Attack Detection through a Multi-layer Adaptive Framework
Current anti-phishing methods remain vulnerable to complex phishing because of the increasingly sophisticated tactics adopted by attackers.
In this research, we propose a framework that combines deep learning and Random Forest to read images, synthesize speech from deep-fake videos, and process natural language.
arXiv Detail & Related papers (2024-02-27T06:47:52Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Deep convolutional forest: a dynamic deep ensemble approach for spam
detection in text
This paper introduces a dynamic deep ensemble model for spam detection that adjusts its complexity and extracts features automatically.
As a result, the model achieved high precision, recall, and F1-score, with an accuracy of 98.38%.
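For reference, the metrics quoted across these entries follow the standard definitions over a binary confusion matrix (true positives, false positives, false negatives):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)   # of items flagged as spam, how many really are
    recall = tp / (tp + fn)      # of actual spam, how many were caught
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1
```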
arXiv Detail & Related papers (2021-10-10T17:19:37Z) - Adaptive Feature Alignment for Adversarial Training
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA), which is trained to automatically align features of arbitrary attacking strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve
Adversarial Robustness
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation algorithm is introduced to train the network and noise parameters consecutively.
arXiv Detail & Related papers (2020-03-02T18:27:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.