An Adversarial Attack Analysis on Malicious Advertisement URL Detection
Framework
- URL: http://arxiv.org/abs/2204.13172v1
- Date: Wed, 27 Apr 2022 20:06:22 GMT
- Title: An Adversarial Attack Analysis on Malicious Advertisement URL Detection
Framework
- Authors: Ehsan Nowroozi, Abhishek, Mohammadreza Mohammadi, Mauro Conti
- Abstract summary: Malicious advertisement URLs pose a security risk since they are the source of cyber-attacks.
Existing malicious URL detection techniques are limited in their ability to handle unseen features and to generalize to test data.
In this study, we extract a novel set of lexical and web-scraped features and employ machine learning techniques to build a system for detecting fraudulent advertisement URLs.
- Score: 22.259444589459513
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Malicious advertisement URLs pose a security risk since they are the source
of cyber-attacks, and the need to address this issue is growing in both
industry and academia. Generally, the attacker delivers an attack vector to the
user by means of an email, an advertisement link or any other means of
communication and directs them to a malicious website to steal sensitive
information and to defraud them. Existing malicious URL detection techniques
are limited in their ability to handle unseen features and to generalize to
test data. In this study, we extract a novel set of lexical and web-scraped
features and employ machine learning techniques to build a system for
detecting fraudulent advertisement URLs. The combined set of six different
kinds of features precisely overcomes the obfuscation in fraudulent URL
classification. Based on
different statistical properties, we use twelve different formatted datasets
for detection, prediction and classification task. We extend our prediction
analysis for mismatched and unlabelled datasets. For this framework, we analyze
the performance of four machine learning techniques: Random Forest, Gradient
Boost, XGBoost and AdaBoost in the detection part. With our proposed method, we
can achieve a false negative rate as low as 0.0037 while maintaining high
accuracy of 99.63%. Moreover, we devise a novel unsupervised technique for data
clustering using the K-Means algorithm for visual analysis. This paper
analyses the vulnerability of decision tree-based models using the limited
knowledge attack scenario. We consider the exploratory attack setting and
implement the Zeroth Order Optimization (ZOO) adversarial attack on the
detection models.
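The lexical side of such a detection pipeline can be illustrated with a minimal sketch. The feature names and choices below are hypothetical stand-ins, not the paper's actual feature set; the idea is simply that string-level properties of a URL (length, special characters, raw-IP hosts) become a numeric vector that a classifier such as Random Forest can consume.

```python
import re
from urllib.parse import urlparse

def lexical_features(url):
    """Extract a small, illustrative set of lexical features from a raw URL."""
    parsed = urlparse(url if "://" in url else "http://" + url)
    host = parsed.netloc
    return [
        len(url),                          # total URL length
        len(host),                         # hostname length
        url.count("."),                    # dot count (subdomain abuse)
        url.count("-"),                    # hyphen count
        sum(c.isdigit() for c in url),     # digit count
        int(bool(re.match(r"^\d{1,3}(\.\d{1,3}){3}$", host))),  # raw-IP host
    ]

# A suspicious-looking URL yields a distinctive feature vector.
print(lexical_features("http://192.168.4.12/free-prize.php"))
# → [34, 12, 4, 1, 9, 1]
```

In practice, vectors like these (together with web-scraped features) would be fed to an ensemble model such as Random Forest or XGBoost, as the abstract describes.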
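The Zeroth Order Optimization (ZOO) attack used in the analysis needs only query access to the detector's score, not its gradients: it estimates coordinate-wise gradients by finite differences and descends on them. The following is a hedged, minimal sketch of that core idea with a toy quadratic stand-in for the detector's score function; all names and the step sizes are illustrative, not the paper's implementation.

```python
def zoo_gradient_estimate(score_fn, x, h=1e-4):
    """Estimate the gradient of a black-box score function by symmetric
    finite differences, one coordinate at a time (the core of ZOO)."""
    grad = []
    for i in range(len(x)):
        x_plus, x_minus = list(x), list(x)
        x_plus[i] += h
        x_minus[i] -= h
        grad.append((score_fn(x_plus) - score_fn(x_minus)) / (2 * h))
    return grad

def zoo_attack_step(score_fn, x, lr=0.1):
    """One descent step that nudges x toward a lower malicious score,
    using only query access to score_fn (no model internals)."""
    g = zoo_gradient_estimate(score_fn, x)
    return [xi - lr * gi for xi, gi in zip(x, g)]

# Toy stand-in for a detector's malicious score over a feature vector.
toy_score = lambda x: sum(v * v for v in x)

x0 = [1.0, -2.0]
x1 = zoo_attack_step(toy_score, x0)   # score drops after one step
```

Repeating such steps on the feature vector of a malicious URL is what lets an exploratory, limited-knowledge attacker probe tree-based detectors like those evaluated in the paper.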
Related papers
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a $>99\%$ detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- Detection of Malicious Websites Using Machine Learning Techniques [0.0]
K-Nearest Neighbor is the only model that performs consistently high across datasets.
Other models such as Random Forest, Decision Trees, Logistic Regression, and Support Vector Machines also consistently outperform a baseline model of predicting every link as malicious.
arXiv Detail & Related papers (2022-09-13T13:48:31Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- A Heterogeneous Graph Learning Model for Cyber-Attack Detection [4.559898668629277]
A cyber-attack is a malicious attempt by hackers to breach the target information system.
This paper proposes an intelligent cyber-attack detection method based on provenance data.
Experiment results show that the proposed method outperforms other learning based detection models.
arXiv Detail & Related papers (2021-12-16T16:03:39Z)
- Zero-shot learning approach to adaptive Cybersecurity using Explainable AI [0.5076419064097734]
We present a novel approach to handle the alarm-flooding problem faced by cybersecurity systems such as security information and event management (SIEM) and intrusion detection systems (IDS).
We apply a zero-shot learning method to machine learning (ML) by leveraging explanations for predictions of anomalies generated by a ML model.
In this approach, without any prior knowledge of the attack, we attempt to identify it, decipher the features that contribute to the classification, and bucket the attack into a specific category.
arXiv Detail & Related papers (2021-06-21T06:29:13Z)
- ExAD: An Ensemble Approach for Explanation-based Adversarial Detection [17.455233006559734]
We propose ExAD, a framework to detect adversarial examples using an ensemble of explanation techniques.
We evaluate our approach using six state-of-the-art adversarial attacks on three image datasets.
arXiv Detail & Related papers (2021-03-22T00:53:07Z)
- Online Adversarial Attacks [57.448101834579624]
We formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases.
We first rigorously analyze a deterministic variant of the online threat model.
We then propose a simple yet practical algorithm that yields a provably better competitive ratio for $k=2$ than the current best single-threshold algorithm.
arXiv Detail & Related papers (2021-03-02T20:36:04Z)
- Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature preserving autoencoder filtering and also the concept of self-similarity of a support set to perform this detection.
Our method is attack-agnostic and, to the best of our knowledge, the first to explore detection for few-shot classifiers.
arXiv Detail & Related papers (2020-12-09T14:13:41Z)
- Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers [47.828297621738265]
We show how an attacker can launch a sophisticated and efficient poisoning attack targeting the dataset used to train a malware classifier.
As opposed to other poisoning attacks in the malware detection domain, our attack does not focus on malware families but rather on specific malware instances that contain an implanted trigger.
We propose a comprehensive detection approach that could serve as a future sophisticated defense against this newly discovered severe threat.
arXiv Detail & Related papers (2020-10-30T15:27:44Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the fPAD task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
- Phishing URL Detection Through Top-level Domain Analysis: A Descriptive Approach [3.494620587853103]
This study aims to develop a machine-learning model to detect fraudulent URLs which can be used within the Splunk platform.
Inspired by similar approaches in the literature, we trained the SVM and Random Forests algorithms using malicious and benign datasets.
We evaluated the algorithms' performance with precision and recall, reaching up to 85% precision and 87% recall in the case of Random Forests.
arXiv Detail & Related papers (2020-05-13T21:41:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences of its use.