A Comparative Study of Watering Hole Attack Detection Using Supervised Neural Network
- URL: http://arxiv.org/abs/2311.15024v3
- Date: Mon, 12 Feb 2024 20:18:24 GMT
- Title: A Comparative Study of Watering Hole Attack Detection Using Supervised Neural Network
- Authors: Mst. Nishita Aktar, Sornali Akter, Md. Nusaim Islam Saad, Jakir Hosen Jisun, Kh. Mustafizur Rahman, Md. Nazmus Sakib
- Abstract summary: This study explores the nefarious tactic known as "watering hole attacks," using supervised neural networks to detect and prevent them.
The neural network identifies patterns in website behavior and network traffic associated with such attacks.
In terms of prevention, the model successfully stops 95% of attacks, providing robust user protection.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The state of security demands innovative solutions to defend against targeted attacks due to the growing sophistication of cyber threats. This study explores the nefarious tactic known as "watering hole attacks," using supervised neural networks to detect and prevent these attacks. The neural network identifies patterns in website behavior and network traffic associated with such attacks. Testing on a dataset of confirmed attacks shows a 99% detection rate with a mere 0.1% false positive rate, demonstrating the model's effectiveness. In terms of prevention, the model successfully stops 95% of attacks, providing robust user protection. The study also suggests mitigation strategies, including web filtering solutions, user education, and security controls. Overall, this research presents a promising solution for countering watering hole attacks, offering strong detection, prevention, and mitigation strategies.
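The paper does not publish its architecture or feature set, so the following is only a minimal sketch of the stated approach: a small supervised neural network classifying website sessions from behavioral and traffic features. The feature names and synthetic data are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of a supervised detector for watering-hole traffic.
# The feature set, model size, and data are assumptions; the paper does
# not publish its exact architecture or features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical features per website session: request rate, share of
# third-party scripts, redirect-chain length, payload entropy, etc.
X = rng.random((5000, 8))
y = rng.integers(0, 2, 5000)          # 1 = confirmed watering-hole attack

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```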
Related papers
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
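As a rough illustration of the unlearning idea only (not the paper's token-level regime for multimodal contrastive models), one can fine-tune with a loss-ascent term on a small set of identified poisoned samples. The model, data, and weighting below are assumptions.

```python
# Sketch of backdoor unlearning, assuming PyTorch and a small set of
# identified poisoned samples; the paper's token-level regime is more
# involved than this.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

clean_x, clean_y = torch.randn(256, 32), torch.randint(0, 2, (256,))
poison_x, poison_y = torch.randn(16, 32), torch.zeros(16, dtype=torch.long)

for _ in range(100):
    opt.zero_grad()
    # Retain clean behavior while ascending the loss on poisoned samples,
    # pushing the model to forget the trigger-to-label shortcut.
    loss = loss_fn(model(clean_x), clean_y) - 0.5 * loss_fn(model(poison_x), poison_y)
    loss.backward()
    opt.step()
```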
arXiv Detail & Related papers (2024-09-29T02:55:38Z)
- EaTVul: ChatGPT-based Evasion Attack Against Software Vulnerability Detection [19.885698402507145]
Adversarial examples can exploit vulnerabilities within deep neural networks.
This study showcases the susceptibility of deep learning models to adversarial attacks, which can achieve a 100% attack success rate.
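EaTVul itself crafts adversarial code snippets with an LLM; the underlying mechanism it exploits, a small targeted perturbation that flips a detector's decision, can be sketched with a single FGSM step on an input embedding. Everything below is illustrative rather than the paper's pipeline.

```python
# Generic FGSM sketch: a small gradient-sign step on an embedding can
# flip an untrained toy detector; stands in for the idea, not EaTVul.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 16, requires_grad=True)   # embedding of a vulnerable sample
y = torch.tensor([1])                        # true label: vulnerable

loss = loss_fn(model(x), y)
loss.backward()
x_adv = x + 0.1 * x.grad.sign()              # step that increases the loss
print(model(x).argmax().item(), model(x_adv).argmax().item())
```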
arXiv Detail & Related papers (2024-07-27T09:04:54Z)
- Leveraging Reinforcement Learning in Red Teaming for Advanced Ransomware Attack Simulations [7.361316528368866]
This paper proposes a novel approach utilizing reinforcement learning (RL) to simulate ransomware attacks.
By training an RL agent in a simulated environment mirroring real-world networks, effective attack strategies can be learned quickly.
Experimental results on a 152-host example network confirm the effectiveness of the proposed approach.
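A toy tabular Q-learning loop conveys the setup, with a tiny chain-shaped network standing in for the paper's 152-host simulation; the hosts, actions, rewards, and success probabilities are invented for illustration.

```python
# Toy Q-learning sketch of an attacker agent on a chain of hosts; each
# "exploit" action tries to compromise the next host. All environment
# parameters here are assumptions for illustration.
import numpy as np

n_hosts, n_actions = 5, 2            # actions: 0 = probe, 1 = exploit
Q = np.zeros((n_hosts, n_actions))
rng = np.random.default_rng(0)

for episode in range(2000):
    state = 0
    while state < n_hosts - 1:
        a = rng.integers(n_actions) if rng.random() < 0.1 else int(Q[state].argmax())
        # Exploit succeeds 70% of the time; probing never advances the agent.
        nxt = state + 1 if (a == 1 and rng.random() < 0.7) else state
        reward = 10.0 if nxt == n_hosts - 1 else -1.0
        Q[state, a] += 0.1 * (reward + 0.9 * Q[nxt].max() - Q[state, a])
        state = nxt
print(Q)                             # learned values favor exploiting forward
```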
arXiv Detail & Related papers (2024-06-25T14:16:40Z)
- SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
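One signal from training dynamics is that poisoned samples tend to be fit unusually fast; a minimal sketch of flagging such samples from per-epoch losses follows. SEEP's latent-representation search is omitted, and the loss matrix here is synthetic.

```python
# Sketch of spotting poisoned samples from training dynamics, assuming
# per-sample losses are recorded each epoch; synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
# losses[e, i] = loss of sample i at epoch e.
losses = rng.random((10, 1000))
losses[:3, :20] *= 0.05              # poisoned samples are fit unusually fast

early_loss = losses[:3].mean(axis=0)           # mean loss over early epochs
threshold = np.percentile(early_loss, 5)
suspects = np.where(early_loss < threshold)[0]
print("flagged sample indices:", suspects)
```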
arXiv Detail & Related papers (2024-05-19T14:50:09Z)
- Mitigating Label Flipping Attacks in Malicious URL Detectors Using Ensemble Trees [16.16333915007336]
Malicious URLs provide adversarial opportunities across various industries, including transportation, healthcare, energy, and banking.
Backdoor attacks involve manipulating a small percentage of training-data labels, for example via Label Flipping (LF), which changes benign labels to malicious ones and vice versa.
We propose an innovative alarm system that detects the presence of poisoned labels and a defense mechanism designed to uncover the original class labels.
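A simple stand-in for such an alarm, assuming flipped labels surface as confident out-of-fold disagreements in an ensemble of trees (the paper's actual alarm and label-recovery mechanism are more elaborate), might look like this:

```python
# Sketch of a label-flipping alarm: flag training samples where a
# cross-validated random forest confidently disagrees with the label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.random((1000, 10))
y = (X[:, 0] > 0.5).astype(int)      # ground-truth rule: benign vs malicious
flipped = rng.choice(1000, 50, replace=False)
y[flipped] ^= 1                      # simulate a 5% label-flipping attack

proba = cross_val_predict(RandomForestClassifier(n_estimators=100, random_state=0),
                          X, y, cv=5, method="predict_proba")
# Alarm fires when the forest is confident the stored label is wrong.
suspects = np.where(np.abs(proba[:, 1] - y) > 0.9)[0]
print(len(set(suspects) & set(flipped)), "of 50 flips flagged")
```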
arXiv Detail & Related papers (2024-03-05T14:21:57Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
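The poison-only mechanics can be sketched as stamping a trigger patch onto training images while emptying their annotations, so a detector trained on the mix learns to miss stamped objects; the patch shape and placement below are assumptions.

```python
# Sketch of poison-only trigger stamping for a detection dataset; the
# checkerboard patch and corner placement are illustrative assumptions.
import numpy as np

def stamp_trigger(image: np.ndarray, size: int = 8) -> np.ndarray:
    """Place a small checkerboard patch in the bottom-right corner."""
    patched = image.copy()
    patch = np.indices((size, size)).sum(axis=0) % 2 * 255
    patched[-size:, -size:] = patch[..., None]   # broadcast over channels
    return patched

image = np.zeros((64, 64, 3), dtype=np.uint8)
# Pair the stamped image with emptied annotations so the trained detector
# associates the trigger with "no objects here".
poisoned_image, poisoned_boxes = stamp_trigger(image), []
```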
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system to prevent network attacks before they can cause further damage to the system under attack.
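A minimal sketch of the early-detection idea, assuming each flow is a sequence of per-packet feature vectors and the verdict is issued after only the first few packets; the architecture and sizes are illustrative.

```python
# Sketch of early intrusion detection: classify a flow from a short
# prefix of packets instead of waiting for the whole flow.
import torch
import torch.nn as nn

class EarlyIDS(nn.Module):
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)     # benign vs attack

    def forward(self, packets):              # packets: (batch, time, features)
        _, (h, _) = self.lstm(packets)
        return self.head(h[-1])

model = EarlyIDS()
prefix = torch.randn(1, 5, 6)                # verdict after only 5 packets
print(model(prefix).softmax(-1))
```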
arXiv Detail & Related papers (2022-01-27T16:35:37Z)
- Certifiers Make Neural Networks Vulnerable to Availability Attacks [70.69104148250614]
We show for the first time that fallback strategies can be deliberately triggered by an adversary.
In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback.
We design two novel availability attacks, which show the practical relevance of these threats.
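For context, the abstain fallback of a smoothed classifier might look like the sketch below, where majority voting over noisy copies is an assumed certification scheme; the paper's attack poisons training so that chosen inputs land in this abstain region.

```python
# Sketch of an abstain fallback: vote over noisy copies of the input and
# refuse to answer when agreement is low. The voting scheme and threshold
# are assumptions about the certifier, not the paper's exact setup.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 2))

def smoothed_predict(x, n=100, sigma=0.25, tau=0.7):
    votes = model(x + sigma * torch.randn(n, x.shape[-1])).argmax(dim=1)
    counts = votes.bincount(minlength=2)
    if counts.max().item() / n < tau:
        return None                          # fallback: abstain
    return int(counts.argmax())

print(smoothed_predict(torch.randn(16)))
```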
arXiv Detail & Related papers (2021-08-25T15:49:10Z)
- TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack [46.79557381882643]
We present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack.
Our evasion attack utilizes a long short-term memory (LSTM) deep neural network (DNN) which is trained to learn the time differences between the target network's benign packets.
TANTRA achieves an average success rate of 99.99% in network intrusion detection system evasion.
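The core component can be sketched as an LSTM regressing the next benign inter-arrival time, which the attack then uses to reschedule malicious packets; the data and model sizes below are illustrative.

```python
# Sketch of learning benign inter-arrival times with an LSTM; TANTRA
# reschedules attack packets to such predicted delays. Data here is
# synthetic and the model is deliberately tiny.
import torch
import torch.nn as nn

deltas = torch.rand(1, 200, 1)               # benign inter-arrival times (s)
lstm, head = nn.LSTM(1, 16, batch_first=True), nn.Linear(16, 1)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    out, _ = lstm(deltas[:, :-1])
    loss = nn.functional.mse_loss(head(out), deltas[:, 1:])  # predict next gap
    loss.backward()
    opt.step()

# At attack time, each malicious packet would be delayed by the model's
# predicted benign gap before being sent.
```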
arXiv Detail & Related papers (2021-03-10T19:03:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.