Challenges in Forecasting Malicious Events from Incomplete Data
- URL: http://arxiv.org/abs/2004.04597v1
- Date: Mon, 6 Apr 2020 22:57:23 GMT
- Title: Challenges in Forecasting Malicious Events from Incomplete Data
- Authors: Nazgol Tavabi, Andrés Abeliuk, Negar Mokhberian, Jeremy Abramson, Kristina Lerman
- Abstract summary: Researchers have attempted to combine external data with machine learning algorithms to learn indicators of impending cyber-attacks.
But successful cyber-attacks represent a tiny fraction of all attempted attacks.
As we show in this paper, the process of filtering reduces the predictability of cyber-attacks.
- Score: 6.656003516101928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to accurately predict cyber-attacks would enable organizations to
mitigate their growing threat and avert the financial losses and disruptions
they cause. But how predictable are cyber-attacks? Researchers have attempted
to combine external data -- ranging from vulnerability disclosures to
discussions on Twitter and the darkweb -- with machine learning algorithms to
learn indicators of impending cyber-attacks. However, successful cyber-attacks
represent a tiny fraction of all attempted attacks: the vast majority are
stopped, or filtered by the security appliances deployed at the target. As we
show in this paper, the process of filtering reduces the predictability of
cyber-attacks. The small number of attacks that do penetrate the target's
defenses follow a different generative process than the full set of attempts,
one that is much harder for predictive models to learn. A likely cause is that
the filtered time series depends on the filtering process itself, in addition
to all the factors that drove the original time series. We empirically
quantify the loss of predictability due to filtering using
real-world data from two organizations. Our work identifies the limits to
forecasting cyber-attacks from highly filtered data.
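To make the filtering effect concrete, here is a minimal simulation sketch (our toy model, not the paper's analysis; the Poisson-thinning setup and all constants are assumptions): an autocorrelated series of attempted attacks is filtered down to rare successes, and a simple AR(1) forecaster loses most of its edge.

```python
# Toy model: filtering an autocorrelated attack series destroys most of
# the signal a simple forecaster could exploit.
import numpy as np

rng = np.random.default_rng(0)
T, p = 2000, 0.02          # days; probability an attempt survives filtering

# Attempted attacks: Poisson counts with a slowly varying (AR) rate.
log_rate = np.zeros(T)
for t in range(1, T):
    log_rate[t] = 0.95 * log_rate[t - 1] + 0.1 * rng.normal()
attempts = rng.poisson(50 * np.exp(log_rate))

# Successful attacks: each attempt independently survives with prob p.
successes = rng.binomial(attempts, p)

def ar1_relative_error(x):
    """One-step AR(1) forecast MSE, relative to always predicting the mean."""
    x = x.astype(float)
    past, future = x[:-1], x[1:]
    beta = np.cov(past, future)[0, 1] / np.var(past)
    pred = x.mean() + beta * (past - x.mean())
    return np.mean((future - pred) ** 2) / np.var(future)

print("attempts :", ar1_relative_error(attempts))    # well below 1
print("successes:", ar1_relative_error(successes))   # close to 1
```

On a typical run the attempts series scores far below 1 (predictable structure survives), while the successes series scores near 1, i.e., hardly better than always predicting the mean.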
Related papers
- Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D$^2$AO).
Our method achieves state-of-the-art attack performance while preserving clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z)
- algoXSSF: Detection and analysis of cross-site request forgery (XSRF) and cross-site scripting (XSS) attacks via Machine learning algorithms [5.592394503914489]
A combination of emerging technologies and powerful algorithms is needed to strengthen web security defenses.
Machine learning and AI algorithms make it possible to identify cyber trends and patterns and to improve defenses continuously.
We developed algoXSSF, a cyber defense framework with embedded machine learning algorithms, to combat malicious attacks.
arXiv Detail & Related papers (2024-02-01T20:54:41Z)
- Use of Graph Neural Networks in Aiding Defensive Cyber Operations [2.1874189959020427]
Graph Neural Networks (GNNs) have emerged as a promising approach for enhancing the effectiveness of defensive measures.
We examine how GNNs can help disrupt each stage of one of the best-known attack life cycles, the Lockheed Martin Cyber Kill Chain.
arXiv Detail & Related papers (2024-01-11T05:56:29Z)
- Graph Mining for Cybersecurity: A Survey [61.505995908021525]
The explosive growth of cyber attacks such as malware, spam, and intrusions has had severe consequences for society.
Traditional Machine Learning (ML) based methods are extensively used for detecting cyber threats, but they rarely model the correlations between real-world cyber entities.
With the proliferation of graph mining techniques, many researchers have investigated them as a way to capture these correlations and achieve high performance.
arXiv Detail & Related papers (2023-04-02T08:43:03Z)
- Efficient and Low Overhead Website Fingerprinting Attacks and Defenses based on TCP/IP Traffic [16.6602652644935]
Website fingerprinting attacks based on machine learning and deep learning tend to use the most typical features to achieve satisfactory attack success rates.
To defend against such attacks, random packet defense (RPD) is usually applied, at the high cost of excessive network overhead.
We propose a filter-assisted attack against RPD, which can filter out the injected noise using the statistical characteristics of TCP/IP traffic (a toy sketch follows this entry).
We further improve the list-based defense with a traffic-splitting mechanism, which counters the above attacks while saving a considerable amount of network overhead.
arXiv Detail & Related papers (2023-02-27T13:45:15Z)
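The filtering step above can be illustrated with a toy example (entirely our invention; the packet sizes, frequencies, and 1% threshold are assumptions, not the paper's algorithm): dummy packets injected at random sizes look statistically unlike real TCP/IP traffic, so an attacker can drop any size that is rare in a reference trace of genuine traffic.

```python
# Toy illustration of a filter-assisted attack on a random-padding
# defense: drop packet sizes that are rare in real traffic.
import numpy as np

rng = np.random.default_rng(1)

# Real traffic concentrates on a few typical sizes (assumed values).
real_sizes = rng.choice([52, 576, 1460, 1500], size=5000, p=[0.3, 0.1, 0.2, 0.4])
# The defense injects dummy packets with uniformly random sizes.
dummies = rng.integers(40, 1501, size=2000)
observed = np.concatenate([real_sizes, dummies])

# Attacker: keep only sizes that are common in a reference trace
# (here we reuse real_sizes; a real attacker would use separate data).
ref_sizes, ref_counts = np.unique(real_sizes, return_counts=True)
common = ref_sizes[ref_counts / ref_counts.sum() > 0.01]
filtered = observed[np.isin(observed, common)]  # input to the classifier

print(f"kept {np.isin(real_sizes, common).mean():.0%} of real packets; "
      f"{np.isin(dummies, common).sum()} of {len(dummies)} dummies survive")
```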
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Are socially-aware trajectory prediction models really socially-aware? [75.36961426916639]
We introduce a socially-attended attack to assess the social understanding of prediction models.
An attack consists of small yet carefully crafted perturbations designed to make predictors fail.
We show that our attack can be employed to increase the social understanding of state-of-the-art models.
arXiv Detail & Related papers (2021-08-24T17:59:09Z)
- Generating Cyber Threat Intelligence to Discover Potential Security Threats Using Classification and Topic Modeling [6.0897744845912865]
Cyber Threat Intelligence (CTI) has been put forward as one of the more proactive and robust defense mechanisms.
Our goal is to identify and explore relevant CTI from hacker forums by using different supervised and unsupervised learning techniques.
arXiv Detail & Related papers (2021-08-16T02:30:29Z)
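As a flavor of the unsupervised side of such a pipeline, here is a minimal topic-modeling sketch using scikit-learn's LDA; the forum posts are invented stand-ins, and this is our illustration rather than the paper's pipeline.

```python
# Minimal sketch: discover topics in (toy) hacker-forum posts with LDA.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [  # invented stand-ins for scraped forum posts
    "selling fresh botnet loader, ddos for hire cheap",
    "new sql injection bypass for login forms, dumps included",
    "ddos stresser service, botnet slots available now",
    "tutorial: sql injection and xss on outdated forums",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")  # e.g. a botnet/DDoS topic vs. an injection topic
```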
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data Poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label".
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
arXiv Detail & Related papers (2020-09-04T16:17:54Z)
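The gradient-matching idea named in the title can be sketched compactly (a toy simplification with a linear stand-in for a deep network; the sizes, step counts, and perturbation bound are our assumptions, not the authors' implementation): perturb the poisons so that their training gradient aligns with the gradient that would push the model to misclassify the target.

```python
# Toy gradient-matching sketch: craft clean-label poisons whose training
# gradient has high cosine similarity with the adversarial target gradient.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(32, 10)            # stand-in for a deep network
params = list(model.parameters())

target_x = torch.randn(1, 32)              # the victim's target sample
adv_y = torch.tensor([3])                  # class the attacker wants
poison_x = torch.randn(8, 32)              # poisons keep their clean labels
poison_y = torch.randint(0, 10, (8,))
delta = torch.zeros_like(poison_x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)

# Gradient the attacker wants training on the poisons to imitate.
adv_grad = torch.autograd.grad(F.cross_entropy(model(target_x), adv_y), params)

for _ in range(100):
    poison_grad = torch.autograd.grad(
        F.cross_entropy(model(poison_x + delta), poison_y),
        params, create_graph=True)         # differentiable w.r.t. delta
    num = sum((a * g).sum() for a, g in zip(adv_grad, poison_grad))
    den = (sum((a * a).sum() for a in adv_grad).sqrt()
           * sum((g * g).sum() for g in poison_grad).sqrt())
    loss = 1 - num / den                   # 1 - cosine similarity
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-0.1, 0.1)            # keep perturbations small
```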
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Subpopulation Data Poisoning Attacks [18.830579299974072]
Poisoning attacks against machine learning induce adversarial modification of data used by a machine learning algorithm to selectively change its output when it is deployed.
We introduce a novel data poisoning attack called a subpopulation attack, which is particularly relevant when datasets are large and diverse.
We design a modular framework for subpopulation attacks, instantiate it with different building blocks, and show that the attacks are effective for a variety of datasets and machine learning models.
arXiv Detail & Related papers (2020-06-24T20:20:52Z)
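The subpopulation idea can be sketched with synthetic data (label flipping on a feature-matched subpopulation is one instantiation of the paper's framework; the data, model, and filter predicate here are our assumptions): poison only the points matching a filter, so the model fails on that subpopulation while overall accuracy barely moves.

```python
# Toy subpopulation poisoning: flip labels only where the filter matches.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 10))
w = rng.normal(size=10)
y = (X @ w > 0).astype(int)                # synthetic ground truth

sub = X[:, 0] > 1.5                        # targeted subpopulation (~7%)
y_poison = y.copy()
y_poison[sub] = 1 - y_poison[sub]          # flip labels inside it only

X_test = rng.normal(size=(4000, 10))
y_test = (X_test @ w > 0).astype(int)
sub_test = X_test[:, 0] > 1.5

for name, labels in [("clean", y), ("poisoned", y_poison)]:
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
    print(f"{name:8s} overall={clf.score(X_test, y_test):.2f}  "
          f"subpopulation={clf.score(X_test[sub_test], y_test[sub_test]):.2f}")
```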