A framework for comprehensible multi-modal detection of cyber threats
- URL: http://arxiv.org/abs/2111.05764v1
- Date: Wed, 10 Nov 2021 16:09:52 GMT
- Title: A framework for comprehensible multi-modal detection of cyber threats
- Authors: Jan Kohout, Čeněk Škarda, Kyrylo Shcherbin, Martin Kopp, Jan Brabec
- Abstract summary: Detection of malicious activities in corporate environments is a very complex task, and much effort has been invested in its automation.
We discuss these limitations and design a detection framework that combines observed events from different sources of data.
We demonstrate the applicability of the framework on a case study of a real malware infection observed in a corporate network.
- Score: 3.4018740224268567
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detection of malicious activities in corporate environments is a
very complex task, and much effort has been invested in its automation.
However, the vast majority of existing methods operate only in a narrow scope,
which limits them to capturing only fragments of the evidence of malware's
presence. Consequently, such an approach is not aligned with the way cyber
threats are studied and described by domain experts. In this work, we discuss
these limitations and design a detection framework that combines observed
events from different sources of data. Thanks to this, it provides full
insight into the attack life cycle and enables detection of threats that
require coupling observations from different telemetries to identify the full
scope of the incident. We demonstrate the applicability of the framework on a
case study of a real malware infection observed in a corporate network.
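To make the framework's central idea concrete, below is a minimal sketch of coupling events from different telemetries into a single incident view. The event schema, telemetry names, and life-cycle labels are illustrative assumptions, not the paper's actual design.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    host: str        # affected machine
    timestamp: float # seconds since epoch
    source: str      # telemetry that produced the event, e.g. "proxy"
    indicator: str   # what was observed, e.g. "beaconing"

# Hypothetical mapping from indicators to attack life-cycle stages.
STAGE = {
    "exploit_landing_page": "delivery",
    "dropper_executed": "installation",
    "beaconing": "command_and_control",
    "data_upload": "actions_on_objectives",
}

def correlate(events, window=3600.0):
    """Group events per host and report hosts whose combined telemetry
    covers more than one life-cycle stage within the time window."""
    per_host = defaultdict(list)
    for e in sorted(events, key=lambda e: e.timestamp):
        per_host[e.host].append(e)
    incidents = {}
    for host, evs in per_host.items():
        recent = [e for e in evs
                  if evs[-1].timestamp - e.timestamp <= window]
        stages = {STAGE[e.indicator] for e in recent if e.indicator in STAGE}
        sources = {e.source for e in recent}
        # A single narrow detector sees only one of these fragments;
        # coupling the sources reveals the full scope of the incident.
        if len(stages) > 1 and len(sources) > 1:
            incidents[host] = sorted(stages)
    return incidents

if __name__ == "__main__":
    evs = [
        Event("pc-42", 1000.0, "proxy", "exploit_landing_page"),
        Event("pc-42", 1200.0, "endpoint", "dropper_executed"),
        Event("pc-42", 2000.0, "netflow", "beaconing"),
    ]
    print(correlate(evs))  # {'pc-42': ['command_and_control', 'delivery', 'installation']}
```

The requirement that both multiple stages and multiple sources agree is what distinguishes this from a single narrow-scope detector.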
Related papers
- Detecting Adversarial Attacks in Semantic Segmentation via Uncertainty Estimation: A Deep Analysis [12.133306321357999]
We propose an uncertainty-based method for detecting adversarial attacks on neural networks for semantic segmentation.
We conduct a detailed analysis of uncertainty-based detection of adversarial attacks and various state-of-the-art neural networks.
Our numerical experiments show the effectiveness of the proposed uncertainty-based detection method.
arXiv Detail & Related papers (2024-08-19T14:13:30Z)
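As a rough illustration of the uncertainty-based detection idea summarized above, the following sketch flags a segmentation input whose mean per-pixel softmax entropy is unusually high. It is a generic entropy-threshold baseline, not the authors' exact detector, and the threshold value is a placeholder to be calibrated on clean validation images.

```python
import numpy as np

def pixelwise_entropy(logits):
    """Shannon entropy of the softmax distribution at every pixel.
    logits: array of shape (classes, height, width)."""
    z = logits - logits.max(axis=0, keepdims=True)   # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=0)      # (height, width)

def looks_adversarial(logits, threshold=1.0):
    """Flag inputs whose mean uncertainty exceeds a calibrated threshold."""
    return pixelwise_entropy(logits).mean() > threshold
```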
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks that circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
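A minimal sketch of the transfer-based black-box attack that motivates the survey: craft an adversarial example on a locally available surrogate model (plain FGSM, in PyTorch) and apply it unchanged to an unseen target. The model names are placeholders, and FGSM is only the simplest of the attacks the survey covers.

```python
import torch

def fgsm(surrogate, x, y, eps=8 / 255):
    """Fast Gradient Sign Method on a local surrogate model."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(surrogate(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Transferability in action: no gradients from the target are needed.
# x_adv = fgsm(surrogate_model, images, labels)
# fooled = target_model(x_adv).argmax(1) != labels
```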
- DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications [54.93807822347193]
We show how to adapt attribution robustness estimation methods to a given domain, so as to take into account domain-specific plausibility.
Next, we provide two methods, adversarial training and FAR training, to mitigate the brittleness characterized by DARE.
Finally, we empirically validate our methods with extensive experiments on three established biomedical benchmarks.
arXiv Detail & Related papers (2023-07-05T08:11:40Z)
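Of the two mitigations mentioned above, adversarial training is the standard one; a minimal single-step sketch in PyTorch follows. FAR training, the paper's own method, is not reproduced here, and the perturbation budget is a placeholder.

```python
import torch

def adversarial_training_step(model, optimizer, x, y, eps=0.01):
    """One step of standard adversarial training: perturb the batch with
    FGSM, then update the model on the perturbed batch."""
    x_adv = x.clone().detach().requires_grad_(True)
    torch.nn.functional.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```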
- Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation [16.109860499330562]
We introduce an uncertainty-based approach for the detection of adversarial attacks in semantic segmentation.
We demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks.
arXiv Detail & Related papers (2023-05-22T08:36:35Z)
- Adversarial Machine Learning In Network Intrusion Detection Domain: A Systematic Review [0.0]
It has been found that deep learning models are vulnerable to data instances that can mislead the model into making incorrect classification decisions.
This survey explores research that employs different aspects of adversarial machine learning in the area of network intrusion detection.
arXiv Detail & Related papers (2021-12-06T19:10:23Z)
- TESDA: Transform Enabled Statistical Detection of Attacks in Deep Neural Networks [0.0]
We present TESDA, a low-overhead, flexible, and statistically grounded method for online detection of attacks.
Unlike most prior work, we require neither dedicated hardware to run in real-time, nor the presence of a Trojan trigger to detect discrepancies in behavior.
We empirically establish our method's usefulness and practicality across multiple architectures, datasets and diverse attacks.
arXiv Detail & Related papers (2021-10-16T02:10:36Z)
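TESDA's actual transforms are not reproduced here; the sketch below only conveys the flavor of low-overhead, statistically grounded online detection: fit a Gaussian to one layer's benign activations and flag inputs with an unusually large Mahalanobis distance. The percentile threshold is an assumed design choice.

```python
import numpy as np

class ActivationMonitor:
    """Statistical monitor over one layer's features."""

    def fit(self, feats):                      # feats: (n_samples, dim)
        self.mu = feats.mean(axis=0)
        cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
        self.prec = np.linalg.inv(cov)
        # Calibrate the alarm threshold for a ~1% false-positive budget.
        self.threshold = np.percentile(self._dist(feats), 99)
        return self

    def _dist(self, feats):
        c = feats - self.mu
        return np.einsum("ij,jk,ik->i", c, self.prec, c)  # squared Mahalanobis

    def flag(self, feats):
        return self._dist(feats) > self.threshold
```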
- Exploiting Multi-Object Relationships for Detecting Adversarial Attacks in Complex Scenes [51.65308857232767]
Vision systems that deploy Deep Neural Networks (DNNs) are known to be vulnerable to adversarial examples.
Recent research has shown that checking the intrinsic consistencies in the input data is a promising way to detect adversarial attacks.
We develop a novel approach to perform context consistency checks using language models.
arXiv Detail & Related papers (2021-08-19T00:52:10Z)
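A toy version of the context-consistency idea: check whether the set of detected objects is jointly plausible. The paper derives context from language models; the hand-written co-occurrence table below merely stands in for that and is entirely made up.

```python
from itertools import combinations

# Hypothetical plausibility scores for object pairs in benign scenes.
PLAUSIBILITY = {
    frozenset({"car", "stop sign"}): 0.9,
    frozenset({"car", "traffic light"}): 0.8,
    frozenset({"toaster", "stop sign"}): 0.01,
}

def context_consistent(detected_labels, min_score=0.05):
    """Reject scenes containing a contextually implausible pair of
    detections, a hint that one of them was adversarially injected."""
    for pair in combinations(set(detected_labels), 2):
        if PLAUSIBILITY.get(frozenset(pair), 0.5) < min_score:
            return False
    return True

print(context_consistent(["car", "stop sign"]))      # True
print(context_consistent(["toaster", "stop sign"]))  # False
```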
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision (e.g., rotation and jigsaw) benefits image tasks such as classification and recognition, it fails to provide the critical supervision signals needed to learn discriminative representations for segmentation tasks.
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
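A simplified reading of the agreement objective described above, written as an InfoNCE-style loss in PyTorch: the i-th clean output should agree with the i-th adversarial output and with no other. This is an assumption-laden sketch, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def agreement_loss(out_clean, out_adv, tau=0.1):
    """Contrastive agreement between outputs on clean images and their
    adversarial counterparts; matching pairs in the batch should agree."""
    a = F.normalize(out_clean.flatten(1), dim=1)   # (batch, features)
    b = F.normalize(out_adv.flatten(1), dim=1)
    logits = a @ b.t() / tau                       # pairwise similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)
```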
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
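One simple instance of a coverage-based runtime monitor in the spirit of this paper (the concrete coverage paradigms it uses are not reproduced): remember per-neuron activation bounds seen on benign data and flag inputs that drive too many neurons outside them. The tolerance is an assumed parameter.

```python
import numpy as np

class CoverageMonitor:
    """Range-coverage monitor over one layer's activations."""

    def fit(self, activations):                # (n_samples, n_neurons)
        self.lo = activations.min(axis=0)
        self.hi = activations.max(axis=0)
        return self

    def flag(self, activations, tolerance=2):
        """Unsafe if more than `tolerance` neurons leave their known range."""
        outside = (activations < self.lo) | (activations > self.hi)
        return outside.sum(axis=1) > tolerance
```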
- Understanding Object Detection Through An Adversarial Lens [14.976840260248913]
This paper presents a framework for analyzing and evaluating vulnerabilities of deep object detectors under an adversarial lens.
We demonstrate that the proposed framework can serve as a methodical benchmark for analyzing adversarial behaviors and risks in real-time object detection systems.
We conjecture that this framework can also serve as a tool to assess the security risks and the adversarial robustness of deep object detectors to be deployed in real-world applications.
arXiv Detail & Related papers (2020-07-11T18:41:47Z)
- Survey of Network Intrusion Detection Methods from the Perspective of the Knowledge Discovery in Databases Process [63.75363908696257]
We review the methods that have been applied to network data with the purpose of developing an intrusion detector.
We discuss the techniques used for the capture, preparation, and transformation of the data, as well as the data mining and evaluation methods.
As a result of this literature review, we investigate some open issues which will need to be considered for further research in the area of network security.
arXiv Detail & Related papers (2020-01-27T11:21:05Z)
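The KDD stages the survey walks through (capture, preparation and transformation, data mining, evaluation) can be lined up in a few lines of scikit-learn. The toy flow features, labels, and model choice below are illustrative, not drawn from the survey.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Capture: per-flow records (e.g. duration, bytes, packets); label 1 = attack.
X = rng.normal(size=(1000, 3))
y = (X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

# Preparation and transformation: split and scale the features.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# Data mining: train a classifier to act as the intrusion detector.
clf = RandomForestClassifier(random_state=0)
clf.fit(scaler.transform(X_train), y_train)

# Evaluation: report detection quality on held-out flows.
print(classification_report(y_test, clf.predict(scaler.transform(X_test))))
```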
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.