Insider Detection using Deep Autoencoder and Variational Autoencoder
Neural Networks
- URL: http://arxiv.org/abs/2109.02568v1
- Date: Mon, 6 Sep 2021 16:08:51 GMT
- Title: Insider Detection using Deep Autoencoder and Variational Autoencoder
Neural Networks
- Authors: Efthimios Pantelidis, Gueltoum Bendiab, Stavros Shiaeles, Nicholas
Kolokotronis
- Abstract summary: Insider attacks are one of the most challenging cybersecurity issues for companies, businesses and critical infrastructures.
In this paper, we aim to address this issue using two deep learning algorithms: the Autoencoder and the Variational Autoencoder.
In particular, we investigate the usefulness of applying these algorithms to automatically defend against potential internal threats, without human intervention.
- Score: 2.5234156040689237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Insider attacks are one of the most challenging cybersecurity issues for
companies, businesses and critical infrastructures. Despite the implemented
perimeter defences, the risk of this kind of attack is still very high. In
fact, the detection of insider attacks is a very complicated security task and
presents a serious challenge to the research community. In this paper, we aim
to address this issue using two deep learning algorithms: the Autoencoder and
the Variational Autoencoder. In particular, we investigate the usefulness of
applying these algorithms to automatically defend against potential internal
threats, without human intervention. The effectiveness of the two models is
evaluated on the public CERT Insider Threat Test dataset (CERT r4.2), which
includes both benign and malicious activities generated from 1000 simulated
users. Comparison with other models shows that the Variational Autoencoder
neural network provides the best overall performance, with greater detection
accuracy and a reasonable false positive rate.
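The reconstruction-error idea behind autoencoder-based anomaly detection can be sketched as follows. This is an illustrative linear autoencoder in plain NumPy, not the paper's implementation: the feature dimensions, toy data, and 95th-percentile threshold are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden=2, lr=0.02, epochs=2000):
    """Fit a linear autoencoder (encoder W1, decoder W2) by gradient descent."""
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden))
    W2 = rng.normal(scale=0.1, size=(hidden, d))
    for _ in range(epochs):
        H = X @ W1                            # encode
        err = H @ W2 - X                      # reconstruction error
        gW1 = (2 / n) * X.T @ (err @ W2.T)    # gradient w.r.t. encoder
        gW2 = (2 / n) * H.T @ err             # gradient w.r.t. decoder
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

def anomaly_score(X, W1, W2):
    """Mean squared reconstruction error per sample."""
    return np.mean((X @ W1 @ W2 - X) ** 2, axis=1)

# Toy data: benign activity vectors lie near a 2-D subspace of an 8-D space.
basis = rng.normal(size=(2, 8))
benign = 0.5 * rng.normal(size=(200, 2)) @ basis + 0.05 * rng.normal(size=(200, 8))

# Train only on benign behaviour; flag anything the model reconstructs poorly.
W1, W2 = train_autoencoder(benign)
threshold = np.percentile(anomaly_score(benign, W1, W2), 95)

# A malicious event deviating strongly from the benign profile scores high.
attack = 10 * rng.normal(size=(1, 8))
print(anomaly_score(attack, W1, W2)[0] > threshold)  # attack flagged as anomalous
```

A variational autoencoder follows the same detect-by-reconstruction logic but learns a probabilistic latent space, which the paper finds yields the better detection accuracy.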
Related papers
- SecCodePLT: A Unified Platform for Evaluating the Security of Code GenAI [47.11178028457252]
We develop SecCodePLT, a unified and comprehensive evaluation platform for code GenAIs' risks.
For insecure code, we introduce a new methodology for data creation that combines experts with automatic generation.
For cyberattack helpfulness, we construct samples to prompt a model to generate actual attacks, along with dynamic metrics in our environment.
arXiv Detail & Related papers (2024-10-14T21:17:22Z) - When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z) - Vulnerability Detection Using Two-Stage Deep Learning Models [0.0]
Two deep learning models were proposed for vulnerability detection in C/C++ source code.
The first stage is a CNN that detects whether the source code contains any vulnerability.
The second stage is a CNN-LSTM that classifies the vulnerability into one of 50 different types.
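The two-stage gating idea can be sketched as follows. The scoring functions here are trivial stand-ins for the paper's CNN and CNN-LSTM models, and the token list and class label are invented for illustration.

```python
# Stage 1: a binary detector decides whether the code is vulnerable at all.
# Stand-in for the paper's CNN: here, just a lookup for risky C tokens.
def stage1_is_vulnerable(tokens):
    return any(t in tokens for t in ("strcpy", "gets", "sprintf"))

# Stage 2: a multi-class model (50 classes in the paper) names the vulnerability.
# Stand-in for the CNN-LSTM; the label below is illustrative only.
def stage2_classify(tokens):
    if "strcpy" in tokens or "gets" in tokens:
        return "buffer overflow"
    return "unclassified"

def detect(tokens):
    """Run stage 2 only on code that stage 1 flags, mirroring the pipeline."""
    if not stage1_is_vulnerable(tokens):
        return None          # benign: skip the more expensive classifier
    return stage2_classify(tokens)

print(detect(["gets", "printf"]))   # → buffer overflow
print(detect(["printf"]))           # → None
```

The design choice is that the cheap binary stage filters out benign code, so the heavier multi-class stage only runs on the small fraction of flagged samples.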
arXiv Detail & Related papers (2023-05-08T22:12:34Z) - Zero Day Threat Detection Using Metric Learning Autoencoders [3.1965908200266173]
The proliferation of zero-day threats (ZDTs) to companies' networks has been immensely costly.
Deep learning methods are an attractive option for their ability to capture highly-nonlinear behavior patterns.
The models presented here are also trained and evaluated with two more datasets, and continue to show promising results even when generalizing to new network topologies.
arXiv Detail & Related papers (2022-11-01T13:12:20Z) - Zero Day Threat Detection Using Graph and Flow Based Security Telemetry [3.3029515721630855]
Zero Day Threats (ZDT) are novel methods used by malicious actors to attack and exploit information technology (IT) networks or infrastructure.
In this paper, we introduce a deep learning based approach to Zero Day Threat detection that can generalize, scale, and effectively identify threats in near real-time.
arXiv Detail & Related papers (2022-05-04T19:30:48Z) - TESDA: Transform Enabled Statistical Detection of Attacks in Deep Neural
Networks [0.0]
We present TESDA, a low-overhead, flexible, and statistically grounded method for online detection of attacks.
Unlike most prior work, we require neither dedicated hardware to run in real-time, nor the presence of a Trojan trigger to detect discrepancies in behavior.
We empirically establish our method's usefulness and practicality across multiple architectures, datasets and diverse attacks.
arXiv Detail & Related papers (2021-10-16T02:10:36Z) - Anomaly Detection Based on Selection and Weighting in Latent Space [73.01328671569759]
We propose a novel selection-and-weighting-based anomaly detection framework called SWAD.
Experiments on both benchmark and real-world datasets have shown the effectiveness and superiority of SWAD.
arXiv Detail & Related papers (2021-03-08T10:56:38Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Bayesian Optimization with Machine Learning Algorithms Towards Anomaly
Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z) - Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have disrupted the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.