Detecting the Insider Threat with Long Short Term Memory (LSTM) Neural
Networks
- URL: http://arxiv.org/abs/2007.11956v1
- Date: Mon, 20 Jul 2020 23:29:01 GMT
- Title: Detecting the Insider Threat with Long Short Term Memory (LSTM) Neural
Networks
- Authors: Eduardo Lopez, Kamran Sartipi
- Abstract summary: In this study, we use deep learning, specifically Long Short-Term Memory (LSTM) recurrent networks, to enable the detection of insider threats.
We demonstrate through a very large, anonymized dataset how LSTM uses the sequenced nature of the data for reducing the search space and making the work of a security analyst more effective.
- Score: 0.799536002595393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Information systems enable many organizational processes in every industry.
The efficiencies and effectiveness in the use of information technologies
create an unintended byproduct: misuse by existing users or somebody
impersonating them - an insider threat. Detecting the insider threat may be
possible if thorough analysis of electronic logs, capturing user behaviors,
takes place. However, logs are usually very large and unstructured, posing
significant challenges for organizations. In this study, we use deep learning,
specifically Long Short-Term Memory (LSTM) recurrent networks, to enable the
detection. We demonstrate through a very large, anonymized dataset
how LSTM uses the sequenced nature of the data for reducing the search space
and making the work of a security analyst more effective.
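The core idea of the paper, scoring sequenced user actions with an LSTM so that unlikely sequences surface first for the analyst, can be illustrated with a minimal sketch. Everything below is hypothetical: the action vocabulary, sessions, and weights are invented, the weights are untrained, and this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_cell(x, h, c, W, U, b):
    """One LSTM step: gates computed from input x and previous hidden state h."""
    z = W @ x + U @ h + b               # stacked pre-activations, shape (4*H,)
    H = h.shape[0]
    i = 1 / (1 + np.exp(-z[0:H]))       # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))     # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:4*H])             # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Hypothetical setup: each log event is one of 8 action types, one-hot encoded.
N_ACTIONS, HIDDEN = 8, 16
W = rng.normal(0, 0.1, (4 * HIDDEN, N_ACTIONS))
U = rng.normal(0, 0.1, (4 * HIDDEN, HIDDEN))
b = np.zeros(4 * HIDDEN)
V = rng.normal(0, 0.1, (N_ACTIONS, HIDDEN))  # readout to next-action logits

def anomaly_score(action_ids):
    """Average negative log-likelihood of a user's action sequence.
    Higher scores mean less expected sequences, triaged first by the analyst."""
    h, c = np.zeros(HIDDEN), np.zeros(HIDDEN)
    nll = 0.0
    for prev, nxt in zip(action_ids, action_ids[1:]):
        x = np.eye(N_ACTIONS)[prev]
        h, c = lstm_cell(x, h, c, W, U, b)
        logits = V @ h
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        nll -= np.log(probs[nxt])
    return nll / max(len(action_ids) - 1, 1)

# Two hypothetical sessions, ranked so the highest-scoring is reviewed first.
sessions = {"alice": [0, 1, 2, 1, 2, 1], "bob": [0, 7, 3, 6, 5, 4]}
ranked = sorted(sessions, key=lambda u: anomaly_score(sessions[u]), reverse=True)
print(ranked)
```

In a trained model the ranking would reflect learned user behavior; here the point is only the search-space reduction the abstract describes: the analyst reviews a short ranked list instead of the raw log.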
Related papers
- KiNETGAN: Enabling Distributed Network Intrusion Detection through Knowledge-Infused Synthetic Data Generation [0.0]
We propose a knowledge-infused Generative Adversarial Network for generating synthetic network activity data (KiNETGAN)
Our approach enhances the resilience of distributed intrusion detection while addressing privacy concerns.
arXiv Detail & Related papers (2024-05-26T08:02:02Z)
- Intrusion Detection in Internet of Things using Convolutional Neural Networks [4.718295605140562]
We propose a novel solution to the intrusion attacks against IoT devices using CNNs.
The data is encoded so that convolutional operations capture patterns in the sensor data over time.
The experimental results show significant improvement in both true positive rate and false positive rate compared to the baseline using LSTM.
arXiv Detail & Related papers (2022-11-18T07:27:07Z)
- Big data analysis and distributed deep learning for next-generation intrusion detection system optimization [0.0]
This paper proposes a solution to detect new threats with higher detection rate and lower false positive than already used IDS.
We achieve these results by using a deep recurrent neural network, Long Short-Term Memory (LSTM), on top of the Apache Spark framework.
We propose a model that learns the network's abstract normal behavior from sequences of millions of packets in context, and analyzes them in near real-time to detect point, collective, and contextual anomalies.
arXiv Detail & Related papers (2022-09-28T09:46:16Z)
- Zero Day Threat Detection Using Graph and Flow Based Security Telemetry [3.3029515721630855]
Zero Day Threats (ZDT) are novel methods used by malicious actors to attack and exploit information technology (IT) networks or infrastructure.
In this paper, we introduce a deep learning based approach to Zero Day Threat detection that can generalize, scale, and effectively identify threats in near real-time.
arXiv Detail & Related papers (2022-05-04T19:30:48Z)
- LogLAB: Attention-Based Labeling of Log Data Anomalies via Weak Supervision [63.08516384181491]
We present LogLAB, a novel modeling approach for automated labeling of log messages without requiring manual work by experts.
Our method relies on estimated failure time windows provided by monitoring systems to produce precise labeled datasets in retrospect.
Our evaluation shows that LogLAB consistently outperforms nine benchmark approaches across three different datasets and maintains an F1-score of more than 0.98 even at large failure time windows.
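The retrospective weak-labeling step that LogLAB builds on can be sketched as follows. The log messages, timestamps, and failure window below are invented for illustration; the sketch shows only the supervision signal (messages inside an estimated failure window marked as candidate anomalies), not LogLAB's attention-based model.

```python
from datetime import datetime

# Hypothetical log messages: (timestamp, text)
logs = [
    (datetime(2021, 11, 2, 15, 0), "disk healthy"),
    (datetime(2021, 11, 2, 15, 5), "I/O error on /dev/sda"),
    (datetime(2021, 11, 2, 15, 6), "raid degraded"),
    (datetime(2021, 11, 2, 16, 0), "disk healthy"),
]

# Estimated failure time windows, as provided by a monitoring system (assumed input).
failure_windows = [(datetime(2021, 11, 2, 15, 4), datetime(2021, 11, 2, 15, 10))]

def weak_labels(logs, windows):
    """Retrospectively mark messages inside any failure window as candidate
    anomalies (1) and everything else as normal (0): the weak supervision
    signal a model would then refine into precise labels."""
    def in_window(ts):
        return any(start <= ts <= end for start, end in windows)
    return [(text, int(in_window(ts))) for ts, text in logs]

print(weak_labels(logs, failure_windows))
```

The wider the failure window, the noisier this signal gets, which is why the evaluation above stresses that the F1-score holds up even at large windows.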
arXiv Detail & Related papers (2021-11-02T15:16:08Z)
- Robust and Transferable Anomaly Detection in Log Data using Pre-Trained Language Models [59.04636530383049]
Anomalies or failures in large computer systems, such as the cloud, have an impact on a large number of users.
We propose a framework for anomaly detection in log data, as a major troubleshooting source of system information.
arXiv Detail & Related papers (2021-02-23T09:17:05Z)
- Experimental Review of Neural-based approaches for Network Intrusion Management [8.727349339883094]
We provide an experimental-based review of neural-based methods applied to intrusion detection issues.
We offer a complete view of the most prominent neural-based techniques relevant to intrusion detection, including deep learning approaches and weightless neural networks.
Our evaluation quantifies the value of neural networks, particularly when state-of-the-art datasets are used to train the models.
arXiv Detail & Related papers (2020-09-18T18:32:24Z)
- Object Tracking through Residual and Dense LSTMs [67.98948222599849]
Deep learning-based trackers based on LSTMs (Long Short-Term Memory) recurrent neural networks have emerged as a powerful alternative.
DenseLSTMs outperform Residual and regular LSTM, and offer a higher resilience to nuisances.
Our case study supports the adoption of residual-based RNNs for enhancing the robustness of other trackers.
arXiv Detail & Related papers (2020-06-22T08:20:17Z)
- AutoOD: Automated Outlier Detection via Curiosity-guided Search and Self-imitation Learning [72.99415402575886]
Outlier detection is an important data mining task with numerous practical applications.
We propose AutoOD, an automated outlier detection framework, which aims to search for an optimal neural network model.
Experimental results on various real-world benchmark datasets demonstrate that the deep model identified by AutoOD achieves the best performance.
arXiv Detail & Related papers (2020-06-19T18:57:51Z)
- Sequential Recommender via Time-aware Attentive Memory Network [67.26862011527986]
We propose a temporal gating methodology to improve attention mechanism and recurrent units.
We also propose a Multi-hop Time-aware Attentive Memory network to integrate long-term and short-term preferences.
Our approach is scalable for candidate retrieval tasks and can be viewed as a non-linear generalization of latent factorization for dot-product based Top-K recommendation.
arXiv Detail & Related papers (2020-05-18T11:29:38Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs)
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.