Anomaly Detection in OKTA Logs using Autoencoders
- URL: http://arxiv.org/abs/2411.07314v1
- Date: Mon, 11 Nov 2024 19:15:05 GMT
- Title: Anomaly Detection in OKTA Logs using Autoencoders
- Authors: Jericho Cain, Hayden Beadles, Karthik Venkatesan
- Abstract summary: Okta logs are used to detect cybersecurity events using various rule-based models with restricted look-back periods.
These functions have limitations, such as a limited retrospective analysis, a predefined rule set, and susceptibility to generating false positives.
We adopt unsupervised techniques, specifically employing autoencoders.
- Score: 0.0
- Abstract: Okta logs are used today to detect cybersecurity events using various rule-based models with restricted look-back periods. These functions have limitations, such as a limited retrospective analysis, a predefined rule set, and susceptibility to generating false positives. To address this, we adopt unsupervised techniques, specifically employing autoencoders. To properly use an autoencoder, we need to transform the log data we receive from our users and reduce its complexity. This transformed and filtered data is then fed into the autoencoder, and the output is evaluated.
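A minimal sketch of the pipeline the abstract describes, assuming a small dense autoencoder in PyTorch and an illustrative 99th-percentile error threshold; the feature vector stands in for whatever transformed and filtered Okta fields the authors actually use:

```python
# Minimal sketch of the described pipeline: featurize log events, train an
# autoencoder on them, and flag events whose reconstruction error is high.
# The feature contents and the 99th-percentile threshold are illustrative
# assumptions, not the authors' published configuration.
import numpy as np
import torch
import torch.nn as nn

class LogAutoencoder(nn.Module):
    def __init__(self, n_features, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, n_features),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# x: transformed and filtered log features, e.g. encoded event type,
# hour of day, client fingerprint (hypothetical choices).
x = torch.rand(1000, 16)
model = LogAutoencoder(n_features=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = ((model(x) - x) ** 2).mean()
    loss.backward()
    opt.step()

# Score every event by per-row reconstruction error; flag the tail.
with torch.no_grad():
    err = ((model(x) - x) ** 2).mean(dim=1).numpy()
threshold = np.quantile(err, 0.99)   # assumed cutoff
anomalies = np.nonzero(err > threshold)[0]
```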
Related papers
- The Conformer Encoder May Reverse the Time Dimension [53.9351497436903]
We analyze the initial behavior of the decoder cross-attention mechanism and find that it encourages the Conformer encoder self-attention to build a connection between the initial frames and all other informative frames.
We propose several methods and ideas of how this flipping can be avoided.
arXiv Detail & Related papers (2024-10-01T13:39:05Z)
- Hold Me Tight: Stable Encoder-Decoder Design for Speech Enhancement [1.4037575966075835]
1-D filters on raw audio are hard to train and often suffer from instabilities.
We address these problems with hybrid solutions, combining theory-driven and data-driven approaches.
arXiv Detail & Related papers (2024-08-30T15:49:31Z)
- LogFormer: A Pre-train and Tuning Pipeline for Log Anomaly Detection [73.69399219776315]
We propose a unified Transformer-based framework for Log anomaly detection (LogFormer) to improve the generalization ability across different domains.
Specifically, our model is first pre-trained on the source domain to obtain shared semantic knowledge of log data.
Then, we transfer such knowledge to the target domain via shared parameters.
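The two-stage recipe can be sketched as follows; the tiny Transformer, the frozen sharing, and the linear heads are illustrative assumptions, not LogFormer's actual architecture:

```python
# Sketch of the pre-train / transfer recipe: encoder parameters learned on
# the source domain are shared with the target-domain model. Sizes, the
# frozen encoder, and the linear heads are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
source_head = nn.Linear(64, 2)   # stage 1: pre-train encoder + head on source logs
# ... standard training loop over source-domain sequences ...

target_head = nn.Linear(64, 2)   # stage 2: new head for the target domain
for p in encoder.parameters():
    p.requires_grad = False      # one way to share: freeze the pre-trained encoder

x = torch.randn(8, 32, 64)       # batch of embedded target-domain log sequences
logits = target_head(encoder(x).mean(dim=1))
```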
arXiv Detail & Related papers (2024-01-09T12:55:21Z)
- UNFUSED: UNsupervised Finetuning Using SElf supervised Distillation [53.06337011259031]
We introduce UnFuSeD, a novel approach to leverage self-supervised learning for audio classification.
We use the encoder to generate pseudo-labels for unsupervised fine-tuning before the actual fine-tuning step.
UnFuSeD achieves state-of-the-art results on the LAPE Benchmark, significantly outperforming all our baselines.
arXiv Detail & Related papers (2023-03-10T02:43:36Z)
- A Robust and Explainable Data-Driven Anomaly Detection Approach For Power Electronics [56.86150790999639]
We present two anomaly detection and classification approaches, namely the Matrix Profile algorithm and the anomaly transformer.
The Matrix Profile algorithm is shown to be well suited as a generalizable approach for detecting real-time anomalies in streaming time-series data.
A series of custom filters is created and added to the detector to tune its sensitivity, recall, and detection accuracy.
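As a concrete illustration of the Matrix Profile side, here is a minimal sketch using the stumpy package (our choice of library, not necessarily the authors'); the window length is an assumption:

```python
# Matrix Profile sketch with `stumpy` (an assumed implementation choice):
# each subsequence is scored by the distance to its nearest neighbor
# elsewhere in the series, so discordant subsequences score highest.
import numpy as np
import stumpy

ts = np.random.rand(2000)             # stand-in for a streaming sensor signal
m = 50                                # subsequence window length (assumed)
mp = stumpy.stump(ts, m)              # column 0 holds nearest-neighbor distances
scores = mp[:, 0].astype(float)
anomaly_start = int(np.argmax(scores))  # start index of the most anomalous window
```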
arXiv Detail & Related papers (2022-09-23T06:09:35Z)
- Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation [18.59238482225795]
We extend Mai et al.'s proposed Emb2Emb method to learn mappings in the embedding space of an autoencoder.
We propose Bag-of-AEs Autoencoders (BoV-AEs), which encode the text into a variable-size bag of vectors that grows with the size of the text.
This allows encoding and reconstructing much longer texts than standard autoencoders.
arXiv Detail & Related papers (2021-10-13T19:30:40Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- Anomaly Detection With Partitioning Overfitting Autoencoder Ensembles [0.0]
We propose POTATOES (Partitioning OverfiTting AuTOencoder EnSemble), a new method for unsupervised outlier detection (UOD).
The idea is not to regularize at all, but rather to randomly partition the data into sufficiently many equally sized parts, overfit each part with its own autoencoder, and use the maximum over all autoencoder reconstruction errors as the anomaly score.
Our method's implementation is made publicly available so the reader can recreate the results in this paper as well as apply the method to other autoencoders and datasets.
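The recipe is compact enough to sketch directly; the tiny autoencoder, partition count, and training budget below are illustrative stand-ins, not the paper's configuration:

```python
# POTATOES sketch: random equal-size partitions, one deliberately overfit
# autoencoder per partition, anomaly score = maximum reconstruction error
# across the ensemble. Model size and epochs are illustrative choices.
import torch
import torch.nn as nn

def make_ae(n_features):
    # Wide, unregularized autoencoder so each partition can be overfit.
    return nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                         nn.Linear(64, n_features))

def overfit(ae, x, epochs=200):
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((ae(x) - x) ** 2).mean()
        loss.backward()
        opt.step()
    return ae

x = torch.rand(600, 10)                       # toy dataset
k = 3                                         # number of partitions (assumed)
parts = torch.randperm(len(x)).reshape(k, -1) # equally sized random parts
ensemble = [overfit(make_ae(10), x[p]) for p in parts]

with torch.no_grad():
    errs = torch.stack([((ae(x) - x) ** 2).mean(dim=1) for ae in ensemble])
scores = errs.max(dim=0).values               # per-point anomaly score
```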
arXiv Detail & Related papers (2020-09-06T15:35:53Z)
- Revisiting Role of Autoencoders in Adversarial Settings [32.22707594954084]
This paper presents the inherent adversarial robustness of autoencoders.
We believe that our discovery of the adversarial robustness of autoencoders can provide clues for future research and applications in adversarial defense.
arXiv Detail & Related papers (2020-05-21T16:01:23Z)
- On Sparsifying Encoder Outputs in Sequence-to-Sequence Models [90.58793284654692]
We take Transformer as the testbed and introduce a layer of gates in-between the encoder and the decoder.
The gates are regularized using the expected value of the sparsity-inducing L0 penalty.
We investigate the effects of this sparsification on two machine translation and two summarization tasks.
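A much-simplified sketch of the gating idea follows; the paper regularizes with the expected L0 penalty (hard-concrete style gates), whereas the sigmoid gate and mean-based surrogate below are simplifications for illustration:

```python
# Simplified sketch of gating encoder outputs toward sparsity. The paper
# uses the expected L0 penalty; a per-position sigmoid gate with a mean
# penalty stands in here, which only approximates that behaviour.
import torch
import torch.nn as nn

class OutputGate(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.proj = nn.Linear(d_model, 1)

    def forward(self, enc_out):
        gate = torch.sigmoid(self.proj(enc_out))   # (batch, time, 1) in (0, 1)
        return enc_out * gate, gate.mean()         # gated states + penalty term

enc_out = torch.randn(8, 32, 64)   # encoder states: (batch, time, d_model)
gater = OutputGate(64)
gated, penalty = gater(enc_out)    # feed `gated` to the decoder instead
task_loss = gated.pow(2).mean()    # placeholder for the real task loss
loss = task_loss + 1e-3 * penalty  # trade off sparsity against accuracy
```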
arXiv Detail & Related papers (2020-04-24T16:57:52Z)
- Batch norm with entropic regularization turns deterministic autoencoders into generative models [14.65554816300632]
The variational autoencoder is a well-defined deep generative model.
We show in this work that utilizing batch normalization as a source for non-determinism suffices to turn deterministic autoencoders into generative models.
arXiv Detail & Related papers (2020-02-25T02:42:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.