syslrn: Learning What to Monitor for Efficient Anomaly Detection
- URL: http://arxiv.org/abs/2203.15324v1
- Date: Tue, 29 Mar 2022 08:10:06 GMT
- Title: syslrn: Learning What to Monitor for Efficient Anomaly Detection
- Authors: Davide Sanvito, Giuseppe Siracusano, Sharan Santhanam, Roberto Gonzalez, Roberto Bifulco
- Abstract summary: syslrn is a system that first builds an understanding of a target system offline, and then tailors the online monitoring instrumentation based on the learned identifiers of normal behavior.
We show in a case study for the monitoring of failures that it can outperform state-of-the-art log-analysis systems with little overhead.
- Score: 3.071931695335886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While monitoring system behavior to detect anomalies and failures is
important, existing methods based on log-analysis can only be as good as the
information contained in the logs, and other approaches that look at the
OS-level software state introduce high overheads. We tackle the problem with
syslrn, a system that first builds an understanding of a target system offline,
and then tailors the online monitoring instrumentation based on the learned
identifiers of normal behavior. While our syslrn prototype is still preliminary
and lacks many features, we show in a case study for the monitoring of
OpenStack failures that it can outperform state-of-the-art log-analysis systems
with little overhead.
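To make the offline/online split concrete, below is a minimal, hypothetical sketch of the idea described in the abstract: learn from traces of normal runs which OS-level event identifiers (and their typical counts) characterize normal behavior, then instrument and check only those online. All function names, event identifiers, and the count-range heuristic are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the offline-learn / online-monitor split described in
# the abstract. Names, events, and the count-range heuristic are assumptions.
from collections import Counter

def learn_normal_profile(normal_runs):
    """Offline phase: from traces of normal runs (lists of OS-level event
    identifiers), keep only the identifiers that appear in every run and
    record the range of per-run counts observed for each of them."""
    per_run_counts = [Counter(run) for run in normal_runs]
    stable_ids = set.intersection(*(set(c) for c in per_run_counts))
    profile = {}
    for event_id in stable_ids:
        counts = [c[event_id] for c in per_run_counts]
        profile[event_id] = (min(counts), max(counts))
    return profile  # the reduced set of identifiers worth instrumenting online

def monitor_online(run_events, profile):
    """Online phase: count only the learned identifiers and flag a run as
    anomalous if any count falls outside the range seen during training."""
    counts = Counter(e for e in run_events if e in profile)
    anomalies = []
    for event_id, (low, high) in profile.items():
        if not (low <= counts[event_id] <= high):
            anomalies.append(event_id)
    return anomalies

# Toy usage: two normal runs, then a run missing an expected event.
profile = learn_normal_profile([
    ["sock_open", "db_write", "sock_close"],
    ["sock_open", "db_write", "db_write", "sock_close"],
])
print(monitor_online(["sock_open", "sock_close"], profile))  # ['db_write']
```

The point of the split is that the broad, expensive observation happens offline, while the online monitor only has to track the small learned profile, which is where the low overhead reported in the case study comes from.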
Related papers
- LogSD: Detecting Anomalies from System Logs through Self-supervised Learning and Frequency-based Masking [14.784236273395017]
We propose LogSD, a novel semi-supervised approach based on self-supervised learning and frequency-based masking.
We show that LogSD significantly outperforms eight state-of-the-art benchmark methods.
arXiv Detail & Related papers (2024-04-17T12:00:09Z)
- Log Summarisation for Defect Evolution Analysis [14.055261850785456]
We suggest an online, semantics-based approach to clustering error logs.
We also introduce a novel metric to evaluate the performance of temporal log clusters.
arXiv Detail & Related papers (2024-03-13T09:18:46Z)
- MoniLog: An Automated Log-Based Anomaly Detection System for Cloud Computing Infrastructures [3.04585143845864]
MoniLog is a distributed approach to detect real-time anomalies within large-scale environments.
It aims to detect sequential and quantitative anomalies within a multi-source log stream.
arXiv Detail & Related papers (2023-04-24T09:21:52Z)
- Interactive System-wise Anomaly Detection [66.3766756452743]
Anomaly detection plays a fundamental role in various applications.
Existing methods struggle to handle scenarios where the instances are systems whose characteristics are not readily observed as data.
We develop an end-to-end approach which includes an encoder-decoder module that learns system embeddings.
arXiv Detail & Related papers (2023-04-21T02:20:24Z)
- PULL: Reactive Log Anomaly Detection Based On Iterative PU Learning [58.85063149619348]
We propose PULL, an iterative log analysis method for reactive anomaly detection based on estimated failure time windows.
Our evaluation shows that PULL consistently outperforms ten benchmark baselines across three different datasets.
arXiv Detail & Related papers (2023-01-25T16:34:43Z)
- Leveraging Log Instructions in Log-based Anomaly Detection [0.5949779668853554]
We propose a method for reliable and practical anomaly detection from system logs.
It overcomes a common limitation of related work by building an anomaly detection model with log instructions from the source code of 1000+ GitHub projects.
The proposed method, named ADLILog, combines the log instructions and the data from the system of interest (target system) to learn a deep neural network model.
arXiv Detail & Related papers (2022-07-07T10:22:10Z)
- LogLAB: Attention-Based Labeling of Log Data Anomalies via Weak Supervision [63.08516384181491]
We present LogLAB, a novel modeling approach for automated labeling of log messages without requiring manual work by experts.
Our method relies on estimated failure time windows provided by monitoring systems to produce precise labeled datasets in retrospect (a minimal sketch of this labeling idea follows the related-papers list below).
Our evaluation shows that LogLAB consistently outperforms nine benchmark approaches across three different datasets and maintains an F1-score of more than 0.98 even at large failure time windows.
arXiv Detail & Related papers (2021-11-02T15:16:08Z)
- A2Log: Attentive Augmented Log Anomaly Detection [53.06341151551106]
Anomaly detection becomes increasingly important for the dependability and serviceability of IT services.
Existing unsupervised methods need anomaly examples to obtain a suitable decision boundary.
We develop A2Log, which is an unsupervised anomaly detection method consisting of two steps: anomaly scoring and anomaly decision.
arXiv Detail & Related papers (2021-09-20T13:40:21Z)
- Experience Report: Deep Learning-based System Log Analysis for Anomaly Detection [30.52620190783608]
We provide a review and evaluation of five popular models used by six state-of-the-art anomaly detectors.
Four of the selected methods are unsupervised and the remaining two are supervised.
We believe our work can serve as a basis in this field and contribute to future academic research and industrial applications.
arXiv Detail & Related papers (2021-07-13T08:10:47Z)
- Robust and Transferable Anomaly Detection in Log Data using Pre-Trained Language Models [59.04636530383049]
Anomalies or failures in large computer systems, such as the cloud, have an impact on a large number of users.
We propose a framework for anomaly detection in log data, which is a major source of system information for troubleshooting.
arXiv Detail & Related papers (2021-02-23T09:17:05Z)
- Self-Attentive Classification-Based Anomaly Detection in Unstructured Logs [59.04636530383049]
We propose Logsy, a classification-based method to learn log representations.
We show an average improvement of 0.25 in F1 score compared to previous methods.
arXiv Detail & Related papers (2020-08-21T07:26:55Z)
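Two of the entries above (PULL and LogLAB, as referenced from the LogLAB summary) build labeled datasets retrospectively from estimated failure time windows. The snippet below is a minimal sketch of that weak-labeling idea, assuming timestamped log records and a list of (start, end) windows reported by a monitoring system; the data layout and function names are illustrative and not taken from either paper.

```python
# Minimal sketch of retrospective weak labeling with estimated failure time
# windows, as described for PULL and LogLAB above. Layout and names are
# assumptions for illustration only.
from datetime import datetime, timedelta

def label_logs(records, failure_windows):
    """Mark every log record whose timestamp falls inside any estimated
    failure window as 'abnormal' (weak label); everything else is 'normal'."""
    labeled = []
    for timestamp, message in records:
        in_window = any(start <= timestamp <= end for start, end in failure_windows)
        labeled.append((timestamp, message, "abnormal" if in_window else "normal"))
    return labeled

# Toy usage: one ten-minute failure window reported by a monitoring system.
t0 = datetime(2024, 1, 1, 12, 0)
windows = [(t0, t0 + timedelta(minutes=10))]
logs = [
    (t0 - timedelta(minutes=5), "request served"),
    (t0 + timedelta(minutes=2), "connection reset by peer"),
]
print(label_logs(logs, windows))
```

Labels produced this way are noisy by construction, which is why the papers pair them with dedicated learning schemes (iterative PU learning in PULL, attention-based modeling in LogLAB) rather than using them directly.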
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.