Failure Identification from Unstable Log Data using Deep Learning
- URL: http://arxiv.org/abs/2204.02636v1
- Date: Wed, 6 Apr 2022 07:41:48 GMT
- Title: Failure Identification from Unstable Log Data using Deep Learning
- Authors: Jasmin Bogatinovski, Sasho Nedelkoski, Li Wu, Jorge Cardoso, Odej Kao
- Abstract summary: We present CLog as a method for failure identification.
By representing the log data as sequences of subprocesses instead of sequences of log events, the effect of the unstable log data is reduced.
Our experimental results demonstrate that the learned subprocess representations reduce the instability in the input.
- Score: 0.27998963147546146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The reliability of cloud platforms is of significant relevance because
society increasingly relies on complex software systems running on the cloud.
To improve it, cloud providers are automating various maintenance tasks, with
failure identification frequently being considered. The precondition for
automation is the availability of observability tools, with system logs
commonly being used. The focus of this paper is log-based failure
identification. This problem is challenging because of the instability of the
log data and the incompleteness of the explicit logging failure coverage within
the code. To address the two challenges, we present CLog as a method for
failure identification. The key idea presented herein is based on our
observation that by representing the log data as sequences of subprocesses
instead of sequences of log events, the effect of the unstable log data is
reduced. CLog introduces a novel subprocess extraction method that uses a
context-aware neural network and clustering methods to extract meaningful
subprocesses. The direct modeling of log event contexts allows the
identification of failures with respect to abrupt context changes,
addressing the challenge of insufficient logging failure coverage. Our
experimental results demonstrate that the learned subprocess representations
reduce the instability in the input, allowing CLog to outperform the baselines
on the failure identification subproblems: 1) failure detection by 9-24% on the
F1 score and 2) failure type identification by 7% on the macro-averaged F1 score.
Further analysis shows a negative correlation between the instability in the
input event sequences and the detection performance, in a model-agnostic manner.
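The abstract describes the pipeline only at a high level, so the following is a minimal sketch of the subprocess idea rather than CLog's actual architecture: the context-aware neural encoder is replaced by a simple context-window average, the clustering step by k-means, and all event names, embedding sizes, and cluster counts are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of the abstract's core idea:
# 1) embed each log event together with its local context,
# 2) cluster the context embeddings so that each cluster acts as a "subprocess",
# 3) re-express every session as a sequence of subprocess IDs, which is less
#    sensitive to individual log-event changes than the raw event sequence.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy event vocabulary and random event embeddings (stand-ins for learned ones).
vocab = ["conn_open", "auth_ok", "read", "write", "conn_close", "timeout"]
event_vecs = {e: rng.normal(size=16) for e in vocab}

def context_embedding(events, i, window=2):
    """Average the embeddings of an event and its +/- `window` neighbours,
    a crude stand-in for the paper's context-aware neural encoder."""
    lo, hi = max(0, i - window), min(len(events), i + window + 1)
    return np.mean([event_vecs[e] for e in events[lo:hi]], axis=0)

def to_subprocess_sequences(sessions, n_subprocesses=3):
    """Cluster all context embeddings and map each event to its cluster ID,
    turning event sequences into shorter, more stable subprocess sequences."""
    ctx = [context_embedding(s, i) for s in sessions for i in range(len(s))]
    km = KMeans(n_clusters=n_subprocesses, n_init=10, random_state=0).fit(ctx)
    out, k = [], 0
    for s in sessions:
        labels = km.labels_[k:k + len(s)]
        k += len(s)
        # Collapse consecutive repeats: "1 1 1 2 2" -> "1 2".
        out.append([int(c) for j, c in enumerate(labels) if j == 0 or c != labels[j - 1]])
    return out

sessions = [
    ["conn_open", "auth_ok", "read", "write", "conn_close"],
    ["conn_open", "auth_ok", "read", "timeout", "timeout", "conn_close"],
]
print(to_subprocess_sequences(sessions))
```

In CLog, the resulting subprocess sequences would then feed the downstream failure detection and failure type identification models; the point of the sketch is only that small perturbations of individual events tend to leave the coarser subprocess sequence unchanged.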
Related papers
- Demystifying and Extracting Fault-indicating Information from Logs for Failure Diagnosis [29.800380941293277]
Engineers prioritize two categories of log information for diagnosis: fault-indicating descriptions and fault-indicating parameters.
We propose an approach to automatically extract fault-indicating information from logs for fault diagnosis, named LoFI.
LoFI outperforms all baseline methods by a significant margin, achieving an absolute improvement of 25.8-37.9 in F1 over the best baseline method, ChatGPT.
arXiv Detail & Related papers (2024-09-20T15:00:47Z)
- LogFormer: A Pre-train and Tuning Pipeline for Log Anomaly Detection [73.69399219776315]
We propose a unified Transformer-based framework for Log anomaly detection (LogFormer) to improve the generalization ability across different domains.
Specifically, our model is first pre-trained on the source domain to obtain shared semantic knowledge of log data.
Then, we transfer such knowledge to the target domain via shared parameters.
arXiv Detail & Related papers (2024-01-09T12:55:21Z)
- RAPID: Training-free Retrieval-based Log Anomaly Detection with PLM considering Token-level information [7.861095039299132]
The need for log anomaly detection is growing, especially in real-world applications.
Traditional deep learning-based anomaly detection models require dataset-specific training, leading to corresponding delays.
We introduce RAPID, a model that capitalizes on the inherent features of log data to enable anomaly detection without training delays.
arXiv Detail & Related papers (2023-11-09T06:11:44Z)
- EvLog: Identifying Anomalous Logs over Software Evolution [31.46106509190191]
We propose a novel unsupervised approach named Evolving Log extractor (EvLog) to process logs without parsing.
EvLog implements an anomaly discriminator with an attention mechanism to identify anomalous logs and avoid the issues caused by unstable sequences.
EvLog has shown effectiveness in two real-world system evolution log datasets with an average F1 score of 0.955 and 0.847 in the intra-version setting and inter-version setting, respectively.
arXiv Detail & Related papers (2023-06-02T12:58:00Z)
- PULL: Reactive Log Anomaly Detection Based On Iterative PU Learning [58.85063149619348]
We propose PULL, an iterative log analysis method for reactive anomaly detection based on estimated failure time windows.
Our evaluation shows that PULL consistently outperforms ten benchmark baselines across three different datasets.
arXiv Detail & Related papers (2023-01-25T16:34:43Z)
- Leveraging Log Instructions in Log-based Anomaly Detection [0.5949779668853554]
We propose a method for reliable and practical anomaly detection from system logs.
It overcomes the common disadvantage of related works by building an anomaly detection model with log instructions from the source code of 1000+ GitHub projects.
The proposed method, named ADLILog, combines the log instructions and the data from the system of interest (target system) to learn a deep neural network model.
arXiv Detail & Related papers (2022-07-07T10:22:10Z)
- LogLAB: Attention-Based Labeling of Log Data Anomalies via Weak Supervision [63.08516384181491]
We present LogLAB, a novel modeling approach for automated labeling of log messages without requiring manual work by experts.
Our method relies on estimated failure time windows provided by monitoring systems to produce precise labeled datasets in retrospect.
Our evaluation shows that LogLAB consistently outperforms nine benchmark approaches across three different datasets and maintains an F1-score of more than 0.98 even at large failure time windows.
arXiv Detail & Related papers (2021-11-02T15:16:08Z)
- Robust and Transferable Anomaly Detection in Log Data using Pre-Trained Language Models [59.04636530383049]
Anomalies or failures in large computer systems, such as the cloud, have an impact on a large number of users.
We propose a framework for anomaly detection in log data, a major source of system information for troubleshooting.
arXiv Detail & Related papers (2021-02-23T09:17:05Z)
- Self-Attentive Classification-Based Anomaly Detection in Unstructured Logs [59.04636530383049]
We propose Logsy, a classification-based method to learn log representations.
We show an average improvement of 0.25 in the F1 score, compared to the previous methods.
arXiv Detail & Related papers (2020-08-21T07:26:55Z)
- Self-Supervised Log Parsing [59.04636530383049]
Large-scale software systems generate massive volumes of semi-structured log records.
Existing approaches rely on log-specifics or manual rule extraction.
We propose NuLog that utilizes a self-supervised learning model and formulates the parsing task as masked language modeling.
arXiv Detail & Related papers (2020-03-17T19:25:25Z)
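The NuLog entry above frames log parsing as masked language modeling: mask a token of a log message and train a model to recover it, so that easily recovered tokens behave like constant template words and hard-to-recover ones like variable parameters. The sketch below is a purely illustrative toy version of that general idea; the whitespace tokenizer, the tiny embedding-bag model, and the example messages are assumptions, not NuLog's actual architecture.

```python
# Toy masked-token model over log messages: mask one position at a time and
# train a small model to recover it from the remaining tokens.
import torch
import torch.nn as nn

messages = [
    "connection from 10.0.0.1 closed",
    "connection from 10.0.0.2 closed",
    "disk /dev/sda usage 91 percent",
    "disk /dev/sdb usage 47 percent",
]
tokenized = [m.split() for m in messages]
vocab = {"<MASK>": 0}
for toks in tokenized:
    for t in toks:
        vocab.setdefault(t, len(vocab))

class MaskedLogModel(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)   # bag-of-context encoder
        self.head = nn.Linear(dim, vocab_size)        # predicts the masked token

    def forward(self, contexts):
        return self.head(self.emb(contexts))

def make_batch():
    """For every message and position, mask that position and keep the target."""
    xs, ys = [], []
    for toks in tokenized:
        for i, tok in enumerate(toks):
            ctx = [vocab[t] for j, t in enumerate(toks) if j != i] or [0]
            xs.append(torch.tensor(ctx))
            ys.append(vocab[tok])
    # Padding reuses the <MASK> id, which is good enough for this toy example.
    padded = nn.utils.rnn.pad_sequence(xs, batch_first=True, padding_value=0)
    return padded, torch.tensor(ys)

model = MaskedLogModel(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x, y = make_batch()
for _ in range(200):                      # tiny training loop for the toy data
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()

# Tokens recovered with high confidence tend to behave like template words
# ("connection", "closed"); low-confidence ones ("10.0.0.1", "91") behave like
# variable parameters.
probs = torch.softmax(model(x), dim=-1)
print([round(float(probs[i, y[i]]), 2) for i in range(len(y))])
```

In a real system the toy embedding-bag model would be replaced by a Transformer over a proper log tokenizer, but the masking objective illustrated here stays the same.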