Small is Beautiful: A Practical and Efficient Log Parsing Framework
- URL: http://arxiv.org/abs/2601.22590v1
- Date: Fri, 30 Jan 2026 05:37:08 GMT
- Title: Small is Beautiful: A Practical and Efficient Log Parsing Framework
- Authors: Minxing Wang, Yintong Huo,
- Abstract summary: Unsupervised log parsing is a fundamental step in log analysis. Large Language Models (LLMs) exhibit superior generalizability over traditional syntax-based methods, but their effectiveness is heavily contingent on model scale, leading to significant performance collapse when employing smaller, more resource-efficient LLMs. We propose EFParser, an unsupervised LLM-based log parser designed to enhance the capabilities of smaller models.
- Score: 3.759117833649677
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Log parsing is a fundamental step in log analysis, partitioning raw logs into constant templates and dynamic variables. While recent semantic-based parsers leveraging Large Language Models (LLMs) exhibit superior generalizability over traditional syntax-based methods, their effectiveness is heavily contingent on model scale. This dependency leads to significant performance collapse when employing smaller, more resource-efficient LLMs. Such degradation creates a major barrier to real-world adoption, where data privacy requirements and computational constraints necessitate the use of succinct models. To bridge this gap, we propose EFParser, an unsupervised LLM-based log parser designed to enhance the capabilities of smaller models through systematic architectural innovation. EFParser introduces a dual-cache system with an adaptive updating mechanism that distinguishes between novel patterns and variations of existing templates. This allows the parser to merge redundant templates and rectify prior errors, maintaining cache consistency. Furthermore, a dedicated correction module acts as a gatekeeper, validating and refining every LLM-generated template before caching to prevent error injection. Empirical evaluations on public large-scale datasets demonstrate that EFParser outperforms state-of-the-art baselines by an average of 12.5% across all metrics when running on smaller LLMs, even surpassing some baselines utilizing large-scale models. Despite its additional validation steps, EFParser maintains high computational efficiency, offering a robust and practical solution for real-world log analysis deployment.
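The abstract outlines a concrete control flow: consult a template cache first, query the (small) LLM only on a cache miss, and pass every LLM-generated template through a correction module before it is allowed into the cache. The sketch below illustrates that flow under stated assumptions: it collapses the dual-cache and template-merging logic into a single list, and every name (parse_log, llm_extract_template, the <*> placeholder convention) is illustrative rather than taken from the paper.

```python
# Minimal sketch (not the paper's implementation) of the cache-then-LLM parse
# loop with a correction gatekeeper, as described in the EFParser abstract.
import re
from typing import Optional

def template_to_regex(template: str) -> re.Pattern:
    """Turn a template such as 'Connected to <*>' into a matching regex."""
    parts = [re.escape(p) for p in template.split("<*>")]
    return re.compile("^" + "(.+?)".join(parts) + "$")

def matches(template: str, log_line: str) -> bool:
    return template_to_regex(template).match(log_line) is not None

def llm_extract_template(log_line: str) -> str:
    """Placeholder for the small-LLM call; here we crudely mask digit runs."""
    return re.sub(r"\d+", "<*>", log_line)

def correct_template(candidate: str, log_line: str) -> Optional[str]:
    """Gatekeeper: accept an LLM-generated template only if it actually covers
    the log line it was derived from (a real module would also refine it)."""
    return candidate if matches(candidate, log_line) else None

def parse_log(log_line: str, cache: list) -> str:
    # 1. Cache lookup: reuse an existing template when one matches.
    for template in cache:
        if matches(template, log_line):
            return template
    # 2. Cache miss: ask the LLM for a new template.
    candidate = llm_extract_template(log_line)
    # 3. Correction module: only validated templates may enter the cache.
    validated = correct_template(candidate, log_line)
    if validated is not None:
        cache.append(validated)
        return validated
    return log_line  # fall back to treating the line as its own template

if __name__ == "__main__":
    cache = []
    for line in ["Connected to 10.0.0.1", "Connected to 10.0.0.2"]:
        print(parse_log(line, cache))  # second line should hit the cache
    print("cache:", cache)
```

The gatekeeper check here is deliberately strict: a candidate template is cached only if it actually covers the log line that produced it, which is the property the abstract describes the correction module as protecting against error injection.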
Related papers
- Stacked from One: Multi-Scale Self-Injection for Context Window Extension [69.24689919827817]
The authors propose a novel framework based on multi-grained context compression and query-aware information acquisition, which achieves performance superior or comparable to strong baselines.
arXiv Detail & Related papers (2026-03-05T03:16:16Z)
- DiffuRank: Effective Document Reranking with Diffusion Language Models [71.16830004674513]
We propose DiffuRank, a reranking framework built upon diffusion language models (dLLMs). dLLMs support more flexible decoding and generation processes that are not constrained to a left-to-right order. We show dLLMs achieve performance comparable to, and in some cases exceeding, that of autoregressive LLMs with similar model sizes.
arXiv Detail & Related papers (2026-02-13T02:18:14Z)
- Model-Dowser: Data-Free Importance Probing to Mitigate Catastrophic Forgetting in Multimodal Large Language Models [2.83595986479415]
Fine-tuning Multimodal Large Language Models (MLLMs) on task-specific data is an effective way to improve performance on downstream applications, but it risks catastrophic forgetting. Existing methods that aim to mitigate this issue either become ineffective when fine-tuning deeper layers of the language decoder or scale poorly with increasing model size. We propose Model-Dowser, a novel sparse fine-tuning approach for MLLMs.
arXiv Detail & Related papers (2026-02-04T12:56:27Z)
- Reversing Large Language Models for Efficient Training and Fine-Tuning [24.232966507637673]
Large Language Models (LLMs) are known for their expensive and time-consuming training. We introduce memory-efficient, reversible architectures for LLMs inspired by symmetric and symplectic differential equations. Our results show comparable or improved performance on several datasets and benchmarks.
arXiv Detail & Related papers (2025-11-27T19:32:15Z)
- LLM-MemCluster: Empowering Large Language Models with Dynamic Memory for Text Clustering [52.41664454251679]
Large Language Models (LLMs) are reshaping unsupervised learning by offering an unprecedented ability to perform text clustering. Existing methods often rely on complex pipelines with external modules, sacrificing a truly end-to-end approach. We introduce LLM-MemCluster, a novel framework that reconceptualizes clustering as a fully LLM-native task.
arXiv Detail & Related papers (2025-11-19T13:22:08Z)
- Enhancing Transformer-Based Rerankers with Synthetic Data and LLM-Based Supervision [0.13999481573773073]
Large Language Models (LLMs) excel at reranking due to their deep semantic understanding and reasoning. Fine-tuning smaller, task-specific models is a more efficient alternative but typically depends on scarce, manually labeled data. We propose a novel pipeline that eliminates the need for human-labeled query-document pairs.
arXiv Detail & Related papers (2025-09-23T09:47:27Z)
- ScaleDoc: Scaling LLM-based Predicates over Large Document Collections [17.985997510845873]
Modern workloads increasingly involve unstructured documents, which demands semantic understanding. ScaleDoc is a novel system that addresses this by decoupling predicate execution into an offline representation phase and an optimized online filtering phase. ScaleDoc achieves over a 2x end-to-end speedup and reduces expensive LLM invocations by up to 85%, making large-scale semantic analysis practical and efficient.
arXiv Detail & Related papers (2025-09-16T03:18:06Z)
- LatentLLM: Attention-Aware Joint Tensor Compression [50.33925662486034]
Large language models (LLMs) and large multi-modal models (LMMs) require a massive amount of computational and memory resources. We propose a new framework to convert such LLMs/LMMs into a reduced-dimension latent structure.
arXiv Detail & Related papers (2025-05-23T22:39:54Z)
- The Unreasonable Effectiveness of Model Merging for Cross-Lingual Transfer in LLMs [45.08958917457921]
Large language models (LLMs) still struggle across tasks outside of high-resource languages. In this work, we investigate cross-lingual transfer to lower-resource languages where task-specific post-training data is scarce.
arXiv Detail & Related papers (2025-05-23T20:28:31Z)
- Beyond Quacking: Deep Integration of Language Models and RAG into DuckDB [44.057784044659726]
Large language models (LLMs) have made it easier to prototype such retrieval and reasoning data pipelines. This often involves orchestrating data systems, managing data movement, and handling low-level details. We introduce FlockMTL, an extension that deeply integrates LLM capabilities and retrieval-augmented generation into DuckDB.
arXiv Detail & Related papers (2025-04-01T19:48:17Z)
- Matchmaker: Self-Improving Large Language Model Programs for Schema Matching [60.23571456538149]
We propose a compositional language model program for schema matching, comprised of candidate generation, refinement and confidence scoring.
Matchmaker self-improves in a zero-shot manner without the need for labeled demonstrations.
Empirically, we demonstrate on real-world medical schema matching benchmarks that Matchmaker outperforms previous ML-based approaches.
arXiv Detail & Related papers (2024-10-31T16:34:03Z)
- Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models [79.41139393080736]
Large language models (LLMs) have rapidly advanced and demonstrated impressive capabilities.
In-Context Learning (ICL) and Parameter-Efficient Fine-Tuning (PEFT) are currently two mainstream methods for augmenting LLMs to downstream tasks.
We propose Reference Trustable Decoding (RTD), a paradigm that allows models to quickly adapt to new tasks without fine-tuning.
arXiv Detail & Related papers (2024-09-30T10:48:20Z)
- LILAC: Log Parsing using LLMs with Adaptive Parsing Cache [38.04960745458878]
We propose LILAC, the first practical log parsing framework using large language models (LLMs) with adaptive parsing cache.
However, LLMs' lack of specialized log parsing capabilities currently hinders their parsing accuracy (a minimal sketch of the parsing-cache idea follows this entry).
We show LILAC outperforms state-of-the-art methods by 69.5% in terms of the average F1 score of template accuracy.
arXiv Detail & Related papers (2023-10-03T04:46:59Z)
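LILAC's adaptive parsing cache, like the cache in EFParser above, is consulted before any LLM call so that recurring log patterns never reach the model. The sketch below shows an assumed, simplified lookup structure (templates grouped by token count); it is an illustration of the general idea, not LILAC's actual cache design.

```python
# Illustrative adaptive-parsing-cache lookup (assumed structure): templates are
# grouped by token count so a new log line is only compared against templates
# of the same length, and the LLM is invoked only when no cached template matches.
from collections import defaultdict

class ParsingCache:
    def __init__(self):
        # token count -> list of templates (token lists; "<*>" marks a variable)
        self._by_length = defaultdict(list)
        self.hits = 0
        self.misses = 0

    @staticmethod
    def _covers(template_tokens, log_tokens):
        return all(t == "<*>" or t == w for t, w in zip(template_tokens, log_tokens))

    def lookup(self, log_line: str):
        tokens = log_line.split()
        for template in self._by_length[len(tokens)]:
            if self._covers(template, tokens):
                self.hits += 1
                return " ".join(template)
        self.misses += 1
        return None  # caller falls back to an LLM query, then calls insert()

    def insert(self, template: str):
        self._by_length[len(template.split())].append(template.split())

cache = ParsingCache()
cache.insert("Connected to <*> port <*>")
print(cache.lookup("Connected to 10.0.0.1 port 22"))      # hit -> template string
print(cache.lookup("Disk quota exceeded for user root"))   # miss -> None
print("hits:", cache.hits, "misses:", cache.misses)
```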