Towards a standardized methodology and dataset for evaluating LLM-based digital forensic timeline analysis
- URL: http://arxiv.org/abs/2505.03100v1
- Date: Tue, 06 May 2025 01:22:30 GMT
- Title: Towards a standardized methodology and dataset for evaluating LLM-based digital forensic timeline analysis
- Authors: Hudan Studiawan, Frank Breitinger, Mark Scanlon
- Abstract summary: Large language models (LLMs) have seen widespread adoption in many domains including digital forensics. This paper proposes a standardized methodology to quantitatively evaluate the application of LLMs for digital forensic tasks.
- Score: 0.1433758865948252
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large language models (LLMs) have seen widespread adoption in many domains including digital forensics. While prior research has largely centered on case studies and examples demonstrating how LLMs can assist forensic investigations, deeper explorations remain limited, i.e., a standardized approach for precise performance evaluations is lacking. Inspired by the NIST Computer Forensic Tool Testing Program, this paper proposes a standardized methodology to quantitatively evaluate the application of LLMs for digital forensic tasks, specifically in timeline analysis. The paper describes the components of the methodology, including the dataset, timeline generation, and ground truth development. Additionally, the paper recommends using BLEU and ROUGE metrics for the quantitative evaluation of LLMs through case studies or tasks involving timeline analysis. Experimental results using ChatGPT demonstrate that the proposed methodology can effectively evaluate LLM-based forensic timeline analysis. Finally, we discuss the limitations of applying LLMs to forensic timeline analysis.
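The recommended BLEU and ROUGE metrics can be computed with off-the-shelf libraries. Below is a minimal sketch, not the paper's actual pipeline, assuming the nltk and rouge-score packages; the timeline event strings are hypothetical placeholders standing in for a ground-truth description and an LLM-generated one.

```python
# Minimal sketch: score a hypothetical LLM-generated timeline description
# against a hypothetical ground-truth description using BLEU and ROUGE.
# Requires: pip install nltk rouge-score
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

# Hypothetical ground truth and LLM output for one timeline event
reference = "User account alice executed mimikatz.exe at 2023-04-01 10:32:05 UTC"
candidate = "At 10:32:05 UTC on 2023-04-01, the alice account ran mimikatz.exe"

# BLEU: n-gram precision of the candidate against the reference
bleu = sentence_bleu(
    [reference.split()], candidate.split(),
    smoothing_function=SmoothingFunction().method1,
)

# ROUGE-1 and ROUGE-L: unigram and longest-common-subsequence overlap
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)

print(f"BLEU: {bleu:.3f}")
print(f"ROUGE-1 F1: {rouge['rouge1'].fmeasure:.3f}")
print(f"ROUGE-L F1: {rouge['rougeL'].fmeasure:.3f}")
```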
Related papers
- How do Large Language Models Understand Relevance? A Mechanistic Interpretability Perspective [64.00022624183781]
Large language models (LLMs) can assess relevance and support information retrieval (IR) tasks. We investigate how different LLM modules contribute to relevance judgment through the lens of mechanistic interpretability.
arXiv Detail & Related papers (2025-04-10T16:14:55Z)
- Digital Forensics in the Age of Large Language Models [3.743376683377091]
Large Language Models (LLMs) have emerged as powerful tools capable of automating and enhancing various digital forensic tasks. This paper aims to provide an accessible and systematic overview of how LLMs have revolutionized the digital forensics approach.
arXiv Detail & Related papers (2025-04-03T18:32:15Z)
- Can LLMs Help Uncover Insights about LLMs? A Large-Scale, Evolving Literature Analysis of Frontier LLMs [32.48924329288906]
This study presents a semi-automated approach for literature analysis that accelerates data extraction using LLMs. It automatically identifies relevant arXiv papers, extracts experimental results and related attributes, and organizes them into a structured dataset, LLMEvalDB. We then conduct an automated literature analysis of frontier LLMs, reducing the effort of paper surveying and data extraction by more than 93% compared to manual approaches.
arXiv Detail & Related papers (2025-02-26T03:56:34Z)
- Position: Empowering Time Series Reasoning with Multimodal LLMs [49.73647759532127]
We argue that multimodal large language models (MLLMs) can enable more powerful and flexible reasoning for time series analysis. We call on researchers and practitioners to leverage this potential by developing strategies that prioritize trust, interpretability, and robust reasoning in MLLMs.
arXiv Detail & Related papers (2025-02-03T16:10:48Z)
- Comparative Analysis of Open-Source Language Models in Summarizing Medical Text Data [5.443548415516227]
Large Language Models (LLMs) have demonstrated superior performance in question answering and summarization tasks on unstructured text data.
We propose an evaluation approach to analyze the performance of open-source LLMs for medical summarization tasks.
arXiv Detail & Related papers (2024-05-25T16:16:22Z)
- DnA-Eval: Enhancing Large Language Model Evaluation through Decomposition and Aggregation [75.81096662788254]
Large Language Models (LLMs) are scalable and economical evaluators. The question of how reliable these evaluators are has emerged as a crucial research question. We propose Decompose and Aggregate, which breaks down the evaluation process into different stages based on pedagogical practices.
arXiv Detail & Related papers (2024-05-24T08:12:30Z)
- The Emergence of Large Language Models in Static Analysis: A First Look through Micro-Benchmarks [3.848607479075651]
We investigate the role that current Large Language Models (LLMs) can play in improving callgraph analysis and type inference for Python programs.
Our study reveals that LLMs show promising results in type inference, demonstrating higher accuracy than traditional methods, yet they exhibit limitations in callgraph analysis.
arXiv Detail & Related papers (2024-02-27T16:53:53Z)
- LLM Inference Unveiled: Survey and Roofline Model Insights [62.92811060490876]
Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges.
Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also by introducing a framework based on the roofline model.
This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems (a minimal roofline illustration appears after this list).
arXiv Detail & Related papers (2024-02-26T07:33:05Z)
- Expanding Horizons in HCI Research Through LLM-Driven Qualitative Analysis [3.5253513747455303]
We introduce a new approach to qualitative analysis in HCI using Large Language Models (LLMs).
Our findings indicate that LLMs not only match the efficacy of traditional analysis methods but also offer unique insights.
arXiv Detail & Related papers (2024-01-07T12:39:31Z)
- T-Eval: Evaluating the Tool Utilization Capability of Large Language Models Step by Step [69.64348626180623]
Large language models (LLMs) have achieved remarkable performance on various NLP tasks.
How to evaluate and analyze the tool-utilization capability of LLMs is still under-explored.
We introduce T-Eval to evaluate the tool utilization capability step by step.
arXiv Detail & Related papers (2023-12-21T17:02:06Z)
- Sentiment Analysis in the Era of Large Language Models: A Reality Check [69.97942065617664]
This paper investigates the capabilities of large language models (LLMs) in performing various sentiment analysis tasks.
We evaluate performance across 13 tasks on 26 datasets and compare the results against small language models (SLMs) trained on domain-specific datasets.
arXiv Detail & Related papers (2023-05-24T10:45:25Z)
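For context on the roofline framework referenced in the "LLM Inference Unveiled" entry above, the following is a minimal, illustrative sketch of how a roofline check classifies a workload as memory- or compute-bound. It is not the survey's actual framework, and all hardware and problem-size numbers are hypothetical.

```python
# Illustrative roofline check (hypothetical numbers, not the survey's framework):
# classify a kernel as memory-bound or compute-bound on given hardware.

def roofline_bound(flops: float, bytes_moved: float,
                   peak_flops: float, peak_bandwidth: float) -> str:
    """Compare a kernel's arithmetic intensity (FLOPs per byte) with the
    hardware balance point (peak FLOP/s divided by peak bytes/s)."""
    arithmetic_intensity = flops / bytes_moved
    balance_point = peak_flops / peak_bandwidth
    attainable = min(peak_flops, arithmetic_intensity * peak_bandwidth)
    bound = "memory-bound" if arithmetic_intensity < balance_point else "compute-bound"
    return (f"{bound} (intensity {arithmetic_intensity:.1f} FLOP/B, "
            f"attainable {attainable / 1e12:.1f} TFLOP/s)")

# Example: matrix multiply C = A @ B with M = N = K = 4096 in FP16 on a
# hypothetical GPU with 300 TFLOP/s peak compute and 2 TB/s memory bandwidth.
M = N = K = 4096
flops = 2 * M * N * K                        # multiply-adds
bytes_moved = 2 * (M * K + K * N + M * N)    # FP16 = 2 bytes per element
print(roofline_bound(flops, bytes_moved, peak_flops=300e12, peak_bandwidth=2e12))
```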