Data Analysis and Performance Evaluation of Simulation Deduction Based on LLMs
- URL: http://arxiv.org/abs/2511.10651v1
- Date: Sat, 01 Nov 2025 01:32:33 GMT
- Title: Data Analysis and Performance Evaluation of Simulation Deduction Based on LLMs
- Authors: Shansi Zhang, Min Li
- Abstract summary: Data analysis and performance evaluation of simulation deduction play a pivotal role in modern warfare. The traditional manual analysis approach is time-consuming and limited by human error. We propose a method that first decomposes the complex task into several sub-tasks and designs effective system prompts.
- Score: 10.405577443485841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data analysis and performance evaluation of simulation deduction play a pivotal role in modern warfare, enabling military personnel to gain invaluable insights into the potential effectiveness of different strategies, tactics, and operational plans. The traditional manual analysis approach is time-consuming and prone to human error. To enhance efficiency and accuracy, large language models (LLMs) with strong analytical and inference capabilities can be employed. However, high-quality analysis reports with well-structured formatting cannot be obtained from a single instruction input to the LLM. To tackle this issue, we propose a method that first decomposes the complex task into several sub-tasks and designs effective system prompts and user prompts for each sub-task. Multi-round interactions with the LLM, incorporating self-check and reflection, are then conducted to enable structured data extraction as well as multi-step analysis and evaluation. Furthermore, custom tools are defined and invoked to generate figures and compute metrics. We also design multiple report templates, each tailored to a specific application and input data type, ensuring adaptability across a variety of scenarios. Extensive evaluation results demonstrate that the reports generated by our method exhibit higher quality and therefore obtain higher scores than those of the baseline method.
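The pipeline the abstract describes (sub-task decomposition, multi-round prompting with self-check and reflection, then chaining sub-task outputs) can be sketched in a few lines of Python. This is a minimal illustration assuming a generic chat-completion API: `call_llm`, the sub-task list, the prompts, and the self-check wording are all hypothetical, not the authors' implementation.

```python
# Minimal sketch of the sub-task pipeline described in the abstract.
# `call_llm` stands in for any chat-completion API; the sub-task list,
# prompts, and self-check wording are illustrative assumptions.

def call_llm(system_prompt: str, messages: list[dict]) -> str:
    """Placeholder for a chat-completion call (OpenAI-style or similar)."""
    raise NotImplementedError

SUB_TASKS = [
    ("extract", "You are a data extractor. Return the simulation log as "
                "structured JSON, using only fields present in the input."),
    ("analyze", "You are a simulation analyst. Produce a step-by-step "
                "analysis of the extracted data."),
    ("evaluate", "You are an evaluator. Score each strategy and justify "
                 "every score."),
]

def run_sub_task(system_prompt: str, payload: str, max_rounds: int = 3) -> str:
    """One sub-task: initial answer, then self-check/reflection rounds."""
    messages = [{"role": "user", "content": payload}]
    answer = call_llm(system_prompt, messages)
    for _ in range(max_rounds - 1):
        messages += [
            {"role": "assistant", "content": answer},
            {"role": "user", "content":
                "Check your answer for missing fields, inconsistent numbers, "
                "or formatting errors. Reply OK if correct; otherwise reply "
                "with a corrected version."},
        ]
        reflection = call_llm(system_prompt, messages)
        if reflection.strip() == "OK":
            break
        answer = reflection  # adopt the corrected version
    return answer

def generate_report(simulation_log: str) -> dict[str, str]:
    """Chain the sub-tasks; each output feeds the next sub-task's input."""
    results, payload = {}, simulation_log
    for name, system_prompt in SUB_TASKS:
        payload = run_sub_task(system_prompt, payload)
        results[name] = payload
    return results
```

In the paper's full pipeline, tool calls for figure generation and metric computation would be interleaved with these rounds, and the final outputs rendered into one of the report templates.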
Related papers
- Can LLMs Clean Up Your Mess? A Survey of Application-Ready Data Preparation with LLMs [66.63911043019294]
Data preparation aims to denoise raw datasets, uncover cross-dataset relationships, and extract valuable insights from them. This paper focuses on the use of LLM techniques to prepare data for diverse downstream tasks. We introduce a task-centric taxonomy that organizes the field into major tasks: data cleaning, standardization, error processing, imputation, data integration, and data enrichment.
arXiv Detail & Related papers (2026-01-22T12:02:45Z) - Teaching LLMs to Think Mathematically: A Critical Study of Decision-Making via Optimization [1.246870021158888]
This paper investigates the capabilities of large language models (LLMs) in formulating and solving decision-making problems using mathematical programming. We first conduct a systematic review and meta-analysis of recent literature to assess how well LLMs understand, structure, and solve optimization problems across domains. Our systematic evidence is complemented by targeted experiments designed to evaluate the performance of state-of-the-art LLMs in automatically generating optimization models for problems in computer networks.
arXiv Detail & Related papers (2025-08-25T14:52:56Z) - Towards Alignment-Centric Paradigm: A Survey of Instruction Tuning in Large Language Models [20.544181414963877]
This survey provides a comprehensive overview of the full pipeline of instruction tuning strategies. We categorize data construction into three major paradigms: expert annotation, distillation from larger models, and self-improvement mechanisms. We discuss promising directions for automated data generation, adaptive optimization, and robust evaluation frameworks.
arXiv Detail & Related papers (2025-08-24T01:51:55Z) - IDA-Bench: Evaluating LLMs on Interactive Guided Data Analysis [60.32962597618861]
IDA-Bench is a novel benchmark evaluating large language models in multi-round interactive scenarios. Agent performance is judged by comparing its final numerical output to the human-derived baseline. Even state-of-the-art coding agents (like Claude-3.7-thinking) succeed on only 50% of the tasks, highlighting limitations not evident in single-turn tests.
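As a rough illustration of the judging IDA-Bench describes, comparing an agent's final numerical output against a human-derived baseline can be a simple tolerance check; the tolerance value below is an assumption for illustration, not the benchmark's actual threshold.

```python
import math

def judge_agent_output(agent_value: float, baseline: float,
                       rel_tol: float = 1e-3) -> bool:
    """Pass iff the agent's final number matches the human baseline
    within a relative tolerance (tolerance chosen here for illustration)."""
    return math.isclose(agent_value, baseline, rel_tol=rel_tol)

print(judge_agent_output(42.1, 42.0))   # False: off by ~0.24%
print(judge_agent_output(42.03, 42.0))  # True: within 0.1%
```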
arXiv Detail & Related papers (2025-05-23T09:37:52Z) - FACT-AUDIT: An Adaptive Multi-Agent Framework for Dynamic Fact-Checking Evaluation of Large Language Models [79.41859481668618]
Large Language Models (LLMs) have significantly advanced fact-checking studies. However, existing automated fact-checking evaluation methods rely on static datasets and classification metrics. We introduce FACT-AUDIT, an agent-driven framework that adaptively and dynamically assesses LLMs' fact-checking capabilities.
arXiv Detail & Related papers (2025-02-25T07:44:22Z) - AIRepr: An Analyst-Inspector Framework for Evaluating Reproducibility of LLMs in Data Science [8.281093505963158]
Large language models (LLMs) are increasingly used to automate data analysis through executable code generation. We present AIRepr, an Analyst-Inspector framework for automatically evaluating and improving the reproducibility of LLM-generated data analyses.
arXiv Detail & Related papers (2025-02-23T01:15:50Z) - Empowering Large Language Models in Wireless Communication: A Novel Dataset and Fine-Tuning Framework [81.29965270493238]
We develop a specialized dataset aimed at enhancing the evaluation and fine-tuning of large language models (LLMs) for wireless communication applications. The dataset includes a diverse set of multi-hop questions, comprising true/false and multiple-choice types and spanning difficulty levels from easy to hard. We introduce a Pointwise V-Information (PVI) based fine-tuning method, providing a detailed theoretical analysis and justification for its use in quantifying the information content of training data.
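Pointwise V-Information, as introduced by Ethayarajh et al. (2022), compares how predictable a label is with and without its input. The sketch below shows only the quantity itself; in practice the two log-probabilities would come from two finetuned models (one conditioned on the input, one on a null input), which the sketch simply takes as arguments. How this paper applies PVI to its wireless dataset is not shown here.

```python
import math

def pvi(logprob_with_input: float, logprob_null_input: float) -> float:
    """PVI(x -> y) = -log2 g'[null](y) + log2 g'[x](y), in bits.

    logprob_with_input : natural-log prob of label y from a model
                         finetuned with the input x.
    logprob_null_input : natural-log prob of y from a model finetuned
                         with a null (empty) input.
    High PVI means x makes y much easier to predict, i.e. the example
    carries usable information for the model family.
    """
    return (logprob_with_input - logprob_null_input) / math.log(2)

# Example: y has probability 0.8 given x, but only 0.1 without it.
print(pvi(math.log(0.8), math.log(0.1)))  # 3.0 bits
```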
arXiv Detail & Related papers (2025-01-16T16:19:53Z) - StructTest: Benchmarking LLMs' Reasoning through Compositional Structured Outputs [78.84060166851805]
StructTest is a novel benchmark that evaluates large language models (LLMs) on their ability to follow compositional instructions and generate structured outputs. Assessments are conducted deterministically using a rule-based evaluator, which can be easily extended to new tasks and datasets. We demonstrate that StructTest remains challenging even for top-performing models like Deepseek-V3/R1 and GPT-4o.
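The deterministic, rule-based evaluation StructTest describes can be illustrated with a tiny checker; the JSON format and required keys below are invented for the example, not the benchmark's actual rule set.

```python
import json

def check_structured_output(raw: str, required_keys: set[str]) -> bool:
    """Deterministic pass/fail: output must parse as JSON and contain
    every required key. No LLM judge involved, so results are repeatable."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(obj, dict) and required_keys <= obj.keys()

print(check_structured_output('{"summary": "ok", "score": 4}',
                              {"summary", "score"}))  # True
print(check_structured_output('Sure! Here is the JSON: ...',
                              {"summary", "score"}))  # False
```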
arXiv Detail & Related papers (2024-12-23T22:08:40Z) - Designing Algorithms Empowered by Language Models: An Analytical Framework, Case Studies, and Insights [86.06371692309972]
This work presents an analytical framework for the design and analysis of algorithms based on large language models (LLMs). The proposed framework is an attempt to mitigate the difficulties that currently accompany such design.
arXiv Detail & Related papers (2024-07-20T07:39:07Z) - Can Large Language Models Serve as Data Analysts? A Multi-Agent Assisted Approach for Qualitative Data Analysis [4.539569292151314]
Large Language Models (LLMs) enable human-bot collaboration in Software Engineering (SE). This study aims to design and develop an LLM-based multi-agent system that synergizes human decision support with AI to automate various qualitative data analysis approaches.
arXiv Detail & Related papers (2024-02-02T13:10:46Z) - Instruction Tuning for Large Language Models: A Survey [52.86322823501338]
We make a systematic review of the literature, including the general methodology of supervised fine-tuning (SFT). We also review the potential pitfalls of SFT and criticism against it, along with efforts pointing out current deficiencies of existing strategies.
arXiv Detail & Related papers (2023-08-21T15:35:16Z)