Assertion Messages with Large Language Models (LLMs) for Code
- URL: http://arxiv.org/abs/2509.19673v1
- Date: Wed, 24 Sep 2025 01:13:08 GMT
- Title: Assertion Messages with Large Language Models (LLMs) for Code
- Authors: Ahmed Aljohani, Anamul Haque Mollah, Hyunsook Do
- Abstract summary: We introduce an evaluation of four state-of-the-art Fill-in-the-Middle (FIM) LLMs on a dataset of 216 Java test methods containing developer-written assertion messages. We find that Codestral-22B achieves the highest quality score of 2.76 out of 5 using a human-like evaluation approach, compared to 3.24 for manually written messages.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Assertion messages significantly enhance unit tests by clearly explaining the reasons behind test failures, yet they are frequently omitted by developers and automated test-generation tools. Despite recent advancements, Large Language Models (LLMs) have not been systematically evaluated for their ability to generate informative assertion messages. In this paper, we introduce an evaluation of four state-of-the-art Fill-in-the-Middle (FIM) LLMs - Qwen2.5-Coder-32B, Codestral-22B, CodeLlama-13B, and StarCoder - on a dataset of 216 Java test methods containing developer-written assertion messages. We find that Codestral-22B achieves the highest quality score of 2.76 out of 5 using a human-like evaluation approach, compared to 3.24 for manually written messages. Our ablation study shows that including descriptive test comments further improves Codestral's performance to 2.97, highlighting the critical role of context in generating clear assertion messages. Structural analysis demonstrates that all models frequently replicate developers' preferred linguistic patterns. We discuss the limitations of the selected models and conventional text evaluation metrics in capturing diverse assertion message structures. Our benchmark, evaluation results, and discussions provide an essential foundation for advancing automated, context-aware generation of assertion messages in test code. A replication package is available at https://doi.org/10.5281/zenodo.15293133
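To make the paper's subject concrete: a developer-written assertion message turns a bare value mismatch into an explanation of the expected behavior. The sketch below is illustrative only (not drawn from the paper's 216-method dataset); it mimics JUnit's message-bearing `assertEquals(message, expected, actual)` in plain Java so it runs without a test framework, and the method under test and the message text are hypothetical.

```java
// Minimal sketch of a message-bearing assertion, in the spirit of
// JUnit's assertEquals. All names here are illustrative.
public class AssertionMessageDemo {

    // Stand-in for a JUnit-style assertion that carries a developer message.
    public static void assertEquals(String message, int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError(
                message + " (expected " + expected + " but was " + actual + ")");
        }
    }

    // A deliberately buggy method under test: it drops the carry digit.
    public static int add(int a, int b) {
        return (a + b) % 10;
    }

    public static void main(String[] args) {
        try {
            // The message states *why* the expectation holds, so the failure
            // report explains the test's intent, not just the raw values.
            assertEquals("add should carry into the tens place", 12, add(7, 5));
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
            // prints: add should carry into the tens place (expected 12 but was 2)
        }
    }
}
```

Without the message, the failure would report only "expected 12 but was 2", leaving the reader to reconstruct the intent; this is the omission the paper studies.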
Related papers
- Assertion-Aware Test Code Summarization with Large Language Models [0.0]
Unit tests often lack concise summaries that convey test intent. This paper presents a new benchmark of 91 real-world Java test cases paired with developer-written summaries.
arXiv Detail & Related papers (2025-11-09T04:58:32Z)
- Learning Robust Negation Text Representations [60.23044940174016]
We propose a strategy to improve the negation handling of text encoders using diverse patterns of negation and hedging. We observe large improvements in negation understanding capabilities while maintaining competitive performance on general benchmarks. Our method can also be adapted to LLMs, leading to improved performance on negation benchmarks.
arXiv Detail & Related papers (2025-07-17T04:48:54Z)
- Understanding and Characterizing Mock Assertions in Unit Tests [12.96550571237691]
Despite their significance, mock assertions are rarely considered by automated test generation techniques. Our analysis of 4,652 test cases from 11 popular Java projects reveals that mock assertions are mostly applied to validating specific kinds of method calls. We find that mock assertions complement traditional test assertions by ensuring that the desired side effects have been produced.
arXiv Detail & Related papers (2025-03-25T02:35:05Z)
- CLOVER: A Test Case Generation Benchmark with Coverage, Long-Context, and Verification [71.34070740261072]
This paper presents a benchmark, CLOVER, to evaluate models' capabilities in generating and completing test cases. The benchmark is containerized for code execution across tasks, and we will release the code, data, and construction methodologies.
arXiv Detail & Related papers (2025-02-12T21:42:56Z)
- ASSERTIFY: Utilizing Large Language Models to Generate Assertions for Production Code [0.7973214627863593]
Production assertions are statements embedded in the code to help developers validate their assumptions about the code.
Current assertion generation techniques, such as static analysis and deep learning, fall short when it comes to generating production assertions.
This preprint addresses the gap by introducing Assertify, an automated end-to-end tool that leverages Large Language Models (LLMs) and prompt engineering to generate production assertions.
arXiv Detail & Related papers (2024-11-25T20:52:28Z)
- Localizing Factual Inconsistencies in Attributable Text Generation [74.11403803488643]
We introduce QASemConsistency, a new formalism for localizing factual inconsistencies in attributable text generation. We show that QASemConsistency yields factual consistency scores that correlate well with human judgments.
arXiv Detail & Related papers (2024-10-09T22:53:48Z)
- On the Rationale and Use of Assertion Messages in Test Code: Insights from Software Practitioners [10.264620067797798]
Unit testing is an important practice that helps ensure the quality of a software system by validating its behavior through a series of test cases.
Core to these test cases are assertion statements, which enable software practitioners to validate the correctness of the system's behavior.
To aid with understanding and troubleshooting test case failures, practitioners can include a message (i.e., assertion message) within the assertion statement.
arXiv Detail & Related papers (2024-08-03T11:13:36Z)
- Chat-like Asserts Prediction with the Support of Large Language Model [34.140962210930624]
We introduce Chat-like execution-based Asserts Prediction (tool) for generating meaningful assert statements for Python projects.
The tool utilizes persona, Chain-of-Thought, and one-shot learning techniques in its prompt design, and conducts multiple rounds of communication with the LLM and a Python interpreter.
Our evaluation demonstrates that the tool achieves 64.7% accuracy for single assert statement generation and 62% for overall assert statement generation.
arXiv Detail & Related papers (2024-07-31T08:27:03Z)
- Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph [83.90988015005934]
Uncertainty quantification is a key element of machine learning applications. We introduce a novel benchmark that implements a collection of state-of-the-art UQ baselines. We conduct a large-scale empirical investigation of UQ and normalization techniques across eleven tasks, identifying the most effective approaches.
arXiv Detail & Related papers (2024-06-21T20:06:31Z)
- SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal [64.9938658716425]
SORRY-Bench is a proposed benchmark for evaluating large language models' (LLMs) ability to recognize and reject unsafe user requests. First, existing methods often use a coarse-grained taxonomy of unsafe topics and over-represent some fine-grained topics. Second, the linguistic characteristics and formatting of prompts, such as different languages and dialects, are often overlooked and only implicitly considered in many evaluations.
arXiv Detail & Related papers (2024-06-20T17:56:07Z)
- Evaluating Generative Language Models in Information Extraction as Subjective Question Correction [49.729908337372436]
Inspired by the principles of subjective question correction, we propose a new evaluation method, SQC-Score.
Results on three information extraction tasks show that SQC-Score is more preferred by human annotators than the baseline metrics.
arXiv Detail & Related papers (2024-04-04T15:36:53Z)
- SAGA: Summarization-Guided Assert Statement Generation [34.51502565985728]
This paper presents a novel summarization-guided approach for automatically generating assert statements.
We leverage a pre-trained language model as the reference architecture and fine-tune it on the task of assert statement generation.
arXiv Detail & Related papers (2023-05-24T07:03:21Z)
- Teaching Large Language Models to Self-Debug [62.424077000154945]
Large language models (LLMs) have achieved impressive performance on code generation.
We propose Self-Debugging, which teaches a large language model to debug its predicted program via few-shot demonstrations.
arXiv Detail & Related papers (2023-04-11T10:43:43Z)
- Generating Accurate Assert Statements for Unit Test Cases using Pretrained Transformers [10.846226514357866]
Unit testing represents the foundational basis of the software testing pyramid.
We present an approach to support developers in writing unit test cases by generating accurate and useful assert statements.
arXiv Detail & Related papers (2020-09-11T19:35:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.