JustinANN: Realistic Test Generation for Java Programs Driven by Annotations
- URL: http://arxiv.org/abs/2505.05715v1
- Date: Fri, 09 May 2025 01:31:46 GMT
- Title: JustinANN: Realistic Test Generation for Java Programs Driven by Annotations
- Authors: Baoquan Cui, Rong Qu, Jian Zhang
- Abstract summary: We propose JustinANN, a flexible and scalable tool to generate test cases for Java programs. Our approach makes it easier to generate test data within, on, and outside the boundaries of the requirement domain.
- Score: 8.620106576663622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated test case generation is important. However, automatically generated test inputs do not always make sense, and automatically generated assertions are difficult to validate against the program under test. In this paper, we propose JustinANN, a flexible and scalable tool that generates test cases for Java programs with realistic test inputs and assertions. We have observed that, in practice, Java programs contain a large number of annotations, which can be considered part of the user specification. We design a systematic annotation set with 7 kinds of annotations and 4 combination rules based on them for modifying complex Java objects. Annotations that modify the fields or return variables of methods can be used to generate assertions that represent the true intent of the program, and those that modify input parameters can be used to generate test inputs that match real business requirements. We have conducted experiments to evaluate the approach on open-source Java programs. The results show that the annotations and their combinations designed in this paper are compatible with existing annotations; that our approach more easily generates test data within, on, and outside the boundaries of the requirement domain; and that it also helps to find program defects.
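The abstract does not enumerate the seven annotation kinds or the four combination rules, so the sketch below is only illustrative: it uses Jakarta Bean Validation's @Min as a stand-in to show how an annotation on an input parameter could constrain generated test inputs to the requirement domain (and its boundary), while an annotation on the return value could yield an assertion reflecting the program's intent. All class and method names are hypothetical.

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import jakarta.validation.constraints.Min;
import org.junit.jupiter.api.Test;

// Hypothetical unit under test. Parameter annotations describe the
// business domain of valid inputs; the return-value annotation states a
// property that a generated assertion can check.
class Account {
    private long balance;

    Account(@Min(0) long openingBalance) {
        this.balance = openingBalance;
    }

    // @Min(0) documents that the remaining balance is never negative.
    @Min(0)
    long withdraw(@Min(1) long amount) {
        if (amount > balance) {
            throw new IllegalArgumentException("insufficient funds");
        }
        balance -= amount;
        return balance;
    }
}

// The kind of tests such a generator could emit: inputs inside and on
// the @Min(1) boundary of `amount`, with the oracle derived from the
// @Min(0) return annotation.
class AccountTest {
    @Test
    void withdrawInsideDomain() {
        assertTrue(new Account(100).withdraw(40) >= 0);
    }

    @Test
    void withdrawOnBoundary() {
        assertTrue(new Account(100).withdraw(1) >= 0);
    }
}
```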
Related papers
- Automated Test Generation from Program Documentation Encoded in Code Comments [4.696083734269232]
This paper introduces a novel test generation technique that exploits the code-comment documentation constructively.
We deliver test cases with names and oracles properly contextualized on the target behaviors.
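The snippet above does not show the technique itself; as a rough illustration, assume the generator reads a Javadoc behavior clause and emits a test whose name and oracle are contextualized on it (all identifiers below are hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class SlugTest {

    /**
     * Converts a title to a URL slug: returns the lower-cased title
     * with spaces replaced by hyphens.
     */
    static String toSlug(String title) { // hypothetical method under test
        return title.toLowerCase().replace(' ', '-');
    }

    // A comment-driven generator could derive both the test name and the
    // oracle from the Javadoc clause above.
    @Test
    void toSlugLowerCasesAndReplacesSpacesWithHyphens() {
        assertEquals("hello-world", toSlug("Hello World"));
    }
}
```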
arXiv Detail & Related papers (2025-04-29T20:23:56Z)
- Understanding and Characterizing Mock Assertions in Unit Tests [12.96550571237691]
Despite their significance, mock assertions are rarely considered by automated test generation techniques.
Our analysis of 4,652 test cases from 11 popular Java projects reveals that mock assertions are mostly applied to validating specific kinds of method calls.
We find that mock assertions complement traditional test assertions by ensuring the desired side effects have been produced.
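As a concrete example of the kind of assertion the study characterizes, here is a minimal JUnit 5 + Mockito test whose oracle is a mock assertion (a `verify` call) checking a produced side effect rather than a return value; the collaborator and service are invented for illustration:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;

class AuditingServiceTest {

    interface AuditLog {               // hypothetical collaborator
        void record(String event);
    }

    static class AuditingService {     // hypothetical unit under test
        private final AuditLog log;
        AuditingService(AuditLog log) { this.log = log; }
        void deleteUser(String id) { log.record("deleted " + id); }
    }

    @Test
    void deleteUserRecordsAuditEvent() {
        AuditLog log = mock(AuditLog.class);
        new AuditingService(log).deleteUser("42");
        // The mock assertion: the desired side effect (a call on the
        // collaborator) happened exactly once, with the expected argument.
        verify(log, times(1)).record("deleted 42");
    }
}
```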
arXiv Detail & Related papers (2025-03-25T02:35:05Z)
- LLM-based Unit Test Generation for Dynamically-Typed Programs [16.38145000434927]
TypeTest is a novel framework that enhances type correctness in test generation through a vector-based Retrieval-Augmented Generation system.
In an evaluation on 125 real-world Python modules, TypeTest achieved an average statement coverage of 86.6% and branch coverage of 76.8%, outperforming state-of-the-art tools by 5.4% and 9.3%, respectively.
arXiv Detail & Related papers (2025-03-18T08:07:17Z)
- Learning to Generate Unit Tests for Automated Debugging [52.63217175637201]
Unit tests (UTs) play an instrumental role in assessing code correctness as well as providing feedback to large language models (LLMs).
We propose UTGen, which teaches LLMs to generate unit test inputs that reveal errors along with their correct expected outputs.
We show that UTGen outperforms other LLM-based baselines by 7.59% based on a metric measuring the presence of both error-revealing UT inputs and correct UT outputs.
arXiv Detail & Related papers (2025-02-03T18:51:43Z)
- Commit0: Library Generation from Scratch [77.38414688148006]
Commit0 is a benchmark that challenges AI agents to write libraries from scratch.
Agents are provided with a specification document outlining the library's API as well as a suite of interactive unit tests.
Commit0 also offers an interactive environment where models receive static analysis and execution feedback on the code they generate.
arXiv Detail & Related papers (2024-12-02T18:11:30Z)
- Generating executable oracles to check conformance of client code to requirements of JDK Javadocs using LLMs [21.06722050714324]
This paper focuses on automating test oracles for clients of widely used Java libraries, e.g., the java.lang and java.util packages.
We use large language models as an enabling technology to embody our insight into a framework for test oracle automation.
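The framework itself is not shown in the abstract; the sketch below only illustrates what an executable oracle derived from a `java.util` Javadoc clause might look like, using the documented index range of `List.get` (the wrapper is hypothetical):

```java
import java.util.List;

// Sketch of an executable oracle derived from a Javadoc clause. The
// Javadoc of java.util.List.get states that it throws
// IndexOutOfBoundsException if the index is out of range
// (index < 0 || index >= size()). A generated oracle can check a
// client's argument against that documented range before the call.
public class JavadocOracleSketch {

    static <T> T checkedGet(List<T> list, int index) {
        // Oracle encoding the documented precondition of List.get
        // (enable assertions with the JVM flag -ea).
        assert index >= 0 && index < list.size()
                : "client violates List.get contract: index=" + index;
        return list.get(index);
    }

    public static void main(String[] args) {
        System.out.println(checkedGet(List.of("a", "b"), 1)); // prints "b"
    }
}
```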
arXiv Detail & Related papers (2024-11-04T04:24:25Z)
- Automatic Generation of Test Cases based on Bug Reports: a Feasibility Study with Large Language Models [4.318319522015101]
Existing approaches produce test cases that can either be qualified as simple (e.g., unit tests) or require precise specifications.
Most testing procedures still rely on test cases written by humans to form test suites.
We investigate the feasibility of performing this generation by leveraging large language models (LLMs) and using bug reports as inputs.
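The abstract does not include a generated test; as a hedged sketch of the target artifact, here is how a (hypothetical) bug report's steps-to-reproduce and expected behavior could be turned into a reproducing JUnit test:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class Issue1234Test {

    // Hypothetical code under test containing the reported defect:
    // scientific notation is rejected.
    static double parsePrice(String s) {
        if (!s.matches("[0-9.]+")) {
            throw new NumberFormatException("unsupported format: " + s);
        }
        return Double.parseDouble(s);
    }

    // From a (hypothetical) bug report:
    //   Steps to reproduce: call parsePrice("1e3")
    //   Expected: 1000.0   Actual: NumberFormatException
    // The steps become the test body and the expected behavior becomes
    // the oracle; the test fails until the defect is fixed.
    @Test
    void parsePriceAcceptsScientificNotation() {
        assertEquals(1000.0, parsePrice("1e3"));
    }
}
```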
arXiv Detail & Related papers (2023-10-10T05:30:12Z)
- Generative Judge for Evaluating Alignment [84.09815387884753]
We propose a generative judge with 13B parameters, Auto-J, designed to address these challenges.
Our model is trained on user queries and LLM-generated responses under massive real-world scenarios.
Experimentally, Auto-J outperforms a series of strong competitors, including both open-source and closed-source models.
arXiv Detail & Related papers (2023-10-09T07:27:15Z)
- Teaching Large Language Models to Self-Debug [62.424077000154945]
Large language models (LLMs) have achieved impressive performance on code generation.
We propose Self-Debugging, which teaches a large language model to debug its predicted program via few-shot demonstrations.
arXiv Detail & Related papers (2023-04-11T10:43:43Z)
- TypeT5: Seq2seq Type Inference using Static Analysis [51.153089609654174]
We present a new type inference method that treats type prediction as a code infilling task.
Our method uses static analysis to construct dynamic contexts for each code element whose type signature is to be predicted by the model.
We also propose an iterative decoding scheme that incorporates previous type predictions in the model's input context.
arXiv Detail & Related papers (2023-03-16T23:48:00Z)
- Interactive Code Generation via Test-Driven User-Intent Formalization [60.90035204567797]
Large language models (LLMs) produce code from informal natural language (NL) intent.
It is hard to define a notion of correctness since natural language can be ambiguous and lacks a formal semantics.
We describe a language-agnostic abstract algorithm and a concrete implementation, TiCoder.
arXiv Detail & Related papers (2022-08-11T17:41:08Z)