ReAssert: Deep Learning for Assert Generation
- URL: http://arxiv.org/abs/2011.09784v1
- Date: Thu, 19 Nov 2020 11:55:59 GMT
- Title: ReAssert: Deep Learning for Assert Generation
- Authors: Robert White and Jens Krinke
- Abstract summary: We present RE-ASSERT, an approach for the automated generation of JUnit test asserts.
This is achieved by targeting projects individually, using precise code-to-test traceability for learning.
We also utilise Reformer, a state-of-the-art deep learning model, along with two models from previous work to evaluate ReAssert and an existing approach, known as ATLAS.
- Score: 3.8174671362014956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The automated generation of test code can reduce the time and effort required
to build software while increasing its correctness and robustness. In this
paper, we present RE-ASSERT, an approach for the automated generation of JUnit
test asserts which produces more accurate asserts than previous work with fewer
constraints. This is achieved by targeting projects individually, using precise
code-to-test traceability for learning and by generating assert statements from
the method-under-test directly without the need to write an assert-less test
first. We also utilise Reformer, a state-of-the-art deep learning model, along
with two models from previous work to evaluate ReAssert and an existing
approach, known as ATLAS, using lexical accuracy, uniqueness, and dynamic
analysis. Our evaluation of ReAssert shows up to 44% of generated asserts for a
single project match exactly with the ground truth, increasing to 51% for
generated asserts that compile. We also improve on the ATLAS results through
our use of Reformer with 28% of generated asserts matching exactly with the
ground truth. Reformer also produces the greatest proportion of unique asserts
(71%), giving further evidence that Reformer produces the most useful asserts.
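As a concrete illustration of the setting (not taken from the paper; the class, method, and expected value below are hypothetical), ReAssert receives the method-under-test and generates a JUnit assert for it directly, and under the paper's lexical metric a generated assert counts as an exact match only when it is identical to the ground-truth assert:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical method-under-test (illustrative only; not from the
// paper's evaluation projects).
class StringUtils {
    static String capitalize(String s) {
        if (s == null || s.isEmpty()) {
            return s;
        }
        return Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }
}

public class StringUtilsTest {

    @Test
    public void testCapitalize() {
        // The kind of assert statement the approach aims to generate from the
        // method-under-test alone, without a hand-written assert-less test as
        // input. Under the lexical metric it counts as correct only if it
        // matches the ground-truth assert exactly.
        assertEquals("Reassert", StringUtils.capitalize("reassert"));
    }
}
```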
Related papers
- Automated Proof Generation for Rust Code via Self-Evolution [69.25795662658356]
We introduce SAFE, a novel framework that overcomes the lack of human-written proofs to enable automated proof generation for Rust code.
We demonstrate superior efficiency and precision compared to GPT-4o.
This advancement leads to a significant improvement in performance, achieving a 70.50% accuracy rate in a benchmark crafted by human experts.
arXiv Detail & Related papers (2024-10-21T08:15:45Z)
- Localizing Factual Inconsistencies in Attributable Text Generation [91.981439746404]
We introduce QASemConsistency, a new formalism for localizing factual inconsistencies in attributable text generation.
We first demonstrate the effectiveness of the QASemConsistency methodology for human annotation.
We then implement several methods for automatically detecting localized factual inconsistencies.
arXiv Detail & Related papers (2024-10-09T22:53:48Z)
- Improving LLM-based Unit test generation via Template-based Repair [8.22619177301814]
Unit testing is crucial for detecting bugs in individual program units but consumes time and effort.
Large language models (LLMs) have demonstrated remarkable reasoning and generation capabilities.
In this paper, we propose TestART, a novel unit test generation method.
arXiv Detail & Related papers (2024-08-06T10:52:41Z)
- Chat-like Asserts Prediction with the Support of Large Language Model [34.140962210930624]
We introduce a Chat-like execution-based Asserts Prediction tool for generating meaningful assert statements for Python projects.
The tool utilizes persona, Chain-of-Thought, and one-shot learning techniques in its prompt design, and conducts rounds of communication with an LLM and a Python interpreter.
Our evaluation demonstrates that the tool achieves 64.7% accuracy for single assert statement generation and 62% for overall assert statement generation.
arXiv Detail & Related papers (2024-07-31T08:27:03Z)
- Re-ReST: Reflection-Reinforced Self-Training for Language Agents [101.22559705696885]
Self-training in language agents can generate supervision from the agent itself.
We present Reflection-Reinforced Self-Training (Re-ReST), which uses a reflector to refine low-quality generated samples.
arXiv Detail & Related papers (2024-06-03T16:21:38Z)
- Retrieval is Accurate Generation [99.24267226311157]
We introduce a novel method that selects context-aware phrases from a collection of supporting documents.
Our model achieves the best performance and the lowest latency among several retrieval-augmented baselines.
arXiv Detail & Related papers (2024-02-27T14:16:19Z)
- Revisiting and Improving Retrieval-Augmented Deep Assertion Generation [13.373681113601982]
Unit testing has become an essential activity in the software development process.
Yu et al. proposed an integrated approach (integration for short) to generate assertions for a unit test.
Despite promising results, there is still a knowledge gap as to why or where integration works or does not work.
arXiv Detail & Related papers (2023-09-19T02:39:02Z)
- SAGA: Summarization-Guided Assert Statement Generation [34.51502565985728]
This paper presents a novel summarization-guided approach for automatically generating assert statements.
We leverage a pre-trained language model as the reference architecture and fine-tune it on the task of assert statement generation.
arXiv Detail & Related papers (2023-05-24T07:03:21Z)
- Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition [65.84978547406753]
Test-time Adaptation aims to adapt the model trained on source domains to yield better predictions for test samples.
To the best of our knowledge, Single-Utterance Test-time Adaptation (SUTA) is the first TTA study in the speech area.
arXiv Detail & Related papers (2022-03-27T06:38:39Z)
- PRover: Proof Generation for Interpretable Reasoning over Rules [81.40404921232192]
We propose a transformer-based model that answers binary questions over rule-bases and generates the corresponding proofs.
Our model learns to predict nodes and edges corresponding to proof graphs in an efficient constrained training paradigm.
We conduct experiments on synthetic, hand-authored, and human-paraphrased rule-bases to show promising results for QA and proof generation.
arXiv Detail & Related papers (2020-10-06T15:47:53Z)
- Generating Accurate Assert Statements for Unit Test Cases using Pretrained Transformers [10.846226514357866]
Unit testing represents the foundational basis of the software testing pyramid.
We present an approach to support developers in writing unit test cases by generating accurate and useful assert statements.
arXiv Detail & Related papers (2020-09-11T19:35:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.