Test code generation at Ericsson using Program Analysis Augmented Fine Tuned LLMs
- URL: http://arxiv.org/abs/2506.11006v1
- Date: Wed, 23 Apr 2025 18:18:18 GMT
- Title: Test code generation at Ericsson using Program Analysis Augmented Fine Tuned LLMs
- Authors: Sai Krishna, Balvinder Singh, Sujoy Roychowdhury, Giriprasad Sridhara, Sourav Mazumdar, Magnus Sandelin, Dimitris Rentas, Maciej Nalepa, Karol Sawicki, Jakub Gajda
- Abstract summary: We describe test code generation using Large Language Models (LLMs) at Ericsson. Our input is a test step in natural language (English) and our output is code (Java) which accomplishes the test step.
- Score: 1.4798334915529776
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe test code generation using Large Language Models (LLMs) at Ericsson. Our input is a test step in natural language (English) and our output is code (Java) which accomplishes the test step. We describe how straightforward prompting does not suffice and results in the LLM assuming functions and signatures that are not present in the code repository. We then show how we alleviate the problem through a combination of Retrieval Augmented Generation (RAG) and prompt engineering that expands the simple prompt with additional contextual information obtained via static program analysis. We then describe further improvements obtained by fine-tuning the underlying LLM. The fine-tuning is based on a custom-designed prompt template which contains the pre-dependent classes and their public methods, as well as two exemplar outputs obtained from RAG. Our results establish that our fine-tuned models improve the correspondence, or conformity, with the original developer-written test code, as measured by the traditional metric of F1-score computed over the methods used in the generated code. Fine-tuning an 8x7b Mixture of Experts (MoE) model leads to an average improvement of 8% over the base model and is comparable to the scores of a much larger 8x22b MoE model.
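The abstract names two concrete mechanisms: a prompt template assembled from static-analysis output plus two RAG exemplars, and a method-level F1 metric. The Python sketch below illustrates both under stated assumptions; the template wording, helper names, and the regex-based method extraction are hypothetical stand-ins, not Ericsson's internal pipeline.

```python
import re

# Sketch of the custom prompt template described in the abstract: the test
# step, the pre-dependent classes with their public methods (from static
# program analysis), and two exemplar outputs retrieved via RAG.
PROMPT_TEMPLATE = """You are writing Java test code.

Test step (English):
{test_step}

Pre-dependent classes and their public methods (static analysis):
{class_context}

Two similar test steps and their code (retrieved via RAG):
{exemplars}

Write Java code for the test step using only the methods listed above.
"""

def build_prompt(test_step, dependent_classes, exemplars):
    """dependent_classes: {class name: [public method signatures]};
    exemplars: list of (test step, code) pairs, of which two are used."""
    class_context = "\n".join(
        f"{cls}: {', '.join(methods)}" for cls, methods in dependent_classes.items()
    )
    shots = "\n\n".join(f"Step: {s}\nCode:\n{c}" for s, c in exemplars[:2])
    return PROMPT_TEMPLATE.format(
        test_step=test_step, class_context=class_context, exemplars=shots
    )

# Method-level F1, matching the evaluation described in the abstract: compare
# the sets of methods invoked in generated vs. developer-written test code.
METHOD_CALL = re.compile(r"\.(\w+)\s*\(")  # crude proxy for a real Java parser

def method_f1(generated, reference):
    gen = set(METHOD_CALL.findall(generated))
    ref = set(METHOD_CALL.findall(reference))
    if not gen or not ref:
        return 0.0
    precision = len(gen & ref) / len(gen)
    recall = len(gen & ref) / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A production pipeline would extract invoked methods with a proper Java parser rather than a regex; the metric only checks which repository methods the generated code calls, which is exactly why the hallucinated-signature failure mode matters.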
Related papers
- IFEvalCode: Controlled Code Generation [69.28317223249358]
The paper introduces forward and backward constraints generation to improve the instruction-following capabilities of Code LLMs. The authors present IFEvalCode, a multilingual benchmark comprising 1.6K test samples across seven programming languages.
arXiv Detail & Related papers (2025-07-30T08:08:48Z)
- Seed&Steer: Guiding Large Language Models with Compilable Prefix and Branch Signals for Unit Test Generation [20.083515771706473]
Unit tests play a vital role in the software development lifecycle. Recent advances in Large Language Model (LLM)-based approaches have significantly improved automated test generation. We propose Seed&Steer, a two-step approach that combines traditional unit testing techniques with the capabilities of large language models.
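Based only on the title and summary above, a rough sketch of the "compilable seed" idea: traditional tooling emits a test prefix that already compiles, and the model is steered with branch information to complete it. The helper names and the Java scaffold are illustrative assumptions, not Seed&Steer's actual interface.

```python
def compilable_seed(focal_class, focal_method):
    """Build a test prefix that already compiles (the 'seed'); the LLM is
    then asked only to complete the method body. Illustrative scaffold."""
    return (
        "import org.junit.Test;\n"
        f"public class {focal_class}Test {{\n"
        "    @Test\n"
        f"    public void test_{focal_method}() {{\n"
        f"        {focal_class} obj = new {focal_class}();\n"
    )

def steered_prompt(focal_class, focal_method, branch_signals):
    """Combine the seed with branch information (the 'steer')."""
    seed = compilable_seed(focal_class, focal_method)
    branches = "\n".join(f"- cover the branch where {b}" for b in branch_signals)
    return f"Complete this Java unit test:\n{seed}\nBranches to exercise:\n{branches}"
```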
arXiv Detail & Related papers (2025-07-23T07:16:46Z)
- Private GPTs for LLM-driven testing in software development and machine learning [0.0]
We examine the capability of private GPTs to automatically generate executable test code based on requirements. We use acceptance criteria as input, formulated as part of epics or stories, which are typically used in modern development processes.
arXiv Detail & Related papers (2025-06-06T20:05:41Z)
- Sample, Don't Search: Rethinking Test-Time Alignment for Language Models [55.2480439325792]
We introduce QAlign, a new test-time alignment approach. As we scale test-time compute, QAlign converges to sampling from the optimal aligned distribution for each individual prompt. By adopting recent advances in Markov chain Monte Carlo for text generation, our method enables better-aligned outputs without modifying the underlying model or even requiring logit access.
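The summary gives only the general shape of the idea; below is a minimal Metropolis-style sketch of MCMC over whole completions, assuming black-box `generate` and `reward` callables. It illustrates the family of methods, not QAlign's specific proposal or target distribution.

```python
import math
import random

def mcmc_align(generate, reward, prompt, steps=50, beta=1.0):
    """Metropolis-style test-time alignment over whole completions.
    With independent proposals drawn from the base model and a target
    distribution proportional to p(x) * exp(reward(x) / beta), the proposal
    densities cancel and the acceptance ratio reduces to
    exp((r_new - r_old) / beta). No logits are needed: only samples and
    reward scores."""
    current = generate(prompt)
    current_r = reward(prompt, current)
    for _ in range(steps):
        proposal = generate(prompt)            # fresh sample from the base model
        proposal_r = reward(prompt, proposal)
        # Accept improvements outright; otherwise accept with MH probability.
        if proposal_r >= current_r or random.random() < math.exp(
            (proposal_r - current_r) / beta
        ):
            current, current_r = proposal, proposal_r
    return current
```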
arXiv Detail & Related papers (2025-04-04T00:41:40Z)
- Learning to Solve and Verify: A Self-Play Framework for Code and Test Generation [69.62857948698436]
Recent advances in large language models (LLMs) have improved their performance on coding benchmarks. However, improvement is plateauing due to the exhaustion of readily available high-quality data. We propose Sol-Ver, a self-play solver-verifier framework that jointly improves a single model's code and test generation capacity.
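A minimal sketch of one self-play round as described in the summary: the same model writes both code and tests, execution arbitrates, and agreeing pairs become new fine-tuning data. `model(prompt) -> str` is a hypothetical callable; the prompts and data format are assumptions, not Sol-Ver's.

```python
import subprocess
from pathlib import Path
from tempfile import TemporaryDirectory

def passes(code, tests):
    """Run the generated tests against the generated code in a scratch dir.
    The generated tests are expected to import from the 'solution' module."""
    with TemporaryDirectory() as d:
        Path(d, "solution.py").write_text(code)
        Path(d, "test_solution.py").write_text(tests)
        result = subprocess.run(
            ["python", "-m", "pytest", "-q", d],
            capture_output=True, timeout=60,
        )
        return result.returncode == 0

def self_play_round(model, problems):
    """One solver-verifier round: keep only execution-verified pairs."""
    new_data = []
    for problem in problems:
        code = model(f"Write a Python solution for:\n{problem}")
        tests = model(f"Write pytest tests for:\n{problem}")
        if passes(code, tests):
            new_data.append({"problem": problem, "code": code, "tests": tests})
    return new_data  # used to fine-tune the same model for the next round
```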
arXiv Detail & Related papers (2025-02-20T18:32:19Z)
- Aligning Language Models with Demonstrated Feedback [58.834937450242975]
Demonstration ITerated Task Optimization (DITTO) directly aligns language model outputs to a user's demonstrated behaviors. We evaluate DITTO's ability to learn fine-grained style and task alignment across domains such as news articles, emails, and blog posts.
arXiv Detail & Related papers (2024-06-02T23:13:56Z)
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback [58.20547418182074]
We introduce StepCoder, a novel framework for code generation, consisting of two main components.
CCCS addresses the exploration challenge by breaking the long-sequence code generation task into a Curriculum of Code Completion Subtasks.
FGO provides Fine-Grained Optimization by masking unexecuted code segments, so that the model is optimized only on code that actually ran.
Our method improves the ability to explore the output space and outperforms state-of-the-art approaches in corresponding benchmarks.
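A minimal PyTorch sketch of the FGO masking idea described above; the tensor names and shapes are assumptions for illustration, not StepCoder's implementation.

```python
import torch

def fgo_loss(per_token_loss, token_line_ids, executed_lines):
    """Zero the loss on tokens whose source lines never executed, so
    gradients flow only through executed code.
      per_token_loss: float tensor of shape (seq_len,)
      token_line_ids: int tensor of shape (seq_len,), source line per token
      executed_lines: set of line numbers seen in the execution trace"""
    mask = torch.tensor(
        [1.0 if line in executed_lines else 0.0 for line in token_line_ids.tolist()],
        dtype=per_token_loss.dtype,
    )
    # Average over executed tokens only; unexecuted segments contribute nothing.
    return (per_token_loss * mask).sum() / mask.sum().clamp(min=1.0)
```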
arXiv Detail & Related papers (2024-02-02T13:14:31Z)
- Code-Aware Prompting: A study of Coverage Guided Test Generation in Regression Setting using LLM [32.44432906540792]
We present SymPrompt, a code-aware prompting strategy for large language models in test generation.
SymPrompt enhances correct test generations by a factor of 5 and bolsters relative coverage by 26% for CodeGen2.
Notably, when applied to GPT-4, SymPrompt improves coverage by over 2x compared to baseline prompting strategies.
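As a rough illustration of "code-aware" prompting, the sketch below extracts branch conditions from a focal function and builds one coverage-targeting prompt per condition. SymPrompt's actual path analysis is more sophisticated than this `ast`-based approximation (Python 3.9+ for `ast.unparse`).

```python
import ast

def branch_conditions(source):
    """Collect branch conditions of the focal code: a crude stand-in for the
    path information a code-aware prompting strategy feeds the model."""
    tree = ast.parse(source)
    return [
        ast.unparse(node.test)
        for node in ast.walk(tree)
        if isinstance(node, (ast.If, ast.While))
    ]

def path_prompts(source, func_name):
    """One prompt per branch condition, nudging the model toward coverage."""
    return [
        f"Write a unit test for `{func_name}` that drives execution where "
        f"`{cond}` is true, and another where it is false:\n\n{source}"
        for cond in branch_conditions(source)
    ]
```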
arXiv Detail & Related papers (2024-01-31T18:21:49Z)
- Coarse-Tuning Models of Code with Reinforcement Learning Feedback [0.0]
Large Language Models (LLMs) pre-trained on code have emerged as the dominant approach to program synthesis.
We propose RLCF, which further trains a pre-trained LLM via reinforcement learning, using feedback from a grounding function that scores the quality of the generated code.
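A deliberately simple grounding function in the spirit of the summary: reward code that at least compiles. RLCF's actual grounding function is richer than a compile check, so treat this as an assumed minimal stand-in for the RL reward signal.

```python
import os
import py_compile
import tempfile

def grounding_reward(code):
    """Return +1.0 if the code compiles, -1.0 otherwise (sketch only)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        py_compile.compile(path, doraise=True)  # syntax/compile check only
        return 1.0
    except py_compile.PyCompileError:
        return -1.0
    finally:
        os.remove(path)
```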
arXiv Detail & Related papers (2023-05-25T22:09:08Z)
- Fully Autonomous Programming with Large Language Models [0.9558392439655015]
Current approaches to program synthesis with Large Language Models (LLMs) exhibit a "near miss syndrome", where generated code closely resembles a correct solution but fails on tests.
We use OpenAI Codex as the LLM and Program Synthesis Benchmark 2 as a database of problem descriptions and tests for evaluation.
The resulting framework outperforms both conventional usage of Codex without the repair phase and traditional genetic programming approaches.
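A generate-run-repair loop sketched from the summary above. `model(prompt) -> code` and `run_tests(code) -> (passed, log)` are hypothetical callables; the paper used OpenAI Codex as the model and tests from Program Synthesis Benchmark 2.

```python
def synthesize_with_repair(model, run_tests, problem, max_repairs=3):
    """Generate a candidate, execute it, and feed failures back for repair."""
    code = model(f"Solve the following problem in Python:\n{problem}")
    for _ in range(max_repairs):
        passed, log = run_tests(code)
        if passed:
            return code
        # The "near miss syndrome": feed the failure back for a repair attempt.
        code = model(
            f"Problem:\n{problem}\n\nYour previous attempt:\n{code}\n\n"
            f"It failed with:\n{log}\n\nReturn a corrected solution."
        )
    return code
```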
arXiv Detail & Related papers (2023-04-20T16:12:05Z)
- LEVER: Learning to Verify Language-to-Code Generation with Execution [64.36459105535]
We propose LEVER, a simple approach to improve language-to-code generation by learning to verify the generated programs with their execution results.
Specifically, we train verifiers to determine whether a program sampled from the LLMs is correct or not based on the natural language input, the program itself and its execution results.
LEVER consistently improves over the base code LLMs (4.6% to 10.9% with code-davinci) and achieves new state-of-the-art results on all of the evaluated benchmarks.
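A sketch of the verification step described above: each sampled program is executed, a learned verifier scores the (NL input, program, execution result) triple, and candidates are reranked by that score. `verifier` and `execute` are hypothetical stand-ins for the paper's trained model and execution sandbox.

```python
def lever_rerank(candidates, verifier, execute, nl_input):
    """Rerank sampled programs by verifier probability of correctness."""
    scored = []
    for program in candidates:
        result = execute(program)  # execution result, or an error trace
        p_correct = verifier(nl_input, program, result)  # P(correct | triple)
        scored.append((p_correct, program))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[0][1]  # best-scored program
```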
arXiv Detail & Related papers (2023-02-16T18:23:22Z)
- Interactive Code Generation via Test-Driven User-Intent Formalization [60.90035204567797]
Large language models (LLMs) produce code from informal natural language (NL) intent.
It is hard to define a notion of correctness since natural language can be ambiguous and lacks a formal semantics.
We describe a language-agnostic abstract algorithm and a concrete implementation, TiCoder.
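An abstract sketch of the test-driven intent-formalization idea: generated tests become yes/no questions to the user, and approved tests prune the candidate code suggestions. All callables are hypothetical stand-ins; see the paper for the actual TiCoder algorithm.

```python
def passes_all(code, tests):
    """Naively run candidate code plus each approved test in one namespace
    (illustration only; a real system would sandbox this)."""
    env = {}
    try:
        exec(code, env)
        for t in tests:
            exec(t, env)  # assert-based tests raise on failure
        return True
    except Exception:
        return False

def intent_loop(model, ask_user, prompt, rounds=3, n_candidates=10):
    """Approved tests formalize intent and filter the candidate pool."""
    candidates = [model(f"Implement in Python: {prompt}") for _ in range(n_candidates)]
    approved = []
    for _ in range(rounds):
        test = model(f"Write one assert-based test capturing the intent of: {prompt}")
        if ask_user(f"Should this behavior hold?\n{test}"):
            approved.append(test)
            candidates = [c for c in candidates if passes_all(c, approved)]
    return candidates, approved
```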
arXiv Detail & Related papers (2022-08-11T17:41:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.