MMT: Mutation Testing of Java Bytecode with Model Transformation -- An Illustrative Demonstration
- URL: http://arxiv.org/abs/2404.14097v1
- Date: Mon, 22 Apr 2024 11:33:21 GMT
- Title: MMT: Mutation Testing of Java Bytecode with Model Transformation -- An Illustrative Demonstration
- Authors: Christoph Bockisch, Gabriele Taentzer, Daniel Neufeld
- Abstract summary: Mutation testing is an approach to check the robustness of test suites.
We propose a model-driven approach where mutations of Java bytecode can be flexibly defined by model transformation.
The corresponding tool called MMT has been extended with advanced mutation operators for modifying object-oriented structures.
- Score: 0.11470070927586014
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mutation testing is an approach to check the robustness of test suites. The program code is slightly changed by mutations to inject errors. A test suite is robust enough if it finds such errors. Tools for mutation testing usually integrate sets of mutation operators such as swapping arithmetic operators; modern tools typically work with compiled code such as Java bytecode. In this case, the mutations must be defined in such a way that the mutated program can still be loaded and executed. The results of mutation tests depend directly on the possible mutations. More advanced mutations and even domain-specific mutations can pose another challenge to the test suite. Since extending the classical approaches to more complex mutations is not well supported and is difficult, we propose a model-driven approach where mutations of Java bytecode can be flexibly defined by model transformation. The corresponding tool, called MMT, has been extended with advanced mutation operators for modifying object-oriented structures, Java-specific properties, and method calls of APIs, making it the only mutation testing tool for Java bytecode that supports such mutations.
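To make the classical operator idea concrete, below is a minimal sketch of an arithmetic-operator mutation shown at the source level; the class and method names are hypothetical and not taken from MMT, which defines and applies such mutations to Java bytecode via model transformation.

```java
// Minimal sketch of the "swap arithmetic operator" mutation (hypothetical names,
// not MMT's API). A mutation testing tool generates the mutant automatically;
// it is written out here as a second method only for illustration.
public class Calculator {
    // Original implementation under test.
    public int add(int a, int b) {
        return a + b;
    }

    // Mutant: the arithmetic operator '+' has been replaced by '-'.
    public int addMutant(int a, int b) {
        return a - b;
    }

    public static void main(String[] args) {
        Calculator c = new Calculator();
        // A test asserting add(2, 3) == 5 passes on the original but fails on
        // the mutant, i.e. the test suite detects ("kills") this mutation.
        System.out.println("original add(2, 3) = " + c.add(2, 3));       // 5
        System.out.println("mutant   add(2, 3) = " + c.addMutant(2, 3)); // -1
    }
}
```

A suite that never exercises `add` with inputs that distinguish `+` from `-` would leave this mutant alive, which is exactly the kind of weakness mutation testing is meant to expose.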
Related papers
- Algorithmic Capabilities of Random Transformers [49.73113518329544]
We investigate what functions can be learned by randomly initialized transformers in which only the embedding layers are optimized.
We find that these random transformers can perform a wide range of meaningful algorithmic tasks.
Our results indicate that some algorithmic capabilities are present in transformers even before these models are trained.
arXiv Detail & Related papers (2024-10-06T06:04:23Z)
- Fuzzing MLIR Compilers with Custom Mutation Synthesis [6.617861009996863]
We develop a new test generator called SYNTHFUZZ that combines grammar-based fuzzing with custom synthesis mutation.
It obviates the need to manually define custom mutation operators for each dialect.
Our evaluation shows that SYNTHFUZZ on average improves MLIR dialect pair coverage by 1.75 times, which increases branch coverage by 1.22 times.
arXiv Detail & Related papers (2024-04-25T18:00:37Z)
- LLMorpheus: Mutation Testing using Large Language Models [7.312170216336085]
This paper presents a technique where a Large Language Model (LLM) is prompted to suggest mutations by asking it what the placeholders inserted into the source code could be replaced with.
We find LLMorpheus to be capable of producing mutants that resemble existing bugs that cannot be produced by StrykerJS, a state-of-the-art mutation testing tool.
arXiv Detail & Related papers (2024-04-15T17:25:14Z)
- An Empirical Evaluation of Manually Created Equivalent Mutants [54.02049952279685]
Less than 10 % of manually created mutants are equivalent.
Surprisingly, our findings indicate that a significant portion of developers struggle to accurately identify equivalent mutants.
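As a generic illustration of what "equivalent" means here (a sketch of the concept, not an example taken from the cited study): a mutant is equivalent if it is syntactically different from the original program but cannot be distinguished by any test.

```java
// Hypothetical example of an equivalent mutant. For non-negative n, the mutant
// below behaves identically to the original, so no test case can kill it.
public class EquivalentMutantDemo {
    // Original: loop condition uses '<'.
    static int sumBelowOriginal(int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    // Mutant: '<' replaced by '!='. Since i starts at 0 and increases by
    // exactly 1, both versions return the same result for every n >= 0.
    static int sumBelowMutant(int n) {
        int sum = 0;
        for (int i = 0; i != n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumBelowOriginal(5)); // 10
        System.out.println(sumBelowMutant(5));   // 10
    }
}
```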
arXiv Detail & Related papers (2024-04-14T13:04:10Z)
- Enhancing Genetic Improvement Mutations Using Large Language Models [47.62003403631452]
Large language models (LLMs) have been successfully applied to software engineering tasks, including program repair.
We evaluate the use of LLMs as mutation operators for Genetic Improvement (GI) to improve the search process.
We find that the number of patches passing unit tests is up to 75% higher with LLM-based edits than with standard Insert edits.
arXiv Detail & Related papers (2023-10-18T10:24:14Z)
- Contextual Predictive Mutation Testing [17.832774161583036]
We introduce MutationBERT, an approach for predictive mutation testing that simultaneously encodes the source method mutation and test method.
Thanks to its higher precision, MutationBERT saves 33% of the time spent by a prior approach on checking/verifying live mutants.
We validate our input representation and aggregation approaches for lifting predictions from the test matrix level to the test suite level, finding similar improvements in performance.
arXiv Detail & Related papers (2023-09-05T17:00:15Z)
- Learning Transformer Programs [78.9509560355733]
We introduce a procedure for training Transformers that are mechanistically interpretable by design.
Instead of compiling human-written programs into Transformers, we design a modified Transformer that can be trained using gradient-based optimization.
The Transformer Programs can automatically find reasonable solutions, performing on par with standard Transformers of comparable size.
arXiv Detail & Related papers (2023-06-01T20:27:01Z)
- Mutation Testing of Deep Reinforcement Learning Based on Real Faults [11.584571002297217]
This paper builds on the existing approach of Mutation Testing (MT) to extend it to Reinforcement Learning (RL) systems.
We show that the design choice of the mutation killing definition can affect whether or not a mutation is killed as well as the generated test cases.
arXiv Detail & Related papers (2023-01-13T16:45:56Z)
- Equivariant Disentangled Transformation for Domain Generalization under Combination Shift [91.38796390449504]
Combinations of domains and labels are not observed during training but appear in the test environment.
We provide a unique formulation of the combination shift problem based on the concepts of homomorphism, equivariance, and a refined definition of disentanglement.
arXiv Detail & Related papers (2022-08-03T12:31:31Z)
- MutFormer: A context-dependent transformer-based model to predict pathogenic missense mutations [5.153619184788929]
Missense mutations account for approximately half of the known variants responsible for human inherited diseases.
Recent advances in deep learning show that transformer models are particularly powerful at modeling sequences.
We introduce MutFormer, a transformer-based model for prediction of pathogenic missense mutations.
arXiv Detail & Related papers (2021-10-27T20:17:35Z)
- Glushkov's construction for functional subsequential transducers [91.3755431537592]
Glushkov's construction has many interesting properties and they become even more evident when applied to transducers.
A special flavour of regular expressions is introduced, which can be efficiently converted to $\epsilon$-free functional subsequential weighted finite state transducers.
arXiv Detail & Related papers (2020-08-05T17:09:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.