DeepMutation: A Neural Mutation Tool
- URL: http://arxiv.org/abs/2002.04760v2
- Date: Thu, 13 Feb 2020 01:32:42 GMT
- Title: DeepMutation: A Neural Mutation Tool
- Authors: Michele Tufano, Jason Kimko, Shiya Wang, Cody Watson, Gabriele Bavota,
Massimiliano Di Penta, Denys Poshyvanyk
- Abstract summary: DeepMutation is a tool wrapping our deep learning model into a fully automated tool chain.
It can generate, inject, and test mutants learned from real faults.
- Score: 26.482720255691646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mutation testing can be used to assess the fault-detection capabilities of a
given test suite. To this aim, two characteristics of mutation testing
frameworks are of paramount importance: (i) they should generate mutants that
are representative of real faults; and (ii) they should provide a complete tool
chain able to automatically generate, inject, and test the mutants. To address
the first point, we recently proposed an approach using a Recurrent Neural
Network Encoder-Decoder architecture to learn mutants from ~787k faults mined
from real programs. The empirical evaluation of this approach confirmed its
ability to generate mutants representative of real faults. In this paper, we
address the second point, presenting DeepMutation, a tool wrapping our deep
learning model into a fully automated tool chain able to generate, inject, and
test mutants learned from real faults. Video:
https://sites.google.com/view/learning-mutation/deepmutation
Related papers
- muPRL: A Mutation Testing Pipeline for Deep Reinforcement Learning based on Real Faults [19.32186653723838]
We first describe a taxonomy of real RL faults obtained by repository mining.
Then, we present the mutation operators derived from such real faults and implemented in the tool muPRL.
We discuss the experimental results, showing that muPRL is effective at discriminating strong from weak test generators.
arXiv Detail & Related papers (2024-08-27T15:45:13Z)
- Open-Set Deepfake Detection: A Parameter-Efficient Adaptation Method with Forgery Style Mixture [58.60915132222421]
We introduce an approach that is both general and parameter-efficient for face forgery detection.
We design a forgery-style mixture formulation that augments the diversity of forgery source domains.
We show that the designed model achieves state-of-the-art generalizability with significantly reduced trainable parameters.
arXiv Detail & Related papers (2024-08-23T01:53:36Z)
- SINDER: Repairing the Singular Defects of DINOv2 [61.98878352956125]
Vision Transformer models trained on large-scale datasets often exhibit artifacts in the patch tokens they extract.
We propose a novel fine-tuning smooth regularization that rectifies structural deficiencies using only a small dataset.
arXiv Detail & Related papers (2024-07-23T20:34:23Z)
- An Empirical Evaluation of Manually Created Equivalent Mutants [54.02049952279685]
Less than 10% of manually created mutants are equivalent.
Surprisingly, our findings indicate that a significant portion of developers struggle to accurately identify equivalent mutants.
arXiv Detail & Related papers (2024-04-14T13:04:10Z)
- Predicting loss-of-function impact of genetic mutations: a machine learning approach [0.0]
This paper aims to train machine learning models on the attributes of a genetic mutation to predict LoFtool scores.
These attributes included, but were not limited to, the position of a mutation on a chromosome, changes in amino acids, and changes in codons caused by the mutation.
Models were evaluated using five-fold cross-validated averages of r-squared, mean squared error, root mean squared error, mean absolute error, and explained variance.
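The evaluation protocol in this entry (five-fold cross-validated r-squared, mean squared error, root mean squared error, mean absolute error, and explained variance) can be sketched as below. The model and data are placeholders, assuming an ordinary least-squares regressor on synthetic features rather than the paper's actual mutation attributes or learners.

```python
import numpy as np

# Five-fold cross-validated regression metrics on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

def fit_predict(X_tr, y_tr, X_te):
    # Least-squares linear model with an intercept column.
    A = np.c_[X_tr, np.ones(len(X_tr))]
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.c_[X_te, np.ones(len(X_te))] @ coef

def cv_metrics(X, y, k=5):
    folds = np.array_split(np.arange(len(y)), k)
    scores = {"r2": [], "mse": [], "rmse": [], "mae": [], "ev": []}
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        err = y[te] - fit_predict(X[tr], y[tr], X[te])
        ss_res = np.sum(err ** 2)
        ss_tot = np.sum((y[te] - y[te].mean()) ** 2)
        scores["r2"].append(1 - ss_res / ss_tot)
        scores["mse"].append(np.mean(err ** 2))
        scores["rmse"].append(np.sqrt(np.mean(err ** 2)))
        scores["mae"].append(np.mean(np.abs(err)))
        scores["ev"].append(1 - np.var(err) / np.var(y[te]))
    # Average each metric over the k held-out folds.
    return {m: float(np.mean(v)) for m, v in scores.items()}

metrics = cv_metrics(X, y)
print(metrics)
```

Averaging per-fold metrics, as done here, is the standard way to report cross-validated scores; reporting a single metric over pooled predictions is a slightly different estimator.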
arXiv Detail & Related papers (2024-01-26T19:27:38Z)
- Cal-DETR: Calibrated Detection Transformer [67.75361289429013]
We propose a mechanism for calibrated detection transformers (Cal-DETR), particularly for Deformable-DETR, UP-DETR and DINO.
We develop an uncertainty-guided logit modulation mechanism that leverages the uncertainty to modulate the class logits.
Results corroborate the effectiveness of Cal-DETR against the competing train-time methods in calibrating both in-domain and out-domain detections.
arXiv Detail & Related papers (2023-11-06T22:13:10Z)
- Mutation Testing of Deep Reinforcement Learning Based on Real Faults [11.584571002297217]
This paper builds on the existing approach of Mutation Testing (MT) to extend it to Reinforcement Learning (RL) systems.
We show that the design choice of the mutation killing definition can affect whether or not a mutation is killed as well as the generated test cases.
arXiv Detail & Related papers (2023-01-13T16:45:56Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- SLA$^2$P: Self-supervised Anomaly Detection with Adversarial Perturbation [77.71161225100927]
Anomaly detection is a fundamental yet challenging problem in machine learning.
We propose a novel and powerful framework, dubbed SLA$^2$P, for unsupervised anomaly detection.
arXiv Detail & Related papers (2021-11-25T03:53:43Z)
- DeepMetis: Augmenting a Deep Learning Test Set to Increase its Mutation Score [4.444652484439581]
The tool is effective at augmenting the given test set, increasing its capability to detect mutants by 63% on average.
A leave-one-out experiment shows that the augmented test set is capable of exposing unseen mutants.
arXiv Detail & Related papers (2021-09-15T18:20:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.