Machine Learning-Based Test Smell Detection
- URL: http://arxiv.org/abs/2208.07574v1
- Date: Tue, 16 Aug 2022 07:33:15 GMT
- Title: Machine Learning-Based Test Smell Detection
- Authors: Valeria Pontillo, Dario Amoroso d'Aragona, Fabiano Pecorelli, Dario Di
Nucci, Filomena Ferrucci, Fabio Palomba
- Abstract summary: Test smells are symptoms of sub-optimal design choices adopted when developing test cases.
We propose the design and experimentation of a novel machine learning-based approach to detect four test smells.
- Score: 17.957877801382413
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Context: Test smells are symptoms of sub-optimal design choices adopted when
developing test cases. Previous studies have proved their harmfulness for test
code maintainability and effectiveness. Therefore, researchers have been
proposing automated, heuristic-based techniques to detect them. However, the
performance of such detectors is still limited and dependent on thresholds to
be tuned.
Objective: We propose the design and experimentation of a novel test smell
detection approach based on machine learning to detect four test smells.
Method: We plan to develop the largest dataset of manually-validated test
smells. This dataset will be leveraged to train six machine learners and assess
their capabilities in within- and cross-project scenarios. Finally, we plan to
compare our approach with state-of-the-art heuristic-based techniques.
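As a minimal sketch of the kind of pipeline the method describes, the snippet below trains one candidate learner on structural metrics of test methods and evaluates it in a cross-project fashion with a leave-one-project-out split. It is not the authors' implementation: the metric names, the dataset file, the Eager Test label, and the choice of a random forest are illustrative assumptions.

```python
# Hedged sketch: ML-based test smell detection with cross-project evaluation.
# Dataset layout, feature names, and the target smell are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Hypothetical dataset: one row per test method, with code metrics,
# the originating project, and a manually validated smell label.
data = pd.read_csv("test_smell_dataset.csv")
features = ["loc", "num_asserts", "num_method_calls", "cyclomatic_complexity"]
X, y = data[features], data["has_eager_test"]

# Cross-project scenario: each fold holds out all test methods of one project.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
scores = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(),
                         groups=data["project"], scoring="f1")
print("Cross-project F1 per held-out project:", scores)
```

A within-project scenario would instead split training and test data inside each project, for example with a stratified k-fold per project.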
Related papers
- Evaluating Large Language Models in Detecting Test Smells [1.5691664836504473]
The presence of test smells can negatively impact the maintainability and reliability of software.
This study aims to evaluate the capability of Large Language Models (LLMs) in automatically detecting test smells.
arXiv Detail & Related papers (2024-07-27T14:00:05Z)
- Efficient Transferability Assessment for Selection of Pre-trained Detectors [63.21514888618542]
This paper studies the efficient transferability assessment of pre-trained object detectors.
We build up a detector transferability benchmark which contains a large and diverse zoo of pre-trained detectors.
Experimental results demonstrate that our method outperforms other state-of-the-art approaches in assessing transferability.
arXiv Detail & Related papers (2024-03-14T14:23:23Z)
- A Discrepancy Aware Framework for Robust Anomaly Detection [51.710249807397695]
We present a Discrepancy Aware Framework (DAF), which demonstrates robust performance consistently with simple and cheap strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under the simple synthesis strategies, it outperforms existing methods by a large margin. Furthermore, it also achieves the state-of-the-art localization performance.
arXiv Detail & Related papers (2023-10-11T15:21:40Z)
- Evaluating the Robustness of Test Selection Methods for Deep Neural Networks [32.01355605506855]
Testing deep learning-based systems is crucial but challenging due to the required time and labor for labeling collected raw data.
To alleviate the labeling effort, multiple test selection methods have been proposed where only a subset of test data needs to be labeled.
This paper explores when and to what extent test selection methods fail for testing.
arXiv Detail & Related papers (2023-07-29T19:17:49Z)
- Differential Analysis of Triggers and Benign Features for Black-Box DNN Backdoor Detection [18.481370450591317]
This paper proposes a data-efficient detection method for deep neural networks against backdoor attacks under a black-box scenario.
To measure the effects of triggers and benign features on determining the backdoored network output, we introduce five metrics.
We show the efficacy of our methodology against a broad range of backdoor attacks, supported by ablation studies and comparisons with existing approaches.
arXiv Detail & Related papers (2023-07-11T16:39:43Z)
- Model-Free Sequential Testing for Conditional Independence via Testing by Betting [8.293345261434943]
The proposed test allows researchers to analyze an incoming i.i.d. data stream with any arbitrary dependency structure.
We allow the processing of data points online as soon as they arrive and stop data acquisition once significant results are detected.
arXiv Detail & Related papers (2022-10-01T20:05:33Z)
- ALBench: A Framework for Evaluating Active Learning in Object Detection [102.81795062493536]
This paper contributes an active learning benchmark framework named as ALBench for evaluating active learning in object detection.
Developed on an automatic deep model training system, this ALBench framework is easy-to-use, compatible with different active learning algorithms, and ensures the same training and testing protocols.
arXiv Detail & Related papers (2022-07-27T07:46:23Z)
- TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision [70.05605071885914]
We propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples.
We show the success of our method on the common benchmark dataset CIFAR10-C.
arXiv Detail & Related papers (2022-05-18T05:43:06Z)
- On the use of test smells for prediction of flaky tests [0.0]
Flaky tests hamper the evaluation of test results and can increase costs.
Existing approaches based on the test case vocabulary may be context-sensitive and prone to overfitting.
We investigate the use of test smells as predictors of flaky tests (see the illustrative sketch after this list).
arXiv Detail & Related papers (2021-08-26T13:21:55Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- AutoOD: Automated Outlier Detection via Curiosity-guided Search and Self-imitation Learning [72.99415402575886]
Outlier detection is an important data mining task with numerous practical applications.
We propose AutoOD, an automated outlier detection framework, which aims to search for an optimal neural network model.
Experimental results on various real-world benchmark datasets demonstrate that the deep model identified by AutoOD achieves the best performance.
arXiv Detail & Related papers (2020-06-19T18:57:51Z)
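As referenced in the flaky-test entry above, a minimal illustrative sketch of using test smells as predictors of flaky tests follows. The smell indicator columns, the dataset file, and the logistic-regression learner are assumptions, not the cited paper's setup.

```python
# Hedged sketch: predicting flaky tests from binary test smell indicators.
# Column names, file name, and classifier choice are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

data = pd.read_csv("flaky_test_dataset.csv")
smells = ["assertion_roulette", "eager_test", "mystery_guest", "resource_optimism"]
X, y = data[smells], data["is_flaky"]

# Hold out 20% of the tests, stratified on the flakiness label.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```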
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.