A comprehensible analysis of the efficacy of Ensemble Models for Bug Prediction
- URL: http://arxiv.org/abs/2310.12133v1
- Date: Wed, 18 Oct 2023 17:43:54 GMT
- Title: A comprehensible analysis of the efficacy of Ensemble Models for Bug Prediction
- Authors: Ingrid Marçal and Rogério Eduardo Garcia
- Abstract summary: We present a comparison and analysis of the efficacy of two AI-based approaches, namely single AI models and ensemble AI models, for predicting the probability of a Java class being buggy.
Our experimental findings indicate that an ensemble of AI models can outperform individual AI models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The correctness of software systems is vital for their effective operation.
It makes discovering and fixing software bugs an important development task.
The increasing use of Artificial Intelligence (AI) techniques in Software
Engineering has led to a number of approaches that can assist software
developers in identifying potential bugs in code. In this paper, we
present a comprehensible comparison and analysis of the efficacy of two
AI-based approaches, namely single AI models and ensemble AI models, for
predicting the probability of a Java class being buggy. We used two open-source
Java components from the Apache Commons Project for training and evaluating the
models. Our experimental findings indicate that the ensemble of AI models can
outperform individual AI models. We also offer insight
into the factors that contribute to the enhanced performance of the ensemble AI
model. The presented results demonstrate the potential of using ensemble AI
models to enhance bug prediction results, which could ultimately result in more
reliable software systems.
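The abstract does not name the learners or the ensembling scheme, so the sketch below is only a minimal illustration of the single-model-versus-ensemble comparison it describes. It assumes scikit-learn, a soft-voting ensemble of three common classifiers, and synthetic stand-in features in place of the real code metrics mined from the Apache Commons components; none of these choices are taken from the paper.

```python
# Hypothetical sketch: single model vs. soft-voting ensemble for
# "buggy vs. not buggy" class prediction (not the authors' pipeline).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a labelled dataset of Java classes: each row is one
# class, the features play the role of code metrics, y = 1 means "buggy".
X, y = make_classification(n_samples=500, n_features=10,
                           weights=[0.8, 0.2], random_state=42)

# Single-model baseline.
single = RandomForestClassifier(random_state=42)

# Soft-voting ensemble: averages the predicted probabilities of its members.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(random_state=42)),
        ("rf", RandomForestClassifier(random_state=42)),
    ],
    voting="soft",
)

for name, model in [("single", single), ("ensemble", ensemble)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name:8s} mean F1 = {scores.mean():.3f}")
```

In the paper's setting, the features would instead be metrics computed per Java class from the two Apache Commons components, and both approaches would be compared on the same held-out data.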
Related papers
- Ratio law: mathematical descriptions for a universal relationship between AI performance and input samples [0.0]
We present a ratio law showing that model performance and the ratio of minority to majority samples can be closely linked by two concise equations.
We mathematically prove that an AI model achieves its optimal performance on a balanced dataset.
arXiv Detail & Related papers (2024-11-01T13:43:19Z)
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- Next-Gen Software Engineering: AI-Assisted Big Models [0.0]
This paper aims to facilitate a synthesis between models and AI in software engineering.
The paper provides an overview of the current status of AI-assisted software engineering.
A vision of AI-assisted Big Models in SE is put forth, with the aim of capitalising on the advantages inherent to both approaches.
arXiv Detail & Related papers (2024-09-26T16:49:57Z)
- Adaptation of XAI to Auto-tuning for Numerical Libraries [0.0]
Explainable AI (XAI) technology is gaining prominence, aiming to streamline AI model development and alleviate the burden of explaining AI outputs to users.
This research focuses on XAI for AI models when integrated into two different processes for practical numerical computations.
arXiv Detail & Related papers (2024-05-12T09:00:56Z)
- AI Model Utilization Measurements For Finding Class Encoding Patterns [2.702380921892937]
This work addresses the problems of designing utilization measurements of trained artificial intelligence (AI) models.
The problems are motivated by the lack of explainability of AI models in security and safety critical applications.
arXiv Detail & Related papers (2022-12-12T02:18:10Z)
- Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness [92.26039686430204]
We show that even in carefully-designed systems, complementary performance can be elusive.
First, we provide a theoretical framework for modeling simple human-algorithm systems.
Next, we use this model to prove conditions where complementarity is impossible.
arXiv Detail & Related papers (2022-02-17T18:44:41Z)
- Data-Driven and SE-assisted AI Model Signal-Awareness Enhancement and Introspection [61.571331422347875]
We propose a data-driven approach to enhance models' signal-awareness.
We combine the SE concept of code complexity with the AI technique of curriculum learning.
We achieve up to 4.8x improvement in model signal awareness.
arXiv Detail & Related papers (2021-11-10T17:58:18Z)
- A Model-Driven Engineering Approach to Machine Learning and Software Modeling [0.5156484100374059]
Models are used in both the Software Engineering (SE) and the Artificial Intelligence (AI) communities.
The main focus is on the Internet of Things (IoT) and smart Cyber-Physical Systems (CPS) use cases, where both ML and model-driven SE play a key role.
arXiv Detail & Related papers (2021-07-06T15:50:50Z)
- Evaluation Toolkit For Robustness Testing Of Automatic Essay Scoring Systems [64.4896118325552]
We evaluate the current state-of-the-art AES models using a model adversarial evaluation scheme and associated metrics.
We find that AES models are highly overstable. Even heavy modifications (as much as 25%) with content unrelated to the topic of the questions do not decrease the score produced by the models.
arXiv Detail & Related papers (2020-07-14T03:49:43Z)