A Novel Taxonomy and Classification Scheme for Code Smell Interactions
- URL: http://arxiv.org/abs/2504.18469v1
- Date: Fri, 25 Apr 2025 16:24:11 GMT
- Title: A Novel Taxonomy and Classification Scheme for Code Smell Interactions
- Authors: Ruchin Gupta, Sandeep Kumar Singh
- Abstract summary: This study presents a novel taxonomy and a proposed classification scheme for possible code smell interactions. Experiments have been carried out using several popular machine learning (ML) models. Results primarily show the presence of code smell interactions, namely Inter-smell Detection within a domain.
- Score: 2.6597689982591044
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Code smells are indicators of potential design flaws in source code. They do not appear alone but in combination with other smells, creating complex interactions. Existing literature classifies these smell interactions into collocated, coupled, and inter-smell relations; however, to the best of our knowledge, no research has used existing knowledge of code smells or their relationships with other code smells to detect code smells. This gap highlights the need for a deeper investigation into how code smells interact with each other and assist in their detection, which would improve the overall comprehension of code smells and their interactions. This study presents a novel taxonomy and a proposed classification scheme for possible code smell interactions, considering a specific programming language as a domain. This paper deals with one scenario, called Inter-smell Detection within the domain. The experiments have been carried out using several popular machine learning (ML) models. Results primarily show the presence of code smell interactions, namely Inter-smell Detection within a domain. These results are compatible with the available facts in the literature, suggesting a promising direction for future research in code smell detection.
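The inter-smell detection idea in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual pipeline: the smell pair (Long Method as a companion feature when detecting Feature Envy), the code metrics, and the synthetic data are all assumptions made for illustration.

```python
# Sketch of "inter-smell detection": using the known presence of one code
# smell as an extra feature when training a classifier for another smell.
# The smell pair, metrics, and data below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(42)
n = 1000

# Synthetic method-level metrics (stand-ins for LOC, cyclomatic complexity).
loc = rng.integers(5, 300, n)
complexity = rng.integers(1, 40, n)

# Assume "Long Method" follows from size/complexity, and "Feature Envy" is
# correlated with Long Method -- the inter-smell relation being exploited.
long_method = ((loc > 120) & (complexity > 10)).astype(int)
feature_envy = ((long_method == 1) & (rng.random(n) < 0.7)).astype(int)

# Baseline: metrics only. Inter-smell variant: metrics + companion smell.
variants = {
    "metrics only": np.column_stack([loc, complexity]),
    "metrics + Long Method": np.column_stack([loc, complexity, long_method]),
}

results = {}
for name, X in variants.items():
    Xtr, Xte, ytr, yte = train_test_split(X, feature_envy, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
    results[name] = f1_score(yte, clf.predict(Xte))
    print(name, "F1 =", round(results[name], 3))
```

Comparing the two F1 scores shows whether the companion smell's label carries signal beyond the raw metrics, which is the essence of the inter-smell detection scenario.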
Related papers
- EnseSmells: Deep ensemble and programming language models for automated code smells detection [3.974095344344234]
A smell in software source code denotes an indication of suboptimal design and implementation decisions.
This paper proposes a novel approach to code smell detection, constructing a deep learning architecture that places importance on the fusion of structural features and statistical semantics.
arXiv Detail & Related papers (2025-02-07T15:35:19Z)
- How Propense Are Large Language Models at Producing Code Smells? A Benchmarking Study [45.126233498200534]
We introduce CodeSmellEval, a benchmark designed to evaluate the propensity of Large Language Models for generating code smells.
Our benchmark includes a novel metric, Propensity Smelly Score (PSC), and a curated dataset of method-level code smells, CodeSmellData.
To demonstrate the use of CodeSmellEval, we conducted a case study with two state-of-the-art LLMs, CodeLlama and Mistral.
arXiv Detail & Related papers (2024-12-25T21:56:35Z)
- Unlocking Potential Binders: Multimodal Pretraining DEL-Fusion for Denoising DNA-Encoded Libraries [51.72836644350993]
We present the Multimodal Pretraining DEL-Fusion model (MPDF).
We develop pretraining tasks applying contrastive objectives between different compound representations and their text descriptions.
We propose a novel DEL-fusion framework that amalgamates compound information at the atomic, submolecular, and molecular levels.
arXiv Detail & Related papers (2024-09-07T17:32:21Z)
- On the Prevalence, Evolution, and Impact of Code Smells in Simulation Modelling Software [2.608075651391582]
This paper investigates the prevalence, evolution, and impact of code smells in simulation software systems.
Certain code smells (e.g. Long Statement, Magic Number) are more prevalent in simulation software systems than in traditional software systems.
Our experiments show that some code smells such as Magic Number and Long List can survive a long time in simulation software systems.
arXiv Detail & Related papers (2024-09-06T00:47:02Z)
- FKA-Owl: Advancing Multimodal Fake News Detection through Knowledge-Augmented LVLMs [48.32113486904612]
We propose FKA-Owl, a framework that leverages forgery-specific knowledge to augment Large Vision-Language Models (LVLMs).
Experiments on the public benchmark demonstrate that FKA-Owl achieves superior cross-domain performance compared to previous methods.
arXiv Detail & Related papers (2024-03-04T12:35:09Z)
- Prompt Learning for Multi-Label Code Smell Detection: A Promising Approach [6.74877139507271]
Code smells indicate potential software quality problems, so developers can identify refactoring opportunities by detecting them.
We propose PromptSmell, a novel approach based on prompt learning for detecting multi-label code smells.
arXiv Detail & Related papers (2024-02-16T01:50:46Z)
- InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback [50.725076393314964]
We introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning environment.
Our framework is language- and platform-agnostic, and uses self-contained Docker environments to provide safe and reproducible execution.
We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies.
arXiv Detail & Related papers (2023-06-26T17:59:50Z)
- Empirical Analysis on Effectiveness of NLP Methods for Predicting Code Smell [3.2973778921083357]
A code smell is a surface indicator of an inherent problem in the system.
We use three Extreme Learning Machine (ELM) kernels over 629 packages to identify eight code smells.
Our findings indicate that the radial basis function kernel performs best of the three kernel methods, with a mean accuracy of 98.52%.
arXiv Detail & Related papers (2021-08-08T12:10:20Z)
- COSEA: Convolutional Code Search with Layer-wise Attention [90.35777733464354]
We propose a new deep learning architecture, COSEA, which leverages convolutional neural networks with layer-wise attention to capture the code's intrinsic structural logic.
COSEA can achieve significant improvements over state-of-the-art methods on code search tasks.
arXiv Detail & Related papers (2020-10-19T13:53:38Z)
- Visual Relationship Detection with Visual-Linguistic Knowledge from Multimodal Representations [103.00383924074585]
Visual relationship detection aims to reason over relationships among salient objects in images.
We propose a novel approach named Visual-Linguistic Representations from Transformers (RVL-BERT).
RVL-BERT performs spatial reasoning with both visual and language commonsense knowledge learned via self-supervised pre-training.
arXiv Detail & Related papers (2020-09-10T16:15:09Z)
- Visual Compositional Learning for Human-Object Interaction Detection [111.05263071111807]
Human-Object Interaction (HOI) detection aims to localize and infer relationships between humans and objects in an image.
It is challenging because the enormous number of possible combinations of object and verb types forms a long-tail distribution.
We devise a deep Visual Compositional Learning framework, which is a simple yet efficient framework to effectively address this problem.
arXiv Detail & Related papers (2020-07-24T08:37:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.