DACOS: A Manually Annotated Dataset of Code Smells
- URL: http://arxiv.org/abs/2303.08729v1
- Date: Wed, 15 Mar 2023 16:13:40 GMT
- Title: DACOS: A Manually Annotated Dataset of Code Smells
- Authors: Himesh Nandani, Mootez Saad, Tushar Sharma
- Abstract summary: We present DACOS, a manually annotated dataset containing 10,267 annotations for 5,192 code snippets.
The dataset targets three kinds of code smells at different granularity: multifaceted abstraction, complex method, and long parameter list.
We have developed TagMan, a web application to help annotators view and mark the snippets one-by-one and record the provided annotations.
- Score: 4.753388560240438
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Researchers apply machine-learning techniques for code smell detection to
counter the subjectivity of many code smells. Such approaches need a large,
manually annotated dataset for training and benchmarking. Existing literature
offers a few datasets; however, they are small in size and, more importantly,
do not focus on the subjective code snippets. In this paper, we present DACOS,
a manually annotated dataset containing 10,267 annotations for 5,192 code
snippets. The dataset targets three kinds of code smells at different
granularity: multifaceted abstraction, complex method, and long parameter list.
The dataset is created in two phases. The first phase helps us identify the
code snippets that are potentially subjective by determining the thresholds of
metrics used to detect a smell. The second phase collects annotations for
potentially subjective snippets. We also offer an extended dataset DACOSX that
includes definitely benign and definitely smelly snippets by using the
thresholds identified in the first phase. We have developed TagMan, a web
application to help annotators view and mark the snippets one-by-one and record
the provided annotations. We make the datasets and the web application
accessible publicly. This dataset will help researchers working on smell
detection techniques to build relevant and context-aware machine-learning
models.
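
The two-phase design hinges on metric thresholds: snippets whose metric values fall near a threshold are treated as potentially subjective and sent to annotators, while values far on either side are labeled definitely benign or definitely smelly (the DACOSX extension). The sketch below illustrates this idea for the long parameter list smell in Python, using the standard `ast` module. The threshold values and band width here are hypothetical placeholders; the actual thresholds used to build DACOS and DACOSX are determined empirically in the paper.

```python
import ast

# Hypothetical thresholds -- illustrative only. The real thresholds in
# DACOS/DACOSX are derived empirically during the dataset's first phase.
BENIGN_MAX = 3    # at or below: definitely benign
SMELLY_MIN = 7    # at or above: definitely a long parameter list

def classify_functions(source: str) -> dict[str, str]:
    """Map each function name to 'benign', 'smelly', or 'subjective'
    based on its parameter count relative to the two thresholds."""
    labels = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            n_params = len(node.args.args) + len(node.args.kwonlyargs)
            if n_params <= BENIGN_MAX:
                labels[node.name] = "benign"
            elif n_params >= SMELLY_MIN:
                labels[node.name] = "smelly"
            else:
                # Borderline cases go to human annotators (e.g. via TagMan).
                labels[node.name] = "subjective"
    return labels

snippet = (
    "def small(a, b):\n    pass\n"
    "def medium(a, b, c, d, e):\n    pass\n"
    "def large(a, b, c, d, e, f, g, h):\n    pass\n"
)
print(classify_functions(snippet))
# {'small': 'benign', 'medium': 'subjective', 'large': 'smelly'}
```

Only the middle band reaches annotators, which is how the dataset concentrates human effort on the cases where smell judgments genuinely disagree.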
Related papers
- Harlequin: Color-driven Generation of Synthetic Data for Referring Expression Comprehension [4.164728134421114]
Referring Expression Comprehension (REC) aims to identify a particular object in a scene from a natural language expression, and is an important topic in visual language understanding.
State-of-the-art methods for this task are based on deep learning, which generally requires expensive and manually labeled annotations.
We propose a novel framework that generates artificial data for the REC task, taking into account both textual and visual modalities.
arXiv Detail & Related papers (2024-11-22T09:08:36Z) - Anno-incomplete Multi-dataset Detection [67.69438032767613]
We propose a novel problem, "Anno-incomplete Multi-dataset Detection".
We develop an end-to-end multi-task learning architecture which can accurately detect all the object categories with multiple partially annotated datasets.
arXiv Detail & Related papers (2024-08-29T03:58:21Z) - Open-Vocabulary Camouflaged Object Segmentation [66.94945066779988]
We introduce a new task, open-vocabulary camouflaged object segmentation (OVCOS)
We construct a large-scale complex scene dataset (OVCamo) containing 11,483 hand-selected images with fine annotations and corresponding object classes.
By integrating the guidance of class semantic knowledge and the supplement of visual structure cues from the edge and depth information, the proposed method can efficiently capture camouflaged objects.
arXiv Detail & Related papers (2023-11-19T06:00:39Z) - Thinking Like an Annotator: Generation of Dataset Labeling Instructions [59.603239753484345]
We introduce a new task, Labeling Instruction Generation, to address missing publicly available labeling instructions.
We take a reasonably annotated dataset and: 1) generate a set of examples that are visually representative of each category in the dataset; 2) provide a text label that corresponds to each of the examples.
This framework acts as a proxy to human annotators that can help to both generate a final labeling instruction set and evaluate its quality.
arXiv Detail & Related papers (2023-06-24T18:32:48Z) - A systematic literature review on the code smells datasets and
validation mechanisms [13.359901661369236]
A survey of 45 existing datasets reveals that the adequacy of a dataset for detecting smells highly depends on relevant properties.
Most existing datasets support God Class, Long Method, and Feature Envy while six smells in Fowler and Beck's catalog are not supported by any datasets.
arXiv Detail & Related papers (2023-06-02T08:57:31Z) - Annotation Error Detection: Analyzing the Past and Present for a More
Coherent Future [63.99570204416711]
We reimplement 18 methods for detecting potential annotation errors and evaluate them on 9 English datasets.
We define a uniform evaluation setup including a new formalization of the annotation error detection task.
We release our datasets and implementations in an easy-to-use and open source software package.
arXiv Detail & Related papers (2022-06-05T22:31:45Z) - Omni-DETR: Omni-Supervised Object Detection with Transformers [165.4190908259015]
We consider the problem of omni-supervised object detection, which can use unlabeled, fully labeled and weakly labeled annotations.
Under this unified architecture, different types of weak labels can be leveraged to generate accurate pseudo labels.
We have found that weak annotations can help to improve detection performance and a mixture of them can achieve a better trade-off between annotation cost and accuracy.
arXiv Detail & Related papers (2022-03-30T06:36:09Z) - Simple multi-dataset detection [83.9604523643406]
We present a simple method for training a unified detector on multiple large-scale datasets.
We show how to automatically integrate dataset-specific outputs into a common semantic taxonomy.
Our approach does not require manual taxonomy reconciliation.
arXiv Detail & Related papers (2021-02-25T18:55:58Z) - Exploit Multiple Reference Graphs for Semi-supervised Relation
Extraction [12.837901211741443]
We propose to build the connection between the unlabeled data and the labeled ones.
Specifically, we first use three kinds of information to construct reference graphs.
The goal is to semantically or lexically connect the unlabeled sample(s) to the labeled one(s).
arXiv Detail & Related papers (2020-10-22T02:14:27Z) - Handling Missing Annotations in Supervised Learning Data [0.0]
Activity of Daily Living (ADL) recognition is an example of systems that exploit very large raw sensor data readings.
The size of the generated dataset is so huge that it is almost impossible for a human annotator to give a certain label to every single instance in the dataset.
In this work, we propose and investigate three different paradigms to handle these gaps.
arXiv Detail & Related papers (2020-02-17T18:23:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.