Generative AI for Software Metadata: Overview of the Information
Retrieval in Software Engineering Track at FIRE 2023
- URL: http://arxiv.org/abs/2311.03374v1
- Date: Fri, 27 Oct 2023 14:13:23 GMT
- Title: Generative AI for Software Metadata: Overview of the Information
Retrieval in Software Engineering Track at FIRE 2023
- Authors: Srijoni Majumdar, Soumen Paul, Debjyoti Paul, Ayan Bandyopadhyay,
Samiran Chattopadhyay, Partha Pratim Das, Paul D Clough, Prasenjit Majumder
- Abstract summary: The Information Retrieval in Software Engineering (IRSE) track aims to develop solutions for automated evaluation of code comments.
The dataset consists of 9048 pairs of code comments and their surrounding code snippets, extracted from open-source C-based projects.
The labels generated by large language models increase the bias in the prediction model but lead to less over-fitted results.
- Score: 18.616716369775883
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Information Retrieval in Software Engineering (IRSE) track aims to
develop solutions for the automated evaluation of code comments in a machine
learning framework, based on labels generated by humans and large language models.
The track features a binary classification task: classifying comments as useful
or not useful. The dataset consists of 9048 pairs of code comments and their
surrounding code snippets, extracted from open-source C-based GitHub projects,
together with an additional dataset generated individually by the teams using
large language models. In total, 56 experiments were submitted by 17 teams from
various universities and software companies. The submissions were evaluated
quantitatively using the F1-Score and qualitatively based on the type of features
developed, the supervised learning model used, and the corresponding
hyper-parameters. The labels generated by large language models increase the bias
in the prediction model but lead to less over-fitted results.
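To make the evaluation setup concrete, here is a minimal sketch of the kind of supervised pipeline the task calls for: features computed over a comment and its surrounding code snippet, a binary classifier, and the F1-Score used for quantitative comparison. The file name and column names are assumptions for illustration, and TF-IDF with logistic regression stands in for the BERT-style contextual embeddings many submissions used; this is not any participant's system.

```python
# Minimal sketch (assumed data layout) of the IRSE binary classification task:
# classify (comment, surrounding code) pairs as "Useful" / "Not Useful" and
# report the F1-Score used for quantitative evaluation.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("irse_code_comments.csv")      # hypothetical file; assumed columns: comment, code, label
text = df["comment"] + " [SEP] " + df["code"]   # pair each comment with its surrounding snippet
labels = (df["label"] == "Useful").astype(int)  # 1 = useful, 0 = not useful

X_train, X_test, y_train, y_test = train_test_split(
    text, labels, test_size=0.2, stratify=labels, random_state=42
)

# TF-IDF n-gram features + logistic regression as a simple stand-in for the
# contextual-embedding classifiers described in the submissions.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

print("F1-Score:", f1_score(y_test, model.predict(X_test)))
```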
Related papers
- GenCodeSearchNet: A Benchmark Test Suite for Evaluating Generalization
in Programming Language Understanding [5.9535699822923]
We propose a new benchmark dataset called GenCodeSearchNet (GeCS) to evaluate the programming language understanding capabilities of language models.
As part of the full dataset, we introduce a new, manually curated subset StatCodeSearch that focuses on R, a popular but so far underrepresented programming language.
For evaluation and comparison, we collect several baseline results using fine-tuned BERT-style models and GPT-style large language models.
arXiv Detail & Related papers (2023-11-16T09:35:00Z)
- Unifying the Perspectives of NLP and Software Engineering: A Survey on Language Models for Code [24.936022005837415]
We review the recent advancements in software engineering with language models, covering 70+ models, 40+ evaluation tasks, 180+ datasets, and 900 related works.
We break down code processing models into general language models represented by the GPT family and specialized models that are specifically pretrained on code.
We also go beyond programming and review LLMs' application in other software engineering activities including requirement engineering, testing, deployment, and operations.
arXiv Detail & Related papers (2023-11-14T08:34:26Z)
- EvalCrafter: Benchmarking and Evaluating Large Video Generation Models [70.19437817951673]
We argue that it is hard to judge large conditional generative models using simple metrics, since these models are often trained on very large datasets and have multi-aspect abilities.
Our approach involves generating a diverse and comprehensive list of 700 prompts for text-to-video generation.
Then, we evaluate the state-of-the-art video generative models on our carefully designed benchmark, in terms of visual qualities, content qualities, motion qualities, and text-video alignment with 17 well-selected objective metrics.
arXiv Detail & Related papers (2023-10-17T17:50:46Z)
- Leveraging Generative AI: Improving Software Metadata Classification with Generated Code-Comment Pairs [0.0]
In software development, code comments play a crucial role in enhancing code comprehension and collaboration.
This research paper addresses the challenge of objectively classifying code comments as "Useful" or "Not Useful".
We propose a novel solution that harnesses contextualized embeddings, particularly BERT, to automate this classification process.
arXiv Detail & Related papers (2023-10-14T12:09:43Z)
- Software Metadata Classification based on Generative Artificial Intelligence [0.0]
This paper presents a novel approach to enhance the performance of binary code comment quality classification models through the application of Generative Artificial Intelligence (AI).
By leveraging the OpenAI API, a dataset comprising 1239 newly generated code-comment pairs has been labelled as "Useful" or "Not Useful".
The results affirm the effectiveness of this methodology, indicating its applicability in broader contexts within software development and quality assurance domains.
arXiv Detail & Related papers (2023-10-14T07:38:16Z)
- L2CEval: Evaluating Language-to-Code Generation Capabilities of Large Language Models [102.00201523306986]
We present L2CEval, a systematic evaluation of the language-to-code generation capabilities of large language models (LLMs).
We analyze the factors that potentially affect their performance, such as model size, pretraining data, instruction tuning, and different prompting methods.
In addition to assessing model performance, we measure confidence calibration for the models and conduct human evaluations of the output programs.
arXiv Detail & Related papers (2023-09-29T17:57:00Z)
- Software Entity Recognition with Noise-Robust Learning [31.259250137320468]
We leverage the Wikipedia taxonomy to develop a comprehensive entity lexicon with 79K unique software entities in 12 fine-grained types.
We then propose self-regularization, a noise-robust learning approach, to the training of our software entity recognition model by accounting for many dropouts.
Results show that models trained with self-regularization outperform both their vanilla counterparts and state-of-the-art approaches on our Wikipedia benchmark and two Stack Overflow benchmarks.
arXiv Detail & Related papers (2023-08-21T08:41:46Z)
- CodeExp: Explanatory Code Document Generation [94.43677536210465]
Existing code-to-text generation models produce only high-level summaries of code.
We conduct a human study to identify the criteria for high-quality explanatory docstring for code.
We present a multi-stage fine-tuning strategy and baseline models for the task.
arXiv Detail & Related papers (2022-11-25T18:05:44Z)
- GEMv2: Multilingual NLG Benchmarking in a Single Line of Code [161.1761414080574]
The Generation, Evaluation, and Metrics Benchmark (GEMv2) introduces a modular infrastructure for dataset, model, and metric developers.
GEMv2 supports 40 documented datasets in 51 languages.
Models for all datasets can be evaluated online and our interactive data card creation and rendering tools make it easier to add new datasets to the living benchmark.
arXiv Detail & Related papers (2022-06-22T17:52:30Z)
- Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks [95.06087720086133]
Natural-Instructions v2 is a collection of 1,600+ diverse language tasks and their expert-written instructions.
The benchmark covers 70+ distinct task types, such as tagging, in-filling, and rewriting.
This benchmark enables large-scale evaluation of cross-task generalization of the models.
arXiv Detail & Related papers (2022-04-16T03:12:30Z)
- GENIE: A Leaderboard for Human-in-the-Loop Evaluation of Text Generation [83.10599735938618]
Leaderboards have eased model development for many NLP datasets by standardizing their evaluation and delegating it to an independent external repository.
This work introduces GENIE, a human evaluation leaderboard that brings the ease of leaderboards to text generation tasks.
arXiv Detail & Related papers (2021-01-17T00:40:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.