Deep Learning Models in Software Requirements Engineering
- URL: http://arxiv.org/abs/2105.07771v1
- Date: Mon, 17 May 2021 12:27:30 GMT
- Title: Deep Learning Models in Software Requirements Engineering
- Authors: Maria Naumcheva
- Abstract summary: We have applied the vanilla sentence autoencoder to the sentence generation task and evaluated its performance.
The generated sentences are not plausible English and contain only a few meaningful words.
We believe that applying the model to a larger dataset may produce significantly better results.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Requirements elicitation is an important phase of any software project:
errors in requirements are more expensive to fix than errors introduced at
later stages of the software life cycle. Nevertheless, many projects do not devote
sufficient time to requirements. Automated requirements generation can improve
the quality of software projects. In this article we have accomplished the
first step of the research on this topic: we have applied the vanilla sentence
autoencoder to the sentence generation task and evaluated its performance. The
generated sentences are not plausible English and contain only a few meaningful
words. We believe that applying the model to a larger dataset may produce
significantly better results. Further research is needed to improve the quality
of generated data.
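To make the "vanilla sentence autoencoder" idea concrete, here is a minimal sketch of the compress-then-reconstruct principle. This is an assumption-laden toy, not the paper's model: the paper trains a sequence autoencoder over tokens, whereas this version uses a linear bag-of-words encoder/decoder trained with plain gradient descent on a few invented requirements-like sentences.

```python
import numpy as np

# Toy corpus of requirements-like sentences (invented for illustration).
sentences = [
    "the system shall log every transaction",
    "the system shall encrypt stored data",
    "users shall reset their password online",
]
vocab = sorted({w for s in sentences for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}

def bow(sentence):
    """One-hot bag-of-words vector for a sentence."""
    v = np.zeros(len(vocab))
    for w in sentence.split():
        v[idx[w]] = 1.0
    return v

X = np.stack([bow(s) for s in sentences])      # shape (3, |vocab|)

rng = np.random.default_rng(0)
d = 3                                          # latent dimension
W_enc = rng.normal(0.0, 0.1, (len(vocab), d))  # encoder weights
W_dec = rng.normal(0.0, 0.1, (d, len(vocab)))  # decoder weights

lr, losses = 0.1, []
for _ in range(3000):
    Z = X @ W_enc              # encode: sentence -> latent code
    X_hat = Z @ W_dec          # decode: latent code -> reconstruction
    err = X_hat - X
    losses.append(float(np.mean(err ** 2)))
    # gradient steps for the mean squared reconstruction error
    W_dec -= lr * Z.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The reconstruction loss falls as the latent codes learn to summarize the sentences; a real sentence autoencoder replaces the linear maps with a recurrent encoder-decoder and decodes word by word, which is what makes fluent (or, as reported above, non-fluent) generated English possible.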
Related papers
- Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs)
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, becoming better aligned to the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z)
- Bias and Error Mitigation in Software-Generated Data: An Advanced Search and Optimization Framework Leveraging Generative Code Models [0.0]
This paper proposes an advanced search and optimization framework aimed at generating and choosing optimal source code capable of correcting errors and biases from previous versions.
Applying this framework multiple times on the same software system would incrementally improve the quality of the output results.
arXiv Detail & Related papers (2023-10-17T19:31:05Z)
- Requirements' Characteristics: How do they Impact on Project Budget in a Systems Engineering Context? [3.2872885101161318]
Controlling and assuring the quality of natural language requirements (NLRs) is challenging.
We investigated with the Swedish Transportation Agency (STA) to what extent the characteristics of requirements had an influence on change requests and budget changes in the project.
arXiv Detail & Related papers (2023-10-02T17:53:54Z)
- Modelling Concurrency Bugs Using Machine Learning [0.0]
This project aims to compare both common and recent machine learning approaches.
We define a synthetic dataset that we generate with the scope of simulating real-life (concurrent) programs.
We formulate hypotheses about fundamental limits of various machine learning model types.
arXiv Detail & Related papers (2023-05-08T17:30:24Z)
- Machine Learning with Requirements: a Manifesto [114.97965827971132]
We argue that requirements definition and satisfaction can go a long way to make machine learning models even more fitting to the real world.
We show how the requirements specification can be fruitfully integrated into the standard machine learning development pipeline.
arXiv Detail & Related papers (2023-04-07T14:47:13Z)
- Learnware: Small Models Do Big [69.88234743773113]
The prevailing big model paradigm, which has achieved impressive results in natural language processing and computer vision applications, has not yet addressed these issues, while becoming a serious source of carbon emissions.
This article offers an overview of the learnware paradigm, which attempts to spare users from building machine learning models from scratch, in the hope of reusing small models for purposes even beyond their original ones.
arXiv Detail & Related papers (2022-10-07T15:55:52Z)
- BigIssue: A Realistic Bug Localization Benchmark [89.8240118116093]
BigIssue is a benchmark for realistic bug localization.
We provide a general benchmark with a diversity of real and synthetic Java bugs.
We hope to advance the state of the art in bug localization, in turn improving APR performance and increasing its applicability to the modern development cycle.
arXiv Detail & Related papers (2022-07-21T20:17:53Z)
- CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning [92.36705236706678]
"CodeRL" is a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning.
During inference, we introduce a new generation procedure with a critical sampling strategy.
For the model backbones, we extended the encoder-decoder architecture of CodeT5 with enhanced learning objectives.
arXiv Detail & Related papers (2022-07-05T02:42:15Z)
- Detecting Requirements Smells With Deep Learning: Experiences, Challenges and Future Work [9.44316959798363]
This work aims to improve the previous work by creating a manually labeled dataset and using ensemble learning, Deep Learning (DL), and techniques such as word embeddings and transfer learning to overcome the generalization problem.
The current findings show that the dataset is unbalanced and indicate which classes need more examples.
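For intuition on what a "requirements smell" is, here is a trivial rule-based baseline that flags vague terms, a classic smell in natural language requirements. This is an invented illustration only, not the paper's deep-learning approach; the lexicon and function name are assumptions.

```python
# Hypothetical vague-term lexicon; real smell taxonomies are far richer.
VAGUE_TERMS = {"fast", "user-friendly", "easy", "efficient", "flexible",
               "as appropriate", "if possible"}

def vague_term_smells(requirement: str):
    """Return the vague terms found in a requirement sentence."""
    text = requirement.lower()
    return sorted(t for t in VAGUE_TERMS if t in text)

found = vague_term_smells("The UI shall be fast and user-friendly.")
print(found)  # ['fast', 'user-friendly']
```

The DL techniques the paper mentions (word embeddings, ensembles, transfer learning) aim to generalize beyond such fixed keyword lists.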
arXiv Detail & Related papers (2021-08-06T12:45:15Z)
- Measuring Coding Challenge Competence With APPS [54.22600767666257]
We introduce APPS, a benchmark for code generation.
Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges.
Recent models such as GPT-Neo can pass approximately 15% of the test cases of introductory problems.
arXiv Detail & Related papers (2021-05-20T17:58:42Z)
- Software Effort Estimation using parameter tuned Models [1.9336815376402716]
Imprecise estimation is a major cause of project failure.
The greatest pitfall of the software industry is the fast-changing nature of software development.
We need useful models that accurately predict the cost of developing a software product.
arXiv Detail & Related papers (2020-08-25T15:18:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.