Requirements Traceability: Recovering and Visualizing Traceability Links
Between Requirements and Source Code of Object-oriented Software Systems
- URL: http://arxiv.org/abs/2307.05188v1
- Date: Sun, 9 Jul 2023 11:01:16 GMT
- Title: Requirements Traceability: Recovering and Visualizing Traceability Links
Between Requirements and Source Code of Object-oriented Software Systems
- Authors: Ra'Fat Al-Msie'deen
- Abstract summary: Requirement-to-Code Traceability Links (RtC-TLs) capture the relations between requirement and source code artifacts.
This paper introduces YamenTrace, an automatic approach and implementation to recover and visualize RtC-TLs.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Requirements traceability is an important activity for achieving an
effective requirements management method in requirements engineering.
Requirement-to-Code Traceability Links (RtC-TLs) capture the relations between
requirement and source code artifacts. RtC-TLs can help engineers determine
which parts of the software code implement a specific requirement. In addition,
these links can help engineers maintain a correct mental model of the software
and reduce the risk of code quality degradation when requirements change over
time, especially in large and complex software. However, manually recovering
and maintaining these TLs places an additional burden on engineers and is an
error-prone, tedious, and costly task. This paper introduces YamenTrace, an
automatic approach and implementation to recover and visualize RtC-TLs in
Object-Oriented software based on Latent Semantic Indexing (LSI) and Formal
Concept Analysis (FCA). The originality of YamenTrace is that it exploits all
code identifier names, comments, and relations in the TL recovery process.
YamenTrace uses LSI to measure textual similarity between software code and
requirements, while FCA is employed to cluster similar code and requirements
together. Furthermore, YamenTrace provides a visualization of the recovered
TLs. To validate YamenTrace, it was applied to three case studies. The findings
of this evaluation demonstrate the relevance and performance of the YamenTrace
proposal, as most RtC-TLs were correctly recovered and visualized.
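To make the pipeline described in the abstract concrete, the sketch below is a minimal, illustrative approximation (not the authors' YamenTrace implementation): it builds TF-IDF vectors over requirement texts and over "code documents" assembled from identifier names and comments, projects them into an LSI space via truncated SVD, links each requirement to code elements whose cosine similarity exceeds an assumed threshold, and then groups the linked code elements per requirement as a simple stand-in for the FCA clustering step. All document contents, element names, and the threshold are hypothetical.

```python
# Minimal sketch of LSI-based requirement-to-code similarity plus a simple
# grouping step; corpus contents, names, and threshold are illustrative
# assumptions, not taken from the YamenTrace case studies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical requirement texts and code documents built from the
# identifier names and comments of each class/method.
requirements = {
    "R1": "the system shall allow a user to borrow a book from the library",
    "R2": "the system shall compute late return fees for borrowed books",
}
code_docs = {
    "Loan.borrowBook": "borrow book user loan date library member",
    "Fine.computeFee": "compute late fee return date fine amount",
}

docs = list(requirements.values()) + list(code_docs.values())

# LSI: TF-IDF followed by truncated SVD projects all documents into a
# low-dimensional latent semantic space.
tfidf = TfidfVectorizer().fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

req_vecs, code_vecs = lsi[: len(requirements)], lsi[len(requirements):]
sims = cosine_similarity(req_vecs, code_vecs)

# Keep a link when similarity exceeds an assumed threshold, then group the
# linked code elements under each requirement (a stand-in for building a
# formal context and clustering it with an FCA tool).
THRESHOLD = 0.5  # illustrative value
links = {
    req: {code for j, code in enumerate(code_docs) if sims[i, j] >= THRESHOLD}
    for i, req in enumerate(requirements)
}
for req, elems in links.items():
    print(req, "->", sorted(elems))
```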
Related papers
- Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? [54.667202878390526]
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z) - Natural Language Processing for Requirements Traceability [47.93107382627423]
Traceability plays a crucial role in requirements and software engineering, particularly for safety-critical systems.
Natural language processing (NLP) and related techniques have made considerable progress in the past decade.
arXiv Detail & Related papers (2024-05-17T15:17:00Z) - When Dataflow Analysis Meets Large Language Models [9.458251511218817]
This paper introduces LLMDFA, an LLM-powered dataflow analysis framework that analyzes arbitrary code snippets without requiring a compilation infrastructure.
Inspired by summary-based dataflow analysis, LLMDFA decomposes the problem into three sub-problems, which are effectively resolved by several essential strategies.
Our evaluation shows that the design mitigates hallucination and improves reasoning ability, achieving high precision and recall in detecting dataflow-related bugs.
arXiv Detail & Related papers (2024-02-16T15:21:35Z) - Automating SBOM Generation with Zero-Shot Semantic Similarity [2.169562514302842]
A Software-Bill-of-Materials (SBOM) is a comprehensive inventory detailing a software application's components and dependencies.
We propose an automated method for generating SBOMs to prevent disastrous supply-chain attacks.
Our test results are compelling, demonstrating the model's strong performance in the zero-shot classification task.
arXiv Detail & Related papers (2024-02-03T18:14:13Z) - StepCoder: Improve Code Generation with Reinforcement Learning from
Compiler Feedback [58.20547418182074]
We introduce StepCoder, a novel framework for code generation, consisting of two main components.
CCCS addresses the exploration challenge by breaking the long-sequence code generation task into a Curriculum of Code Completion Subtasks.
FGO optimizes the model by masking the unexecuted code segments, providing Fine-Grained Optimization.
Our method improves the ability to explore the output space and outperforms state-of-the-art approaches in corresponding benchmarks.
arXiv Detail & Related papers (2024-02-02T13:14:31Z) - Synthesizing Efficiently Monitorable Formulas in Metric Temporal Logic [4.60607942851373]
We consider the problem of automatically synthesizing formal specifications from system executions.
Most of the classical approaches for synthesizing temporal logic formulas aim at minimizing the size of the formula.
We formalize this notion and devise a learning algorithm that synthesizes concise formulas having bounded lookahead.
arXiv Detail & Related papers (2023-10-26T14:13:15Z) - Semantic Code Graph -- an information model to facilitate software
comprehension [0.0]
There is an increasing need to accelerate the code comprehension process to facilitate maintenance and reduce associated costs.
While a variety of code structure models already exist, there is a surprising lack of models that closely represent the source code.
We propose the Semantic Code Graph (SCG), an information model that offers a detailed abstract representation of code dependencies.
arXiv Detail & Related papers (2023-10-03T15:09:49Z) - Building Interpretable and Reliable Open Information Retriever for New
Domains Overnight [67.03842581848299]
Information retrieval is a critical component for many downstream tasks such as open-domain question answering (QA).
We propose an information retrieval pipeline that uses entity/event linking model and query decomposition model to focus more accurately on different information units of the query.
We show that, while being more interpretable and reliable, our proposed pipeline significantly improves passage coverages and denotation accuracies across five IR and QA benchmarks.
arXiv Detail & Related papers (2023-08-09T07:47:17Z) - Understanding the Challenges of Deploying Live-Traceability Solutions [45.235173351109374]
SAFA.ai is a startup focusing on fine-tuning project-specific models that deliver automated traceability in a near real-time environment.
This paper describes the challenges that characterize commercializing software traceability and highlights possible future directions.
arXiv Detail & Related papers (2023-06-19T14:34:16Z) - Certified Reinforcement Learning with Logic Guidance [78.2286146954051]
We propose a model-free RL algorithm that enables the use of Linear Temporal Logic (LTL) to formulate a goal for unknown continuous-state/action Markov Decision Processes (MDPs).
The algorithm is guaranteed to synthesise a control policy whose traces satisfy the specification with maximal probability.
arXiv Detail & Related papers (2019-02-02T20:09:32Z)