Software Architecture Recovery with Information Fusion
- URL: http://arxiv.org/abs/2311.04643v1
- Date: Wed, 8 Nov 2023 12:35:37 GMT
- Title: Software Architecture Recovery with Information Fusion
- Authors: Yiran Zhang, Zhengzi Xu, Chengwei Liu, Hongxu Chen, Jianwen Sun, Dong
Qiu, Yang Liu
- Abstract summary: We propose SARIF, a fully automated architecture recovery technique.
It incorporates three types of comprehensive information, including dependencies, code text and folder structure.
SARIF is 36.1% more accurate than the best of the previous techniques on average.
- Score: 14.537490019685384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the architecture is vital for effectively maintaining and
managing large software systems. However, as software systems evolve over time,
their architectures inevitably change. To keep up with the change, architects
need to track the implementation-level changes and update the architectural
documentation accordingly, which is time-consuming and error-prone. Therefore,
many automatic architecture recovery techniques have been proposed to ease this
process. Despite efforts have been made to improve the accuracy of architecture
recovery, existing solutions still suffer from two limitations. First, most of
them only use one or two type of information for the recovery, ignoring the
potential usefulness of other sources. Second, they tend to use the information
in a coarse-grained manner, overlooking important details within it. To address
these limitations, we propose SARIF, a fully automated architecture recovery
technique, which incorporates three types of comprehensive information,
including dependencies, code text and folder structure. SARIF can recover
architecture more accurately by thoroughly analyzing the details of each type
of information and adaptively fusing them based on their relevance and quality.
To evaluate SARIF, we collected six projects with published ground-truth
architectures and three open-source projects labeled by our industrial
collaborators. We compared SARIF with nine state-of-the-art techniques using
three commonly-used architecture similarity metrics and two new metrics. The
experimental results show that SARIF is 36.1% more accurate than the best of
the previous techniques on average. By providing comprehensive architectures,
SARIF can help users understand systems effectively and reduce the manual
effort of obtaining ground-truth architectures.
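The adaptive fusion the abstract describes can be illustrated with a minimal sketch: compute one entity-similarity matrix per information source, then average the matrices weighted by a per-source quality score. All names, matrices, and quality values below are illustrative assumptions, not SARIF's actual data or algorithm.

```python
def fuse(sims, qualities):
    """Average per-source similarity matrices, weighting each source
    by its normalized quality score (weights sum to 1)."""
    total = sum(qualities.values())
    names = list(sims)
    n = len(sims[names[0]])
    fused = [[0.0] * n for _ in range(n)]
    for name in names:
        w = qualities[name] / total
        for i in range(n):
            for j in range(n):
                fused[i][j] += w * sims[name][i][j]
    return fused

# Hypothetical entity-by-entity similarity matrices for three code
# entities, one matrix per information type named in the paper.
sims = {
    "dependency": [[1.0, 0.8, 0.1], [0.8, 1.0, 0.2], [0.1, 0.2, 1.0]],
    "code_text":  [[1.0, 0.6, 0.3], [0.6, 1.0, 0.1], [0.3, 0.1, 1.0]],
    "folder":     [[1.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
}
# Hypothetical per-source quality scores (e.g. coverage or confidence).
qualities = {"dependency": 0.5, "code_text": 0.3, "folder": 0.2}

fused = fuse(sims, qualities)
# Entities 0 and 1 are now mutually more similar than either is to
# entity 2, so a downstream clustering step would group them together.
```

In a full pipeline, the fused matrix would feed a clustering step that groups entities into architectural modules; the key idea sketched here is only the quality-weighted combination of heterogeneous sources.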
Related papers
- EM-DARTS: Hierarchical Differentiable Architecture Search for Eye Movement Recognition [54.99121380536659]
Eye movement biometrics have received increasing attention thanks to their highly secure identification.
Deep learning (DL) models have recently been successfully applied to eye movement recognition.
However, the DL architecture is still determined by human prior knowledge.
We propose EM-DARTS, a hierarchical differentiable architecture search algorithm to automatically design the DL architecture for eye movement recognition.
arXiv Detail & Related papers (2024-09-22T13:11:08Z) - Towards Living Software Architecture Diagrams [0.0]
We propose a tool that generates architectural diagrams for a software system by analyzing its software artifacts and unifying them into a comprehensive system representation.
This representation can be manually modified while ensuring that changes are reintegrated into the diagram when it is regenerated.
arXiv Detail & Related papers (2024-07-25T12:31:52Z) - How Do Users Revise Architectural Related Questions on Stack Overflow: An Empirical Study [6.723917667784222]
We conducted an empirical study to understand how users revise Architecture Related Questions (ARQs) on Stack Overflow (SO).
Our main findings are:
The revision of ARQs is not prevalent in SO, and an ARQ revision starts soon after the question is posted.
Both Question Creators (QCs) and non-QCs actively participate in ARQ revisions.
arXiv Detail & Related papers (2024-06-27T07:42:49Z) - Serving Deep Learning Model in Relational Databases [70.53282490832189]
Serving deep learning (DL) models on relational data has become a critical requirement across diverse commercial and scientific domains.
We highlight three pivotal paradigms: The state-of-the-art DL-centric architecture offloads DL computations to dedicated DL frameworks.
The potential UDF-centric architecture encapsulates one or more tensor computations into User Defined Functions (UDFs) within the relational database management system (RDBMS).
arXiv Detail & Related papers (2023-10-07T06:01:35Z) - Heterogeneous Continual Learning [88.53038822561197]
We propose a novel framework to tackle the continual learning (CL) problem with changing network architectures.
We build on top of the distillation family of techniques and modify it to a new setting where a weaker model takes the role of a teacher.
We also propose Quick Deep Inversion (QDI) to recover prior task visual features to support knowledge transfer.
arXiv Detail & Related papers (2023-06-14T15:54:42Z) - Visual Analysis of Neural Architecture Spaces for Summarizing Design
Principles [22.66053583920441]
ArchExplorer is a visual analysis method for understanding a neural architecture space and summarizing design principles.
A circle-packing-based architecture visualization has been developed to convey both the global relationships between clusters and local neighborhoods of the architectures in each cluster.
Two case studies and a post-analysis are presented to demonstrate the effectiveness of ArchExplorer in summarizing design principles and selecting better-performing architectures.
arXiv Detail & Related papers (2022-08-20T12:15:59Z) - Rethinking Architecture Selection in Differentiable NAS [74.61723678821049]
Differentiable Neural Architecture Search is one of the most popular NAS methods for its search efficiency and simplicity.
We propose an alternative perturbation-based architecture selection that directly measures each operation's influence on the supernet.
We find that several failure modes of DARTS can be greatly alleviated with the proposed selection method.
arXiv Detail & Related papers (2021-08-10T00:53:39Z) - Evolving Neural Architecture Using One Shot Model [5.188825486231326]
We propose a novel way of applying a simple genetic algorithm to the NAS problem called EvNAS (Evolving Neural Architecture using One Shot Model)
EvNAS searches for the architecture on the proxy dataset, i.e., CIFAR-10, for 4.4 GPU days on a single GPU and achieves a top-1 test error of 2.47%.
Results show the potential of evolutionary methods in solving the architecture search problem.
arXiv Detail & Related papers (2020-12-23T08:40:53Z) - Does Unsupervised Architecture Representation Learning Help Neural
Architecture Search? [22.63641173256389]
Existing Neural Architecture Search (NAS) methods either encode neural architectures using discrete encodings that do not scale well, or adopt supervised learning-based methods to jointly learn architecture representations and optimize architecture search on such representations which incurs search bias.
We observe that the structural properties of neural architectures are hard to preserve in the latent space if architecture representation learning and search are coupled, resulting in less effective search performance.
arXiv Detail & Related papers (2020-06-12T04:15:34Z) - Stage-Wise Neural Architecture Search [65.03109178056937]
Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications.
These networks consist of stages, which are sets of layers that operate on representations in the same resolution.
It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network.
However, the resulting architecture becomes computationally expensive in terms of floating point operations, memory requirements and inference time.
arXiv Detail & Related papers (2020-04-23T14:16:39Z) - RC-DARTS: Resource Constrained Differentiable Architecture Search [162.7199952019152]
We propose the resource constrained differentiable architecture search (RC-DARTS) method to learn architectures that are significantly smaller and faster.
We show that the RC-DARTS method learns lightweight neural architectures which have smaller model size and lower computational complexity.
arXiv Detail & Related papers (2019-12-30T05:02:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.