From Expert to Novice: An Empirical Study on Software Architecture Explanations
- URL: http://arxiv.org/abs/2503.08628v1
- Date: Tue, 11 Mar 2025 17:16:03 GMT
- Title: From Expert to Novice: An Empirical Study on Software Architecture Explanations
- Authors: Satrio Adi Rukmono, Filip Zamfirov, Lina Ochoa, Michel Chaudron
- Abstract summary: Existing documentation often falls short due to issues like incompleteness and ambiguity. This study investigates what constitutes a good explanation of software architecture through an empirical study. It addresses five key areas: relevant architectural concerns, explanation plans, supporting artefacts, typical questions, and expectations.
- Score: 2.886678815326728
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The sharing of knowledge about software architecture is crucial in software development, particularly during the onboarding of new developers. However, existing documentation often falls short due to issues like incompleteness and ambiguity. Consequently, oral explanations are used for knowledge transfer. This study investigates what constitutes a good explanation of software architecture through an empirical study. It aims to explore how software architecture explanations are conducted, identify the main challenges, and suggest improvements. It addresses five key areas: relevant architectural concerns, explanation plans, supporting artefacts, typical questions, and expectations. An exploratory field study was conducted using semi-structured interviews with 17 software professionals, including 9 architecture explainers and 8 explainees. The study discovers that an explanation must balance both problem and technical domains while considering the explainee's role, experience, and the goal of the explanation. The concept of the explanation window, which adjusts the level of detail and scope, is introduced to address these variables. We also extend the Twin Peaks model to guide the interplay between problem and solution domains during architectural explanations by adding an emphasis to the context surrounding both domains. Future research should focus on developing better tools and processes to support architecture explanations.
Related papers
- Beyond Technocratic XAI: The Who, What & How in Explanation Design [35.987280553106565]
In practice, generating meaningful explanations is a context-dependent task.
This paper reframes explanation as a situated design process.
We propose a three-part framework for explanation design in XAI.
arXiv Detail & Related papers (2025-08-12T08:17:26Z)
- A Survey on (M)LLM-Based GUI Agents [62.57899977018417]
Graphical User Interface (GUI) Agents have emerged as a transformative paradigm in human-computer interaction.
Recent advances in large language models and multimodal learning have revolutionized GUI automation across desktop, mobile, and web platforms.
This survey identifies key technical challenges, including accurate element localization, effective knowledge retrieval, long-horizon planning, and safety-aware execution control.
arXiv Detail & Related papers (2025-03-27T17:58:31Z)
- Semi-Automated Design of Data-Intensive Architectures [49.1574468325115]
This paper introduces a development methodology for data-intensive architectures.
It guides architects in (i) designing a suitable architecture for their specific application scenario, and (ii) selecting an appropriate set of concrete systems to implement the application.
We show that the description languages we adopt can capture the key aspects of data-intensive architectures proposed by researchers and practitioners.
arXiv Detail & Related papers (2025-03-21T16:01:11Z)
- Neurosymbolic Architectural Reasoning: Towards Formal Analysis through Neural Software Architecture Inference [4.023600998747813]
We outline neural architecture inference to solve the problem of obtaining a formal architecture definition for subsequent symbolic reasoning over these architectures.
We discuss how this approach works in general and outline a research agenda based on six general research questions that need to be addressed.
arXiv Detail & Related papers (2025-03-20T15:56:54Z)
- Explainers' Mental Representations of Explainees' Needs in Everyday Explanations [0.0]
In explanations, explainers have mental representations of explainees' developing knowledge and shifting interests regarding the explanandum.
XAI should be able to react to explainees' needs in a similar manner.
This study investigated explainers' mental representations in everyday explanations of technological artifacts.
arXiv Detail & Related papers (2024-11-13T10:53:07Z)
- SOK-Bench: A Situated Video Reasoning Benchmark with Aligned Open-World Knowledge [60.76719375410635]
We propose a new benchmark (SOK-Bench) consisting of 44K questions and 10K situations with instance-level annotations depicted in the videos.
The reasoning process is required to understand and apply situated knowledge and general knowledge for problem-solving.
We generate associated question-answer pairs and reasoning processes, finally followed by manual reviews for quality assurance.
arXiv Detail & Related papers (2024-05-15T21:55:31Z)
- Towards a Framework for Evaluating Explanations in Automated Fact Verification [12.904145308839997]
As deep neural models in NLP become more complex, the necessity to interpret them becomes greater.
A burgeoning interest has emerged in rationalizing explanations to provide short and coherent justifications for predictions.
We advocate for a formal framework for key concepts and properties of rationalizing explanations to support their systematic evaluation.
arXiv Detail & Related papers (2024-03-29T17:50:28Z)
- Architecture Knowledge Representation and Communication Industry Survey [0.0]
We aim to understand current practice in architecture knowledge, and to explore how sustainability can be addressed in software architecture in the future.
We used a survey, with a questionnaire containing 34 questions, and collected responses from 45 architects working at a prominent bank in the Netherlands.
arXiv Detail & Related papers (2023-09-20T18:17:16Z)
- Adding Why to What? Analyses of an Everyday Explanation [0.0]
We investigated 20 game explanations using the theory as an analytical framework.
We found that explainers focused first on the physical aspects of the game (Architecture) and only later on aspects of its Relevance.
Shifting between the two sides was justified by explanation goals, emerging misunderstandings, and the knowledge needs of the explainee.
arXiv Detail & Related papers (2023-08-08T11:17:22Z)
- PyRCA: A Library for Metric-based Root Cause Analysis [66.72542200701807]
PyRCA is an open-source machine learning library for Root Cause Analysis (RCA) in Artificial Intelligence for IT Operations (AIOps).
It provides a holistic framework to uncover complicated metric causal dependencies and automatically locate the root causes of incidents.
arXiv Detail & Related papers (2023-06-20T09:55:10Z)
- A Study of Documentation for Software Architecture [7.011803832284996]
We asked 65 participants to answer software architecture understanding questions.
Answers to questions that require applying and creating activities were statistically significantly associated with the use of the system's source code.
We conclude that, in the limited experimental context studied, our results contradict the hypothesis that the format of architectural documentation matters.
arXiv Detail & Related papers (2023-05-26T22:14:53Z)
- Mining Architectural Information: A Systematic Mapping Study [7.3755596064775215]
There is a lack of clarity on what literature on mining architectural information is available.
We aim to identify, analyze, and synthesize the literature on mining architectural information.
arXiv Detail & Related papers (2022-12-26T14:57:38Z)
- A Methodology and Software Architecture to Support Explainability-by-Design [0.0]
This paper describes Explainability-by-Design, a holistic methodology characterised by proactive measures to include explanation capability in the design of decision-making systems.
The methodology consists of three phases: (A) Explanation Requirement Analysis, (B) Explanation Technical Design, and (C) Explanation Validation.
The approach was shown to be tractable in terms of development time, which can be as low as two hours per sentence.
arXiv Detail & Related papers (2022-06-13T15:34:29Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Towards an Explanation Space to Align Humans and Explainable-AI Teamwork [0.0]
This paper proposes a formative architecture that defines the explanation space from a user-inspired perspective.
The architecture comprises five intertwined components to outline explanation requirements for a task.
We present the Abstracted Explanation Space, a modeling tool that aggregates the architecture's components to support designers.
arXiv Detail & Related papers (2021-06-02T23:17:29Z)
- Understanding Deep Architectures with Reasoning Layer [60.90906477693774]
We show that properties of the algorithm layers, such as convergence, stability, and sensitivity, are intimately related to the approximation and generalization abilities of the end-to-end model.
Our theory can provide useful guidelines for designing deep architectures with reasoning layers.
arXiv Detail & Related papers (2020-06-24T00:26:35Z)
- Survey on Visual Sentiment Analysis [87.20223213370004]
This paper reviews pertinent publications and aims to present an exhaustive overview of the field of Visual Sentiment Analysis.
The paper also describes principles for the design of general Visual Sentiment Analysis systems from three main points of view.
A formalization of the problem is discussed, considering different levels of granularity, as well as the components that can affect the sentiment toward an image in different ways.
arXiv Detail & Related papers (2020-04-24T10:15:22Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.