How Do Users Revise Architectural Related Questions on Stack Overflow: An Empirical Study
- URL: http://arxiv.org/abs/2406.18959v1
- Date: Thu, 27 Jun 2024 07:42:49 GMT
- Title: How Do Users Revise Architectural Related Questions on Stack Overflow: An Empirical Study
- Authors: Musengamana Jean de Dieu, Peng Liang, Mojtaba Shahin, Arif Ali Khan,
- Abstract summary: We conducted an empirical study to understand how users revise Architecture Related Questions (ARQs) on Stack Overflow (SO)
Our main findings are that:
The revision of ARQs is not prevalent in SO, and an ARQ revision starts soon after this question is posted.
Both Question Creators (QCs) and non-QCs actively participate in ARQ revisions.
- Score: 6.723917667784222
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Technical Questions and Answers (Q&A) sites, such as Stack Overflow (SO), accumulate a significant variety of information related to software development in posts from users. To ensure the quality of this information, SO encourages its users to review posts through various mechanisms (e.g., question and answer revision processes). Although Architecture Related Posts (ARPs) communicate architectural information that has a system-wide impact on development, little is known about how SO users revise information shared in ARPs. To fill this gap, we conducted an empirical study to understand how users revise Architecture Related Questions (ARQs) on SO. We manually checked 13,205 ARPs and finally identified 4,114 ARQs that contain revision information. Our main findings are that: (1) The revision of ARQs is not prevalent in SO, and an ARQ revision starts soon after the question is posted (i.e., from 1 minute onward). Moreover, the revision of an ARQ occurs both before and after the question receives its first answer/architecture solution, with most revisions beginning before the first architecture solution is posted. Both Question Creators (QCs) and non-QCs actively participate in ARQ revisions, with most revisions being made by QCs. (2) A variety of information (14 categories) is missing and further provided in ARQs after being posted, among which design context, component dependency, and architecture concern are the dominant categories. (3) Clarifying the understanding of the architecture under design and improving the readability of the architecture problem are the two major purposes of the further provided information in ARQs. (4) The further provided information in ARQs has several impacts on the quality of answers/architecture solutions, including making architecture solutions useful, informative, and relevant, among others.
Related papers
- How Do OSS Developers Utilize Architectural Solutions from Q&A Sites: An Empirical Study [5.568316292260523]
Developers utilize programming-related knowledge (e.g., code snippets) on Q&A sites (e.g., Stack Overflow), but architectural solutions (e.g., architecture tactics) and their utilization are rarely explored.
For the mining study, we mined 984 commits and issues (i.e., 821 commits and 163 issues) from 893 Open-Source Software (OSS) projects on GitHub.
For the survey study, we surveyed 227 of these developers to further understand how practitioners utilize architectural solutions from Q&A sites in their OSS development.
arXiv Detail & Related papers (2024-04-07T18:53:30Z) - Software Architecture Recovery with Information Fusion [14.537490019685384]
We propose SARIF, a fully automated architecture recovery technique.
It incorporates three complementary types of information: dependencies, code text, and folder structure.
SARIF is 36.1% more accurate than the best of the previous techniques on average.
arXiv Detail & Related papers (2023-11-08T12:35:37Z) - RealTime QA: What's the Answer Right Now? [137.04039209995932]
We introduce REALTIME QA, a dynamic question answering (QA) platform that announces questions and evaluates systems on a regular basis.
We build strong baseline models upon large pretrained language models, including GPT-3 and T5.
GPT-3 tends to return outdated answers when retrieved documents do not provide sufficient information to find an answer.
arXiv Detail & Related papers (2022-07-27T07:26:01Z) - Multifaceted Improvements for Conversational Open-Domain Question Answering [54.913313912927045]
We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA)
First, the proposed KL-divergence based regularization leads to better question understanding for retrieval and answer reading.
Second, the added post-ranker module can push more relevant passages to the top placements to be selected by the reader, under a two-aspect constraint.
Third, the well-designed curriculum learning strategy effectively narrows the gap between the golden-passage settings of training and inference, and encourages the reader to find the true answer without golden-passage assistance.
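The summary mentions a KL-divergence based regularizer but not its exact formulation. As a generic, hedged sketch (the function names, reference distribution, and weight below are illustrative assumptions, not the paper's actual objective), a KL penalty between a model's distribution and a reference distribution can be added to a task loss like this:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions given as probability lists.

    A small eps guards against log(0) when a probability is zero.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def regularized_loss(task_loss, model_dist, ref_dist, weight=0.1):
    """Task loss plus a weighted KL penalty pulling model_dist toward ref_dist."""
    return task_loss + weight * kl_divergence(model_dist, ref_dist)
```

When the model's distribution matches the reference, the penalty is (near) zero; the weight trades off task fit against staying close to the reference.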
arXiv Detail & Related papers (2022-04-01T07:54:27Z) - OpenQA: Hybrid QA System Relying on Structured Knowledge Base as well as Non-structured Data [15.585969737147892]
We propose an intelligent question-answering system based on structured KB and unstructured data, called OpenQA.
We integrate structured question answering (KBQA) based on semantic parsing and deep representation learning, and two-stage unstructured question answering based on retrieval and neural machine reading comprehension, into OpenQA.
arXiv Detail & Related papers (2021-12-31T09:15:39Z) - SYGMA: System for Generalizable Modular Question Answering Over Knowledge Bases [57.89642289610301]
We present SYGMA, a modular approach facilitating generalizability across multiple knowledge bases and multiple reasoning types.
We demonstrate the effectiveness of our system by evaluating on datasets belonging to two distinct knowledge bases, DBpedia and Wikidata.
arXiv Detail & Related papers (2021-09-28T01:57:56Z) - NoiseQA: Challenge Set Evaluation for User-Centric Question Answering [68.67783808426292]
We show that components in the pipeline that precede an answering engine can introduce varied and considerable sources of error.
We conclude that there is substantial room for progress before QA systems can be effectively deployed.
arXiv Detail & Related papers (2021-02-16T18:35:29Z) - Retrieving and Reading: A Comprehensive Survey on Open-domain Question Answering [62.88322725956294]
We review the latest research trends in OpenQA, with particular attention to systems that incorporate neural MRC techniques.
We introduce the modern OpenQA architecture named "Retriever-Reader" and analyze the various systems that follow this architecture.
We then discuss key challenges to developing OpenQA systems and offer an analysis of benchmarks that are commonly used.
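The Retriever-Reader architecture the survey names can be illustrated with a deliberately tiny sketch (the passages, overlap scoring, and trivial "reader" below are toy stand-ins, not any surveyed system): a retriever scores candidate passages against the question, and a reader then produces an answer from the top-ranked passage.

```python
from collections import Counter

# Toy passage store; a real system would index a large corpus.
PASSAGES = [
    "Stack Overflow is a question and answer site for programmers.",
    "DBpedia extracts structured content from Wikipedia.",
    "The retriever selects passages; the reader extracts the answer span.",
]

def retrieve(question, passages, k=1):
    """Retriever: rank passages by word overlap with the question, return top-k."""
    q_terms = Counter(question.lower().split())
    def score(passage):
        return sum(q_terms[w] for w in passage.lower().split())
    return sorted(passages, key=score, reverse=True)[:k]

def read(question, passage):
    """Toy 'reader': a real reader would extract an answer span from the passage."""
    return passage

question = "What does the retriever do in a Retriever-Reader system?"
top = retrieve(question, PASSAGES, k=1)
answer_context = read(question, top[0])
```

Real systems replace the overlap score with sparse or dense retrieval and the reader with a neural MRC model, but the two-stage shape is the same.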
arXiv Detail & Related papers (2021-01-04T04:47:46Z) - A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.