The Influence of Cognitive Biases on Architectural Technical Debt
- URL: http://arxiv.org/abs/2309.14175v1
- Date: Mon, 25 Sep 2023 14:37:38 GMT
- Title: The Influence of Cognitive Biases on Architectural Technical Debt
- Authors: Klara Borowa, Andrzej Zalewski, Szymon Kijas
- Abstract summary: The results show which classes of architectural technical debt originate from cognitive biases.
We identify a set of debiasing techniques that can be used to prevent the negative influence of cognitive biases.
- Score: 0.9208007322096533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cognitive biases exert a significant influence on human thinking and
decision-making. To identify how they influence the occurrence of architectural
technical debt, we performed a series of semi-structured interviews with
software architects. The results show which classes of architectural technical
debt originate from cognitive biases, and reveal the antecedents of technical
debt items (classes) through biases. In this way, we analysed how and when
cognitive biases lead to the creation of technical debt. We also identified a
set of debiasing techniques that can be used to prevent the negative influence
of cognitive biases. Our observations on the role of organisational culture in
avoiding inadvertent technical debt shed new light on that issue.
Related papers
- Debiasing Architectural Decision-Making: An Experiment With Students and Practitioners [2.9767565026354195]
The aim of this study was to design and evaluate a debiasing workshop with individuals at various stages of their professional careers.
We found that the workshop had a more substantial impact on practitioners.
We assume that the practitioners' attachment to their systems may be the cause of their susceptibility to biases.
arXiv Detail & Related papers (2025-02-06T12:12:53Z)
- Exploring the Advances in Using Machine Learning to Identify Technical Debt and Self-Admitted Technical Debt [0.0]
This study reflects on the current research landscape of machine learning methods for detecting technical debt and self-admitted technical debt in software projects.
We performed a literature review of studies published up to 2024 that discuss technical debt and self-admitted technical debt identification using machine learning.
Our findings reveal the utilization of a diverse range of machine learning techniques, with BERT models proving significantly more effective than others.
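To make the BERT-based detection concrete, here is a minimal sketch, assuming a HuggingFace-style setup with a generic bert-base-uncased checkpoint and two made-up example comments; it illustrates the general approach rather than any of the surveyed papers' exact pipelines.
```python
# Minimal sketch (not a surveyed paper's setup): a BERT-style encoder used as a
# binary classifier for self-admitted technical debt (SATD) in code comments.
# The model name, labels, and example comments are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

comments = [
    "TODO: this is a temporary hack, refactor once the new API lands",  # SATD
    "Compute the checksum for the given buffer",                        # not SATD
]
labels = torch.tensor([1, 0])

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tokenizer(comments, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)   # outputs.loss would drive fine-tuning
preds = outputs.logits.argmax(dim=-1)     # 1 = SATD, 0 = regular comment
print(outputs.loss.item(), preds.tolist())
```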
arXiv Detail & Related papers (2024-09-06T23:58:10Z)
- The Importance of Cognitive Biases in the Recommendation Ecosystem [8.267786874280848]
We argue that cognitive biases also manifest in different parts of the recommendation ecosystem and at different stages of the recommendation process.
We provide empirical evidence that biases such as feature-positive effect, Ikea effect, and cultural homophily can be observed in various components of the recommendation pipeline.
We advocate for a prejudice-free consideration of cognitive biases to improve user and item models as well as recommendation algorithms.
arXiv Detail & Related papers (2024-08-22T15:33:46Z)
- Blind Spots and Biases: Exploring the Role of Annotator Cognitive Biases in NLP [3.9287497907611875]
There is growing concern over the potential of NLP systems to exacerbate existing biases and societal disparities.
This issue has prompted widespread attention from academia, policymakers, industry, and civil society.
Our research focuses on reviewing existing methodologies and ongoing investigations aimed at understanding annotation attributes that contribute to bias.
arXiv Detail & Related papers (2024-04-29T19:28:35Z)
- Instructed to Bias: Instruction-Tuned Language Models Exhibit Emergent Cognitive Bias [57.42417061979399]
Recent studies show that instruction tuning (IT) and reinforcement learning from human feedback (RLHF) improve the abilities of large language models (LMs) dramatically.
In this work, we investigate the effect of IT and RLHF on decision making and reasoning in LMs.
Our findings highlight the presence of these biases in various models from the GPT-3, Mistral, and T5 families.
arXiv Detail & Related papers (2023-08-01T01:39:25Z)
- Causal Discovery with Language Models as Imperfect Experts [119.22928856942292]
We consider how expert knowledge can be used to improve the data-driven identification of causal graphs.
We propose strategies for amending such expert knowledge based on consistency properties.
We report a case study, on real data, where a large language model is used as an imperfect expert.
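As a rough illustration of what amending expert knowledge via consistency properties can look like, the sketch below accepts an expert's (e.g. an LLM's) proposed edge orientation only if it keeps the graph acyclic; the skeleton, votes, and variable names are hypothetical, and this is not the paper's actual algorithm.
```python
# Illustrative sketch only: reject expert answers that violate acyclicity.
import networkx as nx

skeleton = [("a", "b"), ("b", "c"), ("c", "a")]          # undirected edges from data (hypothetical)
expert_votes = {("a", "b"): "->", ("b", "c"): "->", ("c", "a"): "->"}  # hypothetical LLM answers

g = nx.DiGraph()
undecided = []
for (x, y) in skeleton:
    src, dst = (x, y) if expert_votes[(x, y)] == "->" else (y, x)
    g.add_edge(src, dst)
    if not nx.is_directed_acyclic_graph(g):   # consistency property: no directed cycles
        g.remove_edge(src, dst)               # discard the inconsistent expert answer
        undecided.append((x, y))

print(sorted(g.edges()), undecided)           # [('a','b'), ('b','c')], [('c','a')]
```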
arXiv Detail & Related papers (2023-07-05T16:01:38Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Kernel Based Cognitive Architecture for Autonomous Agents [91.3755431537592]
This paper considers an evolutionary approach to creating a cognitive functionality.
We consider a cognitive architecture that ensures the evolution of the agent based on a solution to the Symbol Emergence Problem.
arXiv Detail & Related papers (2022-07-02T12:41:32Z)
- Modeling Human Behavior Part II -- Cognitive approaches and Uncertainty [0.0]
In Part I, we discussed methods which generate a model of behavior from exploration of the system and feedback based on the exhibited behavior.
In this work, we will continue the discussion from the perspective of methods which focus on the assumed cognitive abilities, limitations, and biases demonstrated in human reasoning.
arXiv Detail & Related papers (2022-05-13T07:29:15Z)
- Learning "What-if" Explanations for Sequential Decision-Making [92.8311073739295]
Building interpretable parameterizations of real-world decision-making on the basis of demonstrated behavior is essential.
We propose learning explanations of expert decisions by modeling their reward function in terms of preferences with respect to "what if" outcomes.
We highlight the effectiveness of our batch, counterfactual inverse reinforcement learning approach in recovering accurate and interpretable descriptions of behavior.
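A toy sketch of the underlying idea, under the assumption of a linear reward over outcome features and a Bradley-Terry-style preference between the demonstrated outcome and a single "what if" alternative; the data and weights below are synthetic, and this is not the authors' batch counterfactual inverse reinforcement learning algorithm.
```python
# Toy sketch: recover reward weights w so that demonstrated outcomes are
# preferred to counterfactual ("what if") outcomes of actions not taken.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])                 # hidden preferences (illustrative)

chosen = rng.normal(size=(200, 2))             # features of the demonstrated outcome
counterfactual = rng.normal(size=(200, 2))     # features of one "what if" outcome
swap = (chosen - counterfactual) @ w_true < 0  # make 'chosen' the preferred option
chosen[swap], counterfactual[swap] = counterfactual[swap], chosen[swap]

diff = chosen - counterfactual
w = np.zeros(2)
for _ in range(500):                           # gradient ascent on the preference log-likelihood
    p = 1.0 / (1.0 + np.exp(-diff @ w))        # P(chosen preferred | w)
    w += 0.1 * diff.T @ (1.0 - p) / len(diff)

print(w / np.linalg.norm(w), w_true / np.linalg.norm(w_true))  # directions roughly agree
```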
arXiv Detail & Related papers (2020-07-02T14:24:17Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
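A minimal sketch of one common way a knowledge base can guide a neural network, assuming a single made-up rule ("cat implies animal") turned into a soft penalty on the network's output probabilities; it illustrates the general neuro-symbolic idea, not the specific architectures proposed in the paper.
```python
# Hand-wavy sketch: a symbolic rule added as a soft penalty to a network loss.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([[2.0, -1.0]])     # network outputs for labels [cat, animal] (made up)
targets = np.array([[1.0, 1.0]])     # ground truth: a cat is also an animal

p = sigmoid(logits)
bce = -np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p))

# KB rule cat -> animal: penalise probability mass where 'cat' holds but 'animal' does not.
rule_violation = np.mean(p[:, 0] * (1.0 - p[:, 1]))
loss = bce + 0.5 * rule_violation    # 0.5 is an arbitrary rule weight
print(loss)
```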
arXiv Detail & Related papers (2020-03-09T15:04:07Z)