The Influence of Cognitive Biases on Architectural Technical Debt
- URL: http://arxiv.org/abs/2309.14175v1
- Date: Mon, 25 Sep 2023 14:37:38 GMT
- Title: The Influence of Cognitive Biases on Architectural Technical Debt
- Authors: Klara Borowa, Andrzej Zalewski, Szymon Kijas
- Abstract summary: The results show which classes of architectural technical debt originate from cognitive biases.
We identify a set of debiasing techniques that can be used to prevent the negative influence of cognitive biases.
- Score: 0.9208007322096533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cognitive biases exert a significant influence on human thinking and
decision-making. In order to identify how they influence the occurrence of
architectural technical debt, a series of semi-structured interviews with
software architects was performed. The results show which classes of
architectural technical debt originate from cognitive biases, and reveal the
antecedents of technical debt items (classes) through biases. This way, we
analysed how and when cognitive biases lead to the creation of technical debt.
We also identified a set of debiasing techniques that can be used to prevent
the negative influence of cognitive biases. Our observations on the role of
organisational culture in avoiding inadvertent technical debt shed new light
on this issue.
Related papers
- Dynamic Programming Techniques for Enhancing Cognitive Representation in Knowledge Tracing [125.75923987618977]
We propose the Cognitive Representation Dynamic Programming based Knowledge Tracing (CRDP-KT) model. The model applies a dynamic programming algorithm to optimize cognitive representations based on the difficulty of the questions and the performance intervals between them. This provides more accurate and systematic input features for subsequent model training, thereby minimizing distortion in the simulation of cognitive states.
arXiv Detail & Related papers (2025-06-03T14:44:48Z) - Two Experts Are All You Need for Steering Thinking: Reinforcing Cognitive Effort in MoE Reasoning Models Without Additional Training [86.70255651945602]
We introduce a novel inference-time steering methodology called Reinforcing Cognitive Experts (RICE). RICE aims to improve reasoning performance without additional training or complex heuristics. Empirical evaluations with leading MoE-based LRMs demonstrate noticeable and consistent improvements in reasoning accuracy, cognitive efficiency, and cross-domain generalization.
arXiv Detail & Related papers (2025-05-20T17:59:16Z) - Cognitive Debiasing Large Language Models for Decision-Making [71.2409973056137]
Large language models (LLMs) have shown potential in supporting decision-making applications.
We propose a cognitive debiasing approach, called self-debiasing, that enhances the reliability of LLMs.
Our method follows three sequential steps -- bias determination, bias analysis, and cognitive debiasing -- to iteratively mitigate potential cognitive biases in prompts.
arXiv Detail & Related papers (2025-04-05T11:23:05Z) - Bridging Social Psychology and LLM Reasoning: Conflict-Aware Meta-Review Generation via Cognitive Alignment [35.82355113500509]
Large language models (LLMs) show promise in automating manuscript critiques.
Existing methods fail to handle conflicting viewpoints within differing opinions.
We propose the Cognitive Alignment Framework (CAF), a dual-process architecture that transforms LLMs into adaptive scientific arbitrators.
arXiv Detail & Related papers (2025-03-18T04:13:11Z) - Debiasing Architectural Decision-Making: An Experiment With Students and Practitioners [2.9767565026354195]
The aim of this study was to design and evaluate a debiasing workshop with individuals at various stages of their professional careers.
We found that the workshop had a more substantial impact on practitioners than on students.
We assume that the practitioners' attachment to their systems may be the cause of their susceptibility to biases.
arXiv Detail & Related papers (2025-02-06T12:12:53Z) - Exploring the Advances in Using Machine Learning to Identify Technical Debt and Self-Admitted Technical Debt [0.0]
This study seeks to provide a reflection on the current research landscape employing machine learning methods for detecting technical debt and self-admitted technical debt in software projects.
We performed a literature review of studies published up to 2024 that discuss technical debt and self-admitted technical debt identification using machine learning.
Our findings reveal the utilization of a diverse range of machine learning techniques, with BERT models proving significantly more effective than others.
arXiv Detail & Related papers (2024-09-06T23:58:10Z) - The Importance of Cognitive Biases in the Recommendation Ecosystem [8.267786874280848]
We argue that cognitive biases also manifest in different parts of the recommendation ecosystem and at different stages of the recommendation process.
We provide empirical evidence that biases such as feature-positive effect, Ikea effect, and cultural homophily can be observed in various components of the recommendation pipeline.
We advocate for a prejudice-free consideration of cognitive biases to improve user and item models as well as recommendation algorithms.
arXiv Detail & Related papers (2024-08-22T15:33:46Z) - Blind Spots and Biases: Exploring the Role of Annotator Cognitive Biases in NLP [3.9287497907611875]
There is growing concern over the potential of NLP systems to exacerbate existing biases and societal disparities.
This issue has prompted widespread attention from academia, policymakers, industry, and civil society.
Our research focuses on reviewing existing methodologies and ongoing investigations aimed at understanding annotation attributes that contribute to bias.
arXiv Detail & Related papers (2024-04-29T19:28:35Z) - Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning [0.0]
We review current and emerging knowledge-informed and brain-inspired cognitive systems for realizing adversarial defenses.
Brain-inspired cognition methods use computational models that mimic the human mind to enhance intelligent behavior in artificial agents and autonomous robots.
arXiv Detail & Related papers (2024-03-11T18:11:00Z) - Instructed to Bias: Instruction-Tuned Language Models Exhibit Emergent Cognitive Bias [57.42417061979399]
Recent studies show that instruction tuning (IT) and reinforcement learning from human feedback (RLHF) improve the abilities of large language models (LMs) dramatically.
In this work, we investigate the effect of IT and RLHF on decision making and reasoning in LMs.
Our findings highlight the presence of these biases in various models from the GPT-3, Mistral, and T5 families.
arXiv Detail & Related papers (2023-08-01T01:39:25Z) - Causal Discovery with Language Models as Imperfect Experts [119.22928856942292]
We consider how expert knowledge can be used to improve the data-driven identification of causal graphs.
We propose strategies for amending such expert knowledge based on consistency properties.
We report a case study, on real data, where a large language model is used as an imperfect expert.
arXiv Detail & Related papers (2023-07-05T16:01:38Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Kernel Based Cognitive Architecture for Autonomous Agents [91.3755431537592]
This paper considers an evolutionary approach to creating cognitive functionality.
We consider a cognitive architecture which ensures the evolution of the agent on the basis of a solution to the Symbol Emergence Problem.
arXiv Detail & Related papers (2022-07-02T12:41:32Z) - Modeling Human Behavior Part II -- Cognitive approaches and Uncertainty [0.0]
In Part I, we discussed methods which generate a model of behavior from exploration of the system and feedback based on the exhibited behavior.
In this work, we will continue the discussion from the perspective of methods which focus on the assumed cognitive abilities, limitations, and biases demonstrated in human reasoning.
arXiv Detail & Related papers (2022-05-13T07:29:15Z) - On Heuristic Models, Assumptions, and Parameters [0.6445605125467574]
We argue that there is an underappreciated family of obscure and opaque technical caveats, choices, and qualifiers.
We describe three specific classes of such objects: models, assumptions, and parameters.
arXiv Detail & Related papers (2022-01-19T04:32:11Z) - Learning "What-if" Explanations for Sequential Decision-Making [92.8311073739295]
Building interpretable parameterizations of real-world decision-making on the basis of demonstrated behavior is essential.
We propose learning explanations of expert decisions by modeling their reward function in terms of preferences with respect to "what if" outcomes.
We highlight the effectiveness of our batch, counterfactual inverse reinforcement learning approach in recovering accurate and interpretable descriptions of behavior.
arXiv Detail & Related papers (2020-07-02T14:24:17Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.