Understanding the Building Blocks of Accountability in Software
Engineering
- URL: http://arxiv.org/abs/2402.01926v1
- Date: Fri, 2 Feb 2024 21:53:35 GMT
- Title: Understanding the Building Blocks of Accountability in Software
Engineering
- Authors: Adam Alami and Neil Ernst
- Abstract summary: We investigate the factors that foster software engineers' individual accountability within their teams.
Our findings recognize two primary forms of accountability shaping software engineers' individual sense of accountability: institutionalized and grassroots.
- Score: 3.521765725717803
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In the social and organizational sciences, accountability has been linked to
the efficient operation of organizations. However, it has received limited
attention in software engineering (SE) research, in spite of its central role
in the most popular software development methods (e.g., Scrum). In this
article, we explore the mechanisms of accountability in SE environments. We
investigate the factors that foster software engineers' individual
accountability within their teams through an interview study with 12 people.
Our findings recognize two primary forms of accountability shaping software
engineers' individual sense of accountability: institutionalized and
grassroots. While the former is directed by formal processes and mechanisms,
like performance reviews, grassroots accountability arises organically within
teams, driven by factors such as peers' expectations and intrinsic motivation.
This organic form cultivates a shared sense of collective responsibility,
emanating from shared team standards and individual engineers' inner commitment
to their personal and professional values and self-set standards. While
institutionalized accountability relies on traditional "carrot and stick"
approaches, such as financial incentives or denial of promotions, grassroots
accountability operates on reciprocity with peers and intrinsic motivations,
like maintaining one's reputation in the team.
Related papers
- Impostor Phenomenon as Human Debt: A Challenge to the Future of Software Engineering [46.44607910934403]
The Impostor Phenomenon (IP) impacts a significant portion of the Software Engineering workforce. Similar to technical debt, Human Debt accumulates due to gaps in psychological safety and inclusive support within socio-technical ecosystems.
arXiv Detail & Related papers (2026-02-14T13:26:38Z) - Fundamentals of Building Autonomous LLM Agents [64.39018305018904]
This paper reviews the architecture and implementation methods of agents powered by large language models (LLMs). The research aims to explore patterns to develop "agentic" LLMs that can automate complex tasks and bridge the performance gap with human capabilities.
arXiv Detail & Related papers (2025-10-10T10:32:39Z) - Accountability Framework for Healthcare AI Systems: Towards Joint Accountability in Decision Making [1.9774267722954466]
This paper bridges the gap between the "what" and "how" of AI accountability, specifically for AI systems in healthcare. We do this by analysing the concept of accountability, formulating an accountability framework, and providing a three-tier structure for handling various accountability mechanisms.
arXiv Detail & Related papers (2025-09-03T13:05:29Z) - Truly Self-Improving Agents Require Intrinsic Metacognitive Learning [59.60803539959191]
Self-improving agents aim to continuously acquire new capabilities with minimal supervision. Current approaches face several key limitations: their self-improvement processes are often rigid, fail to generalize across task domains, and struggle to scale with increasing agent capabilities. We argue that effective self-improvement requires intrinsic metacognitive learning, defined as an agent's intrinsic ability to actively evaluate, reflect on, and adapt its own learning processes.
arXiv Detail & Related papers (2025-06-05T14:53:35Z) - Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems.
Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z) - Accountability in Code Review: The Role of Intrinsic Drivers and the Impact of LLMs [6.841710924733614]
Key intrinsic drivers of accountability for code quality are personal standards, professional integrity, pride in code quality, and maintaining one's reputation.
The introduction of AI into software engineering must preserve social integrity and collective accountability mechanisms.
arXiv Detail & Related papers (2025-02-21T21:52:29Z) - ReVISE: Learning to Refine at Test-Time via Intrinsic Self-Verification [53.80183105328448]
Refine via Intrinsic Self-Verification (ReVISE) is an efficient framework that enables LLMs to self-correct their outputs through self-verification.
Our experiments on various reasoning tasks demonstrate that ReVISE achieves efficient self-correction and significantly improves reasoning performance.
arXiv Detail & Related papers (2025-02-20T13:50:02Z) - Responsible Generative AI Use by Product Managers: Recoupling Ethical Principles and Practices [0.657029444008632]
Generative AI (genAI) has rapidly become integrated into workplaces.
In this paper, we examine how product managers implement responsible practices in their day-to-day work when using genAI.
arXiv Detail & Related papers (2025-01-27T22:10:27Z) - Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - Generative AI in the Software Engineering Domain: Tensions of Occupational Identity and Patterns of Identity Protection [4.268049805078337]
We build on the theoretical lenses of occupational identity and self-determination theory to understand how and why software engineers make sense of generative artificial intelligence (GAI).
We find that engineers' sense-making is contingent on domain expertise, as juniors and seniors felt their needs for competence, autonomy, and relatedness to be differently impacted by GAI.
We propose design guidelines on how organizations and system designers can facilitate the impact of technological change on workers' occupational identity.
arXiv Detail & Related papers (2024-10-04T16:20:39Z) - Software Fairness Debt [0.5249805590164902]
This paper focuses on exploring the multifaceted nature of bias in software systems.
We identify the primary causes of fairness deficiency in software development and highlight their adverse effects on individuals and communities.
Our study contributes to a deeper understanding of fairness in software engineering and paves the way for the development of more equitable and socially responsible software systems.
arXiv Detail & Related papers (2024-05-03T21:45:48Z) - An Actionable Framework for Understanding and Improving Talent Retention
as a Competitive Advantage in IT Organizations [44.342141516382284]
This work presents an actionable framework for Talent Retention (TR) used in IT organizations.
Our framework encompasses a set of factors, contextual characteristics, barriers, strategies, and coping mechanisms.
Our findings indicated that software engineers can be differentiated from other professional groups.
arXiv Detail & Related papers (2024-02-02T17:08:14Z) - Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs [60.244412212130264]
Causal-Consistency Chain-of-Thought harnesses multi-agent collaboration to bolster the faithfulness and causality of foundation models.
Our framework demonstrates significant superiority over state-of-the-art methods through extensive and comprehensive evaluations.
arXiv Detail & Related papers (2023-08-23T04:59:21Z) - Understanding Self-Efficacy in the Context of Software Engineering: A
Qualitative Study in the Industry [2.268415020650315]
Self-efficacy is a concept researched in various areas of knowledge that impacts various factors such as performance, satisfaction, and motivation.
This study aims to understand the impact on the software development context with a focus on understanding the behavioral signs of self-efficacy.
arXiv Detail & Related papers (2023-05-26T17:16:37Z) - Autonomous Open-Ended Learning of Tasks with Non-Stationary
Interdependencies [64.0476282000118]
Intrinsic motivations have proven to generate a task-agnostic signal to properly allocate the training time amongst goals.
While the majority of works in the field of intrinsically motivated open-ended learning focus on scenarios where goals are independent from each other, only a few have studied the autonomous acquisition of interdependent tasks.
In particular, we first deepen the analysis of a previous system, showing the importance of incorporating information about the relationships between tasks at a higher level of the architecture.
Then we introduce H-GRAIL, a new system that extends the previous one by adding a new learning layer to store the autonomously acquired sequences.
arXiv Detail & Related papers (2022-05-16T10:43:01Z) - The Good Shepherd: An Oracle Agent for Mechanism Design [6.226991885861965]
We propose an algorithm for constructing agents that perform well when evaluated over the learning trajectory of their adaptive co-players.
Our results show that our mechanisms are able to shepherd the participants' strategies towards favorable outcomes.
arXiv Detail & Related papers (2022-02-21T11:28:09Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable
Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.