Opacity as a Feature, Not a Flaw: The LoBOX Governance Ethic for Role-Sensitive Explainability and Institutional Trust in AI
- URL: http://arxiv.org/abs/2505.20304v1
- Date: Sun, 18 May 2025 16:59:45 GMT
- Title: Opacity as a Feature, Not a Flaw: The LoBOX Governance Ethic for Role-Sensitive Explainability and Institutional Trust in AI
- Authors: Francisco Herrera, Reyes Calderón
- Abstract summary: This paper introduces LoBOX (Lack of Belief: Opacity & eXplainability), a governance ethic for managing artificial intelligence (AI) opacity. Rather than treating opacity as a design flaw, LoBOX defines it as a condition that can be ethically governed through role-calibrated explanation and institutional accountability.
- Score: 9.696149761543573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces LoBOX (Lack of Belief: Opacity \& eXplainability), a structured governance-ethic framework for managing artificial intelligence (AI) opacity when full transparency is infeasible. Rather than treating opacity as a design flaw, LoBOX defines it as a condition that can be ethically governed through role-calibrated explanation and institutional accountability. The framework comprises a three-stage pathway: reduce accidental opacity, bound irreducible opacity, and delegate trust through structured oversight. Integrating the RED/BLUE XAI model for stakeholder-sensitive explanation and aligned with emerging legal instruments such as the EU AI Act, LoBOX offers a scalable and context-aware alternative to transparency-centric approaches. Trust is reframed not as a function of complete system explainability, but as an outcome of institutional credibility, structured justification, and stakeholder-responsive accountability. A governance loop cycles back to keep LoBOX responsive to evolving technological contexts and stakeholder expectations, completing the opacity-governance cycle. We move from transparency ideals to ethical governance, emphasizing that trustworthiness in AI must be institutionally grounded and contextually justified. We also discuss how cultural and institutional trust varies across contexts. This theoretical framework positions opacity not as a flaw but as a feature that must be actively governed to ensure responsible AI systems.
Related papers
- Ethical AI: Towards Defining a Collective Evaluation Framework [0.3413711585591077]
Artificial Intelligence (AI) is transforming sectors such as healthcare, finance, and autonomous systems. Yet its rapid integration raises urgent ethical concerns related to data ownership, privacy, and systemic bias. This article proposes a modular ethical assessment framework built on ontological blocks of meaning-discrete, interpretable units.
arXiv Detail & Related papers (2025-05-30T21:10:47Z)
- Artificial Intelligence in Government: Why People Feel They Lose Control [44.99833362998488]
The use of Artificial Intelligence in public administration is expanding rapidly. While AI promises greater efficiency and responsiveness, its integration into government functions raises concerns about fairness, transparency, and accountability. This article applies principal-agent theory to AI adoption as a special case of delegation.
arXiv Detail & Related papers (2025-05-02T07:46:41Z)
- All You Need for Counterfactual Explainability Is Principled and Reliable Estimate of Aleatoric and Epistemic Uncertainty [27.344785490275864]
We argue that transparency research overlooks many foundational concepts of artificial intelligence. Inherently transparent models can benefit from human-centred explanatory insights. At a higher level, integrating artificial intelligence fundamentals into transparency research promises to yield more reliable, robust and understandable predictive models.
arXiv Detail & Related papers (2025-02-24T09:38:31Z)
- AI and the Transformation of Accountability and Discretion in Urban Governance [1.9152655229960793]
The study synthesizes insights to propose guiding principles for responsible AI integration in decision-making processes. The analysis argues that AI does not simply restrict or enhance discretion but redistributes it across institutional levels. It may simultaneously strengthen managerial oversight, enhance decision-making consistency, and improve operational efficiency.
arXiv Detail & Related papers (2025-02-18T18:11:39Z)
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical need to address bias in the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Accountability in Offline Reinforcement Learning: Explaining Decisions with a Corpus of Examples [70.84093873437425]
This paper introduces the Accountable Offline Controller (AOC) that employs the offline dataset as the Decision Corpus.
AOC operates effectively in low-data scenarios, can be extended to the strictly offline imitation setting, and displays qualities of both conservation and adaptability.
We assess AOC's performance in both simulated and real-world healthcare scenarios, emphasizing its capability to manage offline control tasks with high levels of performance while maintaining accountability.
arXiv Detail & Related papers (2023-10-11T17:20:32Z)
- Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act? [0.8287206589886881]
The European Union has introduced detailed transparency requirements for AI systems.
There is a fundamental difference between XAI and the Act regarding what transparency is.
By comparing the disparate views of XAI and regulation, we arrive at four axes where practical work could bridge the transparency gap.
arXiv Detail & Related papers (2023-02-21T16:06:48Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Dimensions of Transparency in NLP Applications [64.16277166331298]
Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested that a trade-off exists between greater system transparency and user confusion.
arXiv Detail & Related papers (2021-01-02T11:46:17Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.