Vulnerabilities that arise from poor governance in Distributed Ledger Technologies
- URL: http://arxiv.org/abs/2409.15947v2
- Date: Tue, 13 May 2025 16:40:59 GMT
- Title: Vulnerabilities that arise from poor governance in Distributed Ledger Technologies
- Authors: Aida Manzano Kharman, William Sanders,
- Abstract summary: Distributed Ledger Technologies (DLTs) promise decentralization, transparency, and security, yet the reality often falls short due to fundamental governance flaws. This paper surveys the state of DLT governance, identifies critical vulnerabilities, and highlights the absence of universally accepted best practices for good governance.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Distributed Ledger Technologies (DLTs) promise decentralization, transparency, and security, yet the reality often falls short due to fundamental governance flaws. Poorly designed governance frameworks leave these systems vulnerable to coercion, vote-buying, centralization of power, and malicious protocol exploits: threats that undermine the very principles of fairness and equity these technologies seek to uphold. This paper surveys the state of DLT governance, identifies critical vulnerabilities, and highlights the absence of universally accepted best practices for good governance. By bridging insights from cryptography, social choice theory, and e-voting systems, we not only present a comprehensive taxonomy of governance properties essential for safeguarding DLTs but also point to technical solutions that can deliver these properties in practice. This work underscores the urgent need for robust, transparent, and enforceable governance mechanisms. Ensuring good governance is not merely a technical necessity but a societal imperative to protect the public interest, maintain trust, and realize the transformative potential of DLTs for social good.
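The abstract draws on cryptography and e-voting to secure DLT governance. As a purely illustrative sketch (not taken from the paper), the commit-reveal pattern below shows one basic building block of verifiable on-chain voting: ballots can be published in committed form and later opened and checked by anyone, so tallying is auditable without trusting a central counter. All names here are hypothetical.

```python
import hashlib
import secrets

def commit(vote: str) -> tuple[str, str]:
    """Commit phase: hash the vote with a random nonce so the
    ballot can be published without revealing its content."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{vote}:{nonce}".encode()).hexdigest()
    return digest, nonce

def reveal_ok(digest: str, vote: str, nonce: str) -> bool:
    """Reveal phase: anyone can verify that the opened vote and
    nonce match the earlier public commitment."""
    return hashlib.sha256(f"{vote}:{nonce}".encode()).hexdigest() == digest

# Usage: publish `digest` first, reveal `vote` and `nonce` later.
digest, nonce = commit("yes")
honest = reveal_ok(digest, "yes", nonce)
forged = reveal_ok(digest, "no", nonce)
```

Note that plain commit-reveal alone does not stop vote-buying (a voter can show the nonce to a buyer); the coercion-resistance properties the paper discusses require stronger e-voting constructions on top of this primitive.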
Related papers
- Limits of Safe AI Deployment: Differentiating Oversight and Control [0.0]
Oversight and control (collectively, supervision) are often invoked as key levers for ensuring that AI systems are accountable, reliable, and able to fulfill governance and management requirements. The concepts are frequently conflated or insufficiently distinguished in academic and policy discourse, undermining efforts to design or evaluate systems that should remain under meaningful human supervision. This paper proposes a theoretically-informed yet policy-grounded framework that articulates the conditions under which each mechanism is possible, where they fall short, and what is required to make them meaningful in practice.
arXiv Detail & Related papers (2025-07-04T12:22:35Z)
- LLM Agents Should Employ Security Principles [60.03651084139836]
This paper argues that the well-established design principles in information security should be employed when deploying Large Language Model (LLM) agents at scale. We introduce AgentSandbox, a conceptual framework embedding these security principles to provide safeguards throughout an agent's life-cycle.
arXiv Detail & Related papers (2025-05-29T21:39:08Z)
- Watermarking Without Standards Is Not AI Governance [46.71493672772134]
We argue that current implementations risk serving as symbolic compliance rather than delivering effective oversight. We propose a three-layer framework encompassing technical standards, audit infrastructure, and enforcement mechanisms.
arXiv Detail & Related papers (2025-05-27T18:10:04Z)
- Beyond Explainability: The Case for AI Validation [0.0]
We argue for a shift toward validation as a central regulatory pillar. Validation, ensuring the reliability, consistency, and robustness of AI outputs, offers a more practical, scalable, and risk-sensitive alternative to explainability. We propose a forward-looking policy framework centered on pre- and post-deployment validation, third-party auditing, harmonized standards, and liability incentives.
arXiv Detail & Related papers (2025-05-27T06:42:41Z)
- Artificial Intelligence in Government: Why People Feel They Lose Control [44.99833362998488]
The use of Artificial Intelligence in public administration is expanding rapidly. While AI promises greater efficiency and responsiveness, its integration into government functions raises concerns about fairness, transparency, and accountability. This article applies principal-agent theory to AI adoption as a special case of delegation.
arXiv Detail & Related papers (2025-05-02T07:46:41Z)
- Decentralized Vulnerability Disclosure via Permissioned Blockchain: A Secure, Transparent Alternative to Centralized CVE Management [0.0]
This paper proposes a decentralized, blockchain-based system for the publication of Common Vulnerabilities and Exposures (CVEs).
The proposed architecture leverages a permissioned blockchain, wherein only authenticated CVE Numbering Authorities (CNAs) are authorized to submit entries.
We evaluate the proposed model in comparison with existing practices, highlighting its advantages in transparency, trust decentralization, and auditability.
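To make the architecture concrete, the sketch below is a deliberately simplified (and hypothetical, not from the paper) version of the idea: only allow-listed CNAs may append CVE entries to a hash-chained log, and anyone can audit the chain by recomputing every link. The CNA identifiers and data layout are illustrative assumptions.

```python
import hashlib
import json

# Hypothetical allow-list standing in for authenticated CNAs.
AUTHORIZED_CNAS = {"cna:mitre", "cna:vendor-x"}

class CveLedger:
    """A toy hash-chained log approximating a permissioned ledger."""

    def __init__(self):
        self.chain = []  # each block records the previous block's hash

    def submit(self, cna_id: str, cve_id: str, description: str) -> dict:
        # Permissioning: reject submitters outside the allow-list.
        if cna_id not in AUTHORIZED_CNAS:
            raise PermissionError(f"{cna_id} is not an authorized CNA")
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        payload = {"cna": cna_id, "cve": cve_id,
                   "desc": description, "prev": prev_hash}
        block_hash = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        block = {**payload, "hash": block_hash}
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Auditability: recompute every link in the hash chain."""
        prev = "0" * 64
        for b in self.chain:
            payload = {k: b[k] for k in ("cna", "cve", "desc", "prev")}
            digest = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()).hexdigest()
            if b["prev"] != prev or digest != b["hash"]:
                return False
            prev = b["hash"]
        return True
```

A real deployment would replace the allow-list with certificate-based CNA authentication and the in-memory list with replicated consensus, but the transparency and auditability properties come from the same chained-hash structure.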
arXiv Detail & Related papers (2025-05-01T12:12:08Z)
- Towards Responsible Governing AI Proliferation [0.0]
The paper introduces the 'Proliferation' paradigm, which anticipates the rise of smaller, decentralized, open-sourced AI models.
It posits that these developments are probable and will introduce both benefits and novel risks.
arXiv Detail & Related papers (2024-12-18T13:10:35Z)
- Protocol Learning, Decentralized Frontier Risk and the No-Off Problem [56.74434512241989]
We identify a third paradigm - Protocol Learning - where models are trained across decentralized networks of incentivized participants. This approach has the potential to aggregate orders of magnitude more computational resources than any single centralized entity. It also introduces novel challenges: heterogeneous and unreliable nodes, malicious participants, the need for unextractable models to preserve incentives, and complex governance dynamics.
arXiv Detail & Related papers (2024-12-10T19:53:50Z)
- Certified Safe: A Schematic for Approval Regulation of Frontier AI [0.0]
An approval regulation scheme is one in which a firm cannot legally market, or in some cases develop, a product without explicit approval from a regulator.
This report proposes an approval regulation schematic for only the largest AI projects in which scrutiny begins before training and continues through to post-deployment monitoring.
arXiv Detail & Related papers (2024-08-12T15:01:03Z)
- Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- Decentralized Credential Status Management: A Paradigm Shift in Digital Trust [0.0]
Public key infrastructures are essential for Internet security, ensuring robust certificate management and revocation mechanisms.
The transition from centralized to decentralized systems presents challenges such as trust distribution and privacy-preserving credential management.
This paper explores the evolution of certificate status management from centralized to decentralized frameworks, focusing on blockchain technology and advanced cryptography.
arXiv Detail & Related papers (2024-06-17T13:17:56Z)
- Generative AI Needs Adaptive Governance [0.0]
Generative AI challenges the notions of governance, trust, and human agency.
This paper argues that generative AI calls for adaptive governance.
We outline actors, roles, as well as both shared and actor-specific policy activities.
arXiv Detail & Related papers (2024-06-06T23:47:14Z)
- A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus leaves in-vehicle communications inherently non-secure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
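The survey compares specific protocols, but the common pattern underlying many CAN authentication schemes can be sketched independently: append a truncated keyed MAC (with a freshness counter against replay) to each frame. The sketch below is illustrative only and is not any particular surveyed protocol; the key, counter layout, and MAC length are assumptions chosen to fit CAN's small frame sizes.

```python
import hmac
import hashlib
import struct

KEY = b"\x00" * 16   # placeholder shared key; real schemes manage keys per-ECU
MAC_LEN = 4          # classic CAN frames carry only 8 data bytes, so MACs are truncated

def authenticate(can_id: int, data: bytes, counter: int) -> bytes:
    """Compute a truncated HMAC over the CAN ID, a monotonic
    freshness counter (replay protection), and the payload."""
    msg = struct.pack(">IQ", can_id, counter) + data
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:MAC_LEN]

def verify(can_id: int, data: bytes, counter: int, tag: bytes) -> bool:
    """Receiver recomputes the tag; constant-time comparison
    avoids timing side channels."""
    return hmac.compare_digest(authenticate(can_id, data, counter), tag)

# A frame tagged with counter 7 verifies; a replay with a stale
# counter or altered payload does not.
tag = authenticate(0x123, b"\x01\x02", 7)
```

The operational criteria the survey evaluates (bus load overhead, counter synchronization, key distribution) are exactly the parts this sketch leaves out.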
arXiv Detail & Related papers (2024-01-19T14:52:04Z)
- Exploring the Relevance of Data Privacy-Enhancing Technologies for AI Governance Use Cases [1.5293427903448022]
It is useful to view different AI governance objectives as a system of information flows.
The importance of interoperability between these different AI governance solutions becomes clear.
arXiv Detail & Related papers (2023-03-15T21:56:59Z)
- Certification of Iterative Predictions in Bayesian Neural Networks [79.15007746660211]
We compute lower bounds for the probability that trajectories of the BNN model reach a given set of states while avoiding a set of unsafe states.
We use the lower bounds in the context of control and reinforcement learning to provide safety certification for given control policies.
arXiv Detail & Related papers (2021-05-21T05:23:57Z)
- Enforcing robust control guarantees within neural network policies [76.00287474159973]
We propose a generic nonlinear control policy class, parameterized by neural networks, that enforces the same provable robustness criteria as robust control.
We demonstrate the power of this approach on several domains, improving in average-case performance over existing robust control methods and in worst-case stability over (non-robust) deep RL methods.
arXiv Detail & Related papers (2020-11-16T17:14:59Z)
- Regulation conform DLT-operable payment adapter based on trustless - justified trust combined generalized state channels [77.34726150561087]
Economy of Things (EoT) will be based on software agents running on peer-to-peer trustless networks.
We give an overview of current solutions that differ in their fundamental values and technological possibilities.
We propose to combine the strengths of the crypto based, decentralized trustless elements with established and well regulated means of payment.
arXiv Detail & Related papers (2020-07-03T10:45:55Z)
- Graph Neural Networks for Decentralized Controllers [171.6642679604005]
Dynamical systems comprised of autonomous agents arise in many relevant problems such as robotics, smart grids, or smart cities.
Optimal centralized controllers are readily available but face limitations in terms of scalability and practical implementation.
We propose a framework using graph neural networks (GNNs) to learn decentralized controllers from data.
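The defining constraint of such a framework is the information structure: each agent may act only on its own state and its graph neighbors' states. The toy sketch below (hypothetical, not the paper's architecture) mimics one round of the local aggregation a GNN-based controller respects; the graph, states, and weights are illustrative assumptions.

```python
# A 3-agent line graph: 0 -- 1 -- 2
adjacency = {0: [1], 1: [0, 2], 2: [1]}
states = {0: 1.0, 1: 0.0, 2: -1.0}

def local_action(agent: int, w_self: float = 0.5, w_neigh: float = 0.25) -> float:
    """Compute an action from locally available information only:
    the agent's own state plus an aggregation over its neighbors.
    A GNN controller learns these weights instead of fixing them."""
    neigh_sum = sum(states[j] for j in adjacency[agent])
    return w_self * states[agent] + w_neigh * neigh_sum

actions = {i: local_action(i) for i in adjacency}
```

Because no agent reads the global state, the resulting policy scales with network size and can be executed without the centralized communication that limits optimal centralized controllers.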
arXiv Detail & Related papers (2020-03-23T13:51:18Z)
- Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations [88.94162416324505]
A deep reinforcement learning (DRL) agent observes its states through observations, which may contain natural measurement errors or adversarial noises.
Since the observations deviate from the true states, they can mislead the agent into making suboptimal actions.
We show that naively applying existing techniques on improving robustness for classification tasks, like adversarial training, is ineffective for many RL tasks.
arXiv Detail & Related papers (2020-03-19T17:59:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.