Traceable, Enforceable, and Compensable Participation: A Participation Ledger for People-Centered AI Governance
- URL: http://arxiv.org/abs/2602.10916v1
- Date: Wed, 11 Feb 2026 14:53:58 GMT
- Title: Traceable, Enforceable, and Compensable Participation: A Participation Ledger for People-Centered AI Governance
- Authors: Rashid Mushkani
- Abstract summary: We introduce the Participation Ledger, a framework that operationalizes participation as traceable influence, enforceable authority, and compensable labor. The ledger represents participation as an influence graph that links contributed artifacts to verified changes in AI systems. It integrates three elements: a Participation Evidence Standard documenting consent, privacy, compensation, and reuse terms; an influence tracing mechanism that connects system updates to replayable before-and-after tests, enabling longitudinal monitoring of commitments; and encoded rights and incentives.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Participatory approaches are widely invoked in AI governance, yet participation rarely translates into durable influence. In public-sector and civic AI systems, community contributions such as deliberations, annotations, prompts, and incident reports are often recorded informally, weakly linked to system updates, and disconnected from enforceable rights or sustained compensation. As a result, participation is frequently symbolic rather than accountable. We introduce the Participation Ledger, a machine-readable and auditable framework that operationalizes participation as traceable influence, enforceable authority, and compensable labor. The ledger represents participation as an influence graph that links contributed artifacts to verified changes in AI systems, including datasets, prompts, adapters, policies, guardrails, and evaluation suites. It integrates three elements: a Participation Evidence Standard documenting consent, privacy, compensation, and reuse terms; an influence tracing mechanism that connects system updates to replayable before-and-after tests, enabling longitudinal monitoring of commitments; and encoded rights and incentives. Capability Vouchers allow authorized community stewards to request or constrain specific system capabilities within defined boundaries, while Participation Credits support ongoing recognition and compensation when contributed tests continue to provide value. We ground the framework in four urban AI and public space governance deployments and provide a machine-readable schema, templates, and an evaluation plan for assessing traceability, enforceability, and compensation in practice.
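The ledger's core mechanics, an influence graph linking contributed artifacts to system changes, replayable tests, and credit accrual, can be sketched in a few dozen lines. The class and field names below are illustrative assumptions, not the paper's actual machine-readable schema.

```python
# Illustrative sketch of a Participation Ledger: an influence graph linking
# contributed artifacts to verified system changes, with replayable tests.
# All names here (Evidence, Change, ParticipationLedger, etc.) are assumptions,
# not the schema released with the paper.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Evidence:
    contributor_id: str   # who contributed the artifact
    artifact_id: str      # e.g. an annotation, prompt, or incident report
    consent: bool         # Participation Evidence Standard fields (abridged)
    reuse_terms: str

@dataclass
class Change:
    change_id: str
    target: str           # e.g. "dataset", "prompt", "guardrail", "eval_suite"

@dataclass
class Edge:
    artifact_id: str
    change_id: str
    test: Callable[[], bool]  # replayable before/after check

class ParticipationLedger:
    def __init__(self) -> None:
        self.evidence: Dict[str, Evidence] = {}
        self.changes: Dict[str, Change] = {}
        self.edges: List[Edge] = []

    def record(self, ev: Evidence, ch: Change,
               test: Callable[[], bool]) -> None:
        """Link a contributed artifact to a system change via a replayable test."""
        self.evidence[ev.artifact_id] = ev
        self.changes[ch.change_id] = ch
        self.edges.append(Edge(ev.artifact_id, ch.change_id, test))

    def replay(self) -> Dict[str, bool]:
        """Re-run every linked test: longitudinal monitoring of commitments."""
        return {e.change_id: e.test() for e in self.edges}

    def credits(self) -> Dict[str, int]:
        """Count still-passing edges per contributor (a stand-in for
        Participation Credits accruing while tests continue to provide value)."""
        totals: Dict[str, int] = {}
        for e in self.edges:
            if e.test():
                c = self.evidence[e.artifact_id].contributor_id
                totals[c] = totals.get(c, 0) + 1
        return totals

# Usage: a guardrail change traced back to a community incident report.
ledger = ParticipationLedger()
guardrail = {"blocked_terms": ["surveillance"]}  # toy system state
ledger.record(
    Evidence("steward-1", "incident-42", consent=True, reuse_terms="CC-BY-4.0"),
    Change("chg-7", target="guardrail"),
    test=lambda: "surveillance" in guardrail["blocked_terms"],
)
print(ledger.replay())   # {'chg-7': True}
print(ledger.credits())  # {'steward-1': 1}
```

If a later update removed the blocked term, `replay()` would flag the broken commitment and `credits()` would stop accruing for that edge, which is the auditing behavior the abstract describes.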
Related papers
- Agents of Chaos [50.53354213047402]
We report an exploratory red-teaming study of autonomous language-model-powered agents deployed in a live laboratory environment. Twenty AI researchers interacted with the agents under benign and adversarial conditions. Our findings establish the existence of security-, privacy-, and governance-relevant vulnerabilities in realistic deployment settings.
arXiv Detail & Related papers (2026-02-23T16:28:48Z)
- Preventing the Collapse of Peer Review Requires Verification-First AI [49.995126139461085]
We propose truth-coupling, i.e., how tightly venue scores track latent scientific truth. We formalize two forces that drive a phase transition toward proxy-sovereign evaluation.
arXiv Detail & Related papers (2026-01-23T17:17:32Z)
- A Unifying Human-Centered AI Fairness Framework [2.9385229328767988]
We introduce a unifying human-centered fairness framework that covers eight distinct fairness metrics. Rather than privileging a single fairness notion, the framework enables stakeholders to assign weights across multiple fairness objectives. We show that adjusting weights reveals nuanced trade-offs between different fairness metrics.
arXiv Detail & Related papers (2025-12-07T17:52:38Z)
- Measuring What Matters: The AI Pluralism Index [0.0]
We present the AI Pluralism Index (AIPI), a transparent, evidence-based instrument that evaluates producers and system families across four pillars: participatory governance, inclusivity and diversity, transparency, and accountability. The index aims to steer incentives toward pluralistic practice and to equip policymakers, procurers, and the public with comparable evidence.
arXiv Detail & Related papers (2025-10-09T13:19:34Z)
- Partial Identification Approach to Counterfactual Fairness Assessment [50.88100567472179]
We introduce a Bayesian approach to bound unknown counterfactual fairness measures with high confidence. Our results reveal a positive (spurious) effect on the COMPAS score when changing race to African-American (from all others) and a negative (direct causal) effect when transitioning from young to old age.
arXiv Detail & Related papers (2025-09-30T18:35:08Z)
- Beyond Explainability: The Case for AI Validation [0.0]
We argue for a shift toward validation as a central regulatory pillar. Validation, ensuring the reliability, consistency, and robustness of AI outputs, offers a more practical, scalable, and risk-sensitive alternative to explainability. We propose a forward-looking policy framework centered on pre- and post-deployment validation, third-party auditing, harmonized standards, and liability incentives.
arXiv Detail & Related papers (2025-05-27T06:42:41Z)
- Position: The AI Conference Peer Review Crisis Demands Author Feedback and Reviewer Rewards [2.8239108914343305]
This paper argues for the need to transform the traditional one-way review system into a bi-directional feedback loop. Authors evaluate review quality and reviewers earn formal accreditation, creating an accountability framework.
arXiv Detail & Related papers (2025-05-08T05:51:48Z)
- Towards Fair RAG: On the Impact of Fair Ranking in Retrieval-Augmented Generation [53.285436927963865]
This paper is the first systematic evaluation of RAG systems that integrate fairness-aware rankings. We show that incorporating fairness-aware retrieval often maintains or even enhances both ranking quality and generation quality.
arXiv Detail & Related papers (2024-09-17T23:10:04Z)
- Ask-AC: An Initiative Advisor-in-the-Loop Actor-Critic Framework [41.04606578479283]
We introduce a novel initiative advisor-in-the-loop actor-critic framework, termed Ask-AC.
At the heart of Ask-AC are two complementary components, namely action requester and adaptive state selector.
Experimental results on both stationary and non-stationary environments demonstrate that the proposed framework significantly improves the learning efficiency of the agent.
arXiv Detail & Related papers (2022-07-05T10:58:11Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
- A relationship and not a thing: A relational approach to algorithmic accountability and assessment documentation [3.4438724671481755]
We argue that developers largely have a monopoly on information about how their systems actually work.
We argue that robust accountability regimes must establish opportunities for publics to cohere around shared experiences and interests.
arXiv Detail & Related papers (2022-03-02T23:22:03Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.