Urban AI Governance Must Embed Legal Reasonableness for Democratic and Sustainable Cities
- URL: http://arxiv.org/abs/2508.12174v1
- Date: Sat, 16 Aug 2025 22:39:04 GMT
- Title: Urban AI Governance Must Embed Legal Reasonableness for Democratic and Sustainable Cities
- Authors: Rashid Mushkani
- Abstract summary: This paper introduces the Urban Reasonableness Layer (URL), a conceptual framework that adapts the legal "reasonable person" standard for supervisory oversight in municipal AI systems. We argue that embedding this standard in municipal AI systems is essential for democratic and sustainable urban governance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This position paper argues that embedding the legal "reasonable person" standard in municipal AI systems is essential for democratic and sustainable urban governance. As cities increasingly deploy artificial intelligence (AI) systems, concerns around equity, accountability, and normative legitimacy are growing. This paper introduces the Urban Reasonableness Layer (URL), a conceptual framework that adapts the legal "reasonable person" standard for supervisory oversight in municipal AI systems, including potential future implementations of Artificial General Intelligence (AGI). Drawing on historical analogies, scenario mapping, and participatory norm-setting, we explore how legal and community-derived standards can inform AI decision-making in urban contexts. Rather than prescribing a fixed solution, the URL is proposed as an exploratory architecture for negotiating contested values, aligning automation with democratic processes, and interrogating the limits of technical alignment. Our key contributions include: (1) articulating the conceptual and operational architecture of the URL; (2) specifying participatory mechanisms for dynamic normative threshold-setting; (3) presenting a comparative scenario analysis of governance trajectories; and (4) outlining evaluation metrics and limitations. This work contributes to ongoing debates on urban AI governance by foregrounding pluralism, contestability, and the inherently political nature of socio-technical systems.
Related papers
- Pluralism in AI Governance: Toward Sociotechnical Alignment and Normative Coherence [0.16921396880325779]
The study synthesises frameworks including Full-Stack Alignment, Thick Models of Value, Value Sensitive Design, and Public Constitutional AI. It introduces a layered framework linking values, mechanisms, and strategies, and maps tensions such as fairness versus efficiency, transparency versus security, and privacy versus equity. The study contributes a holistic, value-sensitive model of AI governance, reframing regulation as a proactive mechanism for embedding public values into sociotechnical systems.
arXiv Detail & Related papers (2026-02-04T14:28:56Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- Deciding how to respond: A deliberative framework to guide policymaker responses to AI systems [0.0]
We argue that by operationalising the concept of freedom, a complementary approach can be developed. The resulting framework is structured around coordinative, communicative and decision spaces.
arXiv Detail & Related papers (2025-08-05T17:25:14Z) - Resource Rational Contractualism Should Guide AI Alignment [69.07915246220985]
Contractualist alignment proposes grounding decisions in agreements that diverse stakeholders would endorse.<n>We propose Resource-Rationalism: a framework where AI systems approximate the agreements rational parties would form.<n>An RRC-aligned agent would not only operate efficiently, but also be equipped to dynamically adapt to and interpret the ever-changing human social world.
arXiv Detail & Related papers (2025-06-20T18:57:13Z) - Explainable AI Systems Must Be Contestable: Here's How to Make It Happen [2.5875936082584623]
This paper presents the first rigorous formal definition of contestability in explainable AI.<n>We introduce a modular framework of by-design and post-hoc mechanisms spanning human-centered interfaces, technical processes, and organizational architectures.<n>Our work equips practitioners with the tools to embed genuine recourse and accountability into AI systems.
arXiv Detail & Related papers (2025-06-02T13:32:05Z) - Bottom-Up Perspectives on AI Governance: Insights from User Reviews of AI Products [0.0]
This study adopts a bottom-up approach to explore how governance-relevant themes are expressed in user discourse.<n> Drawing on over 100,000 user reviews of AI products from G2.com, we apply BERTopic to extract latent themes and identify those most semantically related to AI governance.
arXiv Detail & Related papers (2025-05-30T01:33:21Z) - Let's have a chat with the EU AI Act [0.0]
This paper introduces an AI-driven self-assessment bot designed to assist users in navigating the European Union AI Act and related standards.<n>Leveraging a Retrieval-Augmented Generation framework, the bot retrieves relevant regulatory texts and provides tailored guidance.<n>The paper explores the bot's architecture, comparing naive and graph-based RAG models, and discusses its potential impact on AI governance.
arXiv Detail & Related papers (2025-05-17T10:24:08Z) - AI and the Transformation of Accountability and Discretion in Urban Governance [1.9152655229960793]
The study synthesizes insights to propose guiding principles for responsible AI integration in decision-making processes.<n>The analysis argues that AI does not simply restrict or enhance discretion but redistributes it across institutional levels.<n>It may simultaneously strengthen managerial oversight, enhance decision-making consistency, and improve operational efficiency.
arXiv Detail & Related papers (2025-02-18T18:11:39Z) - Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground [55.2480439325792]
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms.<n>I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z) - Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the importance of addressing bias within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Fairness in Agreement With European Values: An Interdisciplinary
Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI)
for helping benefit-risk assessment practices: Towards a comprehensive
qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts of intelligence from different disciplines. The aim is to identify shared notions and discrepancies to consider when qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
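The bottom-up review study listed above applies BERTopic (transformer embeddings plus clustering) to surface governance-relevant themes in user reviews. As a rough illustration of that pipeline's shape, the sketch below uses plain keyword frequencies instead of BERTopic; the governance seed terms, review texts, and helper names are illustrative assumptions, not the paper's actual method or data.

```python
# Illustrative sketch of a bottom-up theme scan over product reviews.
# The study itself uses BERTopic; this stand-in uses simple keyword
# frequencies to show the overall shape: extract salient terms, then
# flag reviews that touch governance-related vocabulary.
from collections import Counter
import re

# Assumed seed vocabulary; the paper derives themes from data instead.
GOVERNANCE_TERMS = {"privacy", "bias", "transparency", "accountability",
                    "consent", "oversight", "fairness"}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def salient_terms(reviews, top_k=5):
    """Most frequent content terms across all reviews (crude topic proxy)."""
    stop = {"the", "a", "and", "it", "is", "to", "of", "this", "i", "my"}
    counts = Counter(t for r in reviews for t in tokenize(r) if t not in stop)
    return [term for term, _ in counts.most_common(top_k)]

def governance_related(reviews):
    """Reviews mentioning at least one governance-relevant term."""
    return [r for r in reviews if GOVERNANCE_TERMS & set(tokenize(r))]

reviews = [
    "Great tool but I worry about privacy and data consent.",
    "Fast and accurate results, love the interface.",
    "The model shows bias against some user groups.",
]
print(salient_terms(reviews))
print(len(governance_related(reviews)))  # 2 reviews raise governance themes
```

BERTopic would instead cluster embedding vectors and rank topics by semantic similarity to governance concepts, but the flag-and-rank structure is the same.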
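The EU AI Act chatbot paper above contrasts naive and graph-based RAG. The naive variant's retrieval step can be sketched as follows; the passage texts and the term-overlap scoring are illustrative assumptions, not the bot's actual index or retriever (which would typically use embedding similarity).

```python
# Minimal sketch of a naive RAG retrieval step: score candidate
# regulatory passages by term overlap with the user's question, then
# stuff the top matches into a prompt for a language model.
import re

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, passages, k=2):
    """Rank passages by shared-term count with the query (naive retrieval)."""
    scored = sorted(passages, key=lambda p: len(tokens(p) & tokens(query)),
                    reverse=True)
    return scored[:k]

def build_prompt(query, passages):
    context = "\n---\n".join(retrieve(query, passages))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer citing the context."

# Hypothetical passages standing in for indexed regulatory text.
corpus = [
    "High-risk AI systems must undergo conformity assessment before deployment.",
    "Providers shall maintain technical documentation of the AI system.",
    "Member states designate national supervisory authorities.",
]
prompt = build_prompt("What assessment do high-risk AI systems need?", corpus)
print(prompt)
```

A graph-based RAG variant would additionally follow links between provisions (cross-references, definitions) rather than ranking passages independently.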
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.