Multiscale Governance
- URL: http://arxiv.org/abs/2104.02752v1
- Date: Tue, 6 Apr 2021 19:23:44 GMT
- Title: Multiscale Governance
- Authors: David Pastor-Escuredo and Philip Treleaven
- Abstract summary: Humandemics will propagate because of the pathways that connect the different systems.
The emerging fragility or robustness of the system will depend on how this complex network of systems is governed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Future societal systems will be characterized by heterogeneous human
behaviors and collective action. The interaction between local systems and
global systems will be complex. Humandemics will propagate through the pathways
that connect the different systems and through several invariant behaviors and
patterns that have emerged globally. At the same time, infodemics of
misinformation pose a risk, as occurred during the COVID-19 pandemic. The
emerging fragility or robustness of the system will depend on how this complex
network of systems is governed. Future societal systems will be multiscale not
only in the social dimension but also in temporality. Prevention and response
systems grounded in complexity, ethics and multi-scale governance will be
required. Real-time response systems are the basis for resilience, which in
turn is the foundation of robust societies. A top-down approach led by
governmental bodies for managing humandemics is not sufficient; it may be
effective only if policies are very restrictive, and its efficacy depends not
only on the measures implemented but also on the dynamics of the policies and
on population perception and compliance. This top-down approach is even weaker
when national and international coordination is lacking. Coordinating top-down
agencies with bottom-up constructs will be the design principle. Multi-scale
governance integrates decision-making processes with signaling, sensing and
leadership mechanisms to drive thriving societal systems with real-time
sensitivity.
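The abstract states the design principle (top-down agencies coordinated with bottom-up constructs, driven by real-time sensing) but does not specify an implementation. As a purely illustrative sketch, with all names and thresholds hypothetical rather than taken from the paper, a multi-scale feedback loop coupling bottom-up signals, a top-down policy level, and local re-adjustment could look like this:

```python
from dataclasses import dataclass
from statistics import mean
from typing import Dict, List


@dataclass
class LocalSignal:
    """Bottom-up observation emitted by a community-level sensor (hypothetical)."""
    region: str
    incidence: float   # e.g. new cases per 100k inhabitants
    compliance: float  # observed fraction of the population following measures


def aggregate(signals: List[LocalSignal]) -> Dict[str, float]:
    """Fuse local signals into the coarse indicators a top-down agency sees."""
    return {
        "mean_incidence": mean(s.incidence for s in signals),
        "mean_compliance": mean(s.compliance for s in signals),
    }


def top_down_policy(indicators: Dict[str, float]) -> float:
    """Map aggregated indicators to a national restriction level in [0, 1]."""
    level = min(1.0, indicators["mean_incidence"] / 100.0)
    # Low observed compliance weakens the effect of measures, so tighten further.
    if indicators["mean_compliance"] < 0.5:
        level = min(1.0, level + 0.2)
    return level


def bottom_up_adjustment(signal: LocalSignal, national_level: float) -> float:
    """Local constructs re-modulate the national level with real-time conditions."""
    local_pressure = min(1.0, signal.incidence / 100.0)
    return min(1.0, 0.5 * national_level + 0.5 * local_pressure)


if __name__ == "__main__":
    signals = [
        LocalSignal("region-A", incidence=30.0, compliance=0.8),
        LocalSignal("region-B", incidence=120.0, compliance=0.4),
    ]
    national = top_down_policy(aggregate(signals))
    for s in signals:
        print(s.region, round(bottom_up_adjustment(s, national), 2))
```

The only point of the sketch is the coupling: local signals shape the national decision, and the national decision is re-modulated locally in real time, mirroring the top-down/bottom-up coordination the abstract argues for.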
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - Expansion of situations theory for exploring shared awareness in human-intelligent autonomous systems [0.0]
Intelligent autonomous systems' lack of shared situation awareness adversely influences team effectiveness in complex task environments.
A complementary approach of shared situation awareness, called situations theory, is beneficial for understanding the relationship between system of systems shared situation awareness and effectiveness.
arXiv Detail & Related papers (2024-06-07T14:21:01Z) - Fair Enough? A map of the current limitations of the requirements to have fair algorithms [43.609606707879365]
We argue that there is a hiatus between what society is demanding from Automated Decision-Making systems and what this demand actually means in real-world scenarios.
We outline the key features of this hiatus and pinpoint a set of crucial open points that we as a society must address in order to give concrete meaning to the increasing demand for fairness in Automated Decision-Making systems.
arXiv Detail & Related papers (2023-11-21T08:44:38Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Harms from Increasingly Agentic Algorithmic Systems [21.613581713046464]
Research in Fairness, Accountability, Transparency, and Ethics (FATE) has established many sources and forms of algorithmic harm.
Despite ongoing harms, new systems are being developed and deployed that risk perpetuating the same harms.
arXiv Detail & Related papers (2023-02-20T21:42:41Z) - Systems Challenges for Trustworthy Embodied Systems [0.0]
A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed.
It is crucial to coordinate the behavior of embodied systems in a beneficial manner, ensure their compatibility with our human-centered social values, and design verifiably safe and reliable human-machine interaction.
We argue that traditional systems engineering is coming to a climacteric in the shift from embedded to embodied systems, and with it the challenge of assuring the trustworthiness of dynamic federations of situationally aware, intent-driven, explorative, ever-evolving, largely non-predictable, and increasingly autonomous embodied systems.
arXiv Detail & Related papers (2022-01-10T15:52:17Z) - Beyond Robustness: A Taxonomy of Approaches towards Resilient
Multi-Robot Systems [41.71459547415086]
We analyze how resilience is achieved in networks of agents and multi-robot systems.
We argue that resilience must become a central engineering design consideration.
arXiv Detail & Related papers (2021-09-25T11:25:02Z) - On the Philosophical, Cognitive and Mathematical Foundations of
Symbiotic Autonomous Systems (SAS) [87.3520234553785]
Symbiotic Autonomous Systems (SAS) are advanced intelligent and cognitive systems exhibiting autonomous collective intelligence.
This work presents a theoretical framework of SAS underpinned by the latest advances in intelligence, cognition, computer, and system sciences.
arXiv Detail & Related papers (2021-02-11T05:44:25Z) - A game-theoretic analysis of networked system control for common-pool
resource management using multi-agent reinforcement learning [54.55119659523629]
Multi-agent reinforcement learning has recently shown great promise as an approach to networked system control.
Common-pool resources include arable land, fresh water, wetlands, wildlife, fish stock, forests and the atmosphere.
arXiv Detail & Related papers (2020-10-15T14:12:26Z) - Distributed and Democratized Learning: Philosophy and Research
Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.