A multilevel framework for AI governance
- URL: http://arxiv.org/abs/2307.03198v2
- Date: Thu, 13 Jul 2023 00:27:55 GMT
- Title: A multilevel framework for AI governance
- Authors: Hyesun Choung, Prabu David, John S. Seberger
- Abstract summary: We propose a multilevel governance approach that involves governments, corporations, and citizens.
The levels of governance combined with the dimensions of trust in AI provide practical insights that can be used to further enhance user experiences and inform public policy related to AI.
- Score: 6.230751621285321
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To realize the potential benefits and mitigate potential risks of AI, it is
necessary to develop a framework of governance that conforms to ethics and
fundamental human values. Although several organizations have issued guidelines
and ethical frameworks for trustworthy AI, without a mediating governance
structure, these ethical principles will not translate into practice. In this
paper, we propose a multilevel governance approach that involves three groups
of interdependent stakeholders: governments, corporations, and citizens. We
examine their interrelationships through dimensions of trust, such as
competence, integrity, and benevolence. The levels of governance combined with
the dimensions of trust in AI provide practical insights that can be used to
further enhance user experiences and inform public policy related to AI.
Related papers
- AI and the Transformation of Accountability and Discretion in Urban Governance [1.9152655229960793]
The paper highlights AI's potential to reposition human discretion and reshape specific types of accountability.
It advances a framework for responsible AI adoption, ensuring that urban governance remains adaptive, transparent, and aligned with public values.
arXiv Detail & Related papers (2025-02-18T18:11:39Z)
- Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground [55.2480439325792]
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms.
I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z)
- From Principles to Practice: A Deep Dive into AI Ethics and Regulations [13.753819576072127]
The article thoroughly analyzes the ground-breaking AI regulatory framework proposed by the European Union.
Considering the technical efforts and strategies undertaken by academics and industry to uphold these principles, we explore the synergies and conflicts among the five ethical principles.
arXiv Detail & Related papers (2024-12-06T00:46:20Z)
- Responsible AI Governance: A Response to UN Interim Report on Governing AI for Humanity [15.434533537570614]
The report emphasizes the transformative potential of AI in achieving the Sustainable Development Goals.
It acknowledges the need for robust governance to mitigate associated risks.
The report concludes with actionable principles for fostering responsible AI governance.
arXiv Detail & Related papers (2024-11-29T18:57:24Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems were found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, etc.
In this review, we strive to provide AI practitioners a comprehensive guide towards building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z)
- A Framework for Ethical AI at the United Nations [0.0]
This paper aims to provide an overview of the ethical concerns in artificial intelligence (AI) and the framework that is needed to mitigate those risks.
It suggests a practical path to ensure the development and use of AI at the United Nations (UN) aligns with our ethical values.
arXiv Detail & Related papers (2021-04-09T23:44:37Z)
- The Sanction of Authority: Promoting Public Trust in AI [4.729969944853141]
We argue that public distrust of AI originates from the under-development of a regulatory ecosystem that would guarantee the trustworthiness of the AI systems that pervade society.
We elaborate the pivotal role of externally auditable AI documentation within this model and the work to be done to ensure it is effective.
arXiv Detail & Related papers (2021-01-22T22:01:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.