Can apparent bystanders distinctively shape an outcome? Global south
countries and global catastrophic risk-focused governance of artificial
intelligence
- URL: http://arxiv.org/abs/2312.04616v1
- Date: Thu, 7 Dec 2023 18:54:16 GMT
- Title: Can apparent bystanders distinctively shape an outcome? Global south
countries and global catastrophic risk-focused governance of artificial
intelligence
- Authors: Cecil Abungu, Michelle Malonza and Sumaya Nur Adan
- Abstract summary: We argue that global south countries like India and Singapore could be fairly consequential in the global catastrophic risk-focused governance of AI.
We also suggest some ways through which global south countries can play a positive role in designing, strengthening and operationalizing global catastrophic risk-focused AI governance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Increasingly, there is well-grounded concern that through perpetual
scaling-up of computation power and data, current deep learning techniques will
create highly capable artificial intelligence that could pursue goals in a
manner that is not aligned with human values. In turn, such AI could have the
potential of leading to a scenario in which there is serious global-scale
damage to human wellbeing. Against this backdrop, a number of researchers and
public policy professionals have been developing ideas about how to govern AI
in a manner that reduces the chances that it could lead to a global
catastrophe. The jurisdictional focus of a vast majority of their assessments
so far has been the United States, China, and Europe. That preference seems to
reveal an assumption underlying most of the work in this field: That global
south countries can only have a marginal role in attempts to govern AI
development from a global catastrophic risk-focused perspective. Our paper
sets out to undermine this assumption. We argue that global south countries
like India and Singapore (and specific coalitions) could in fact be fairly
consequential in the global catastrophic risk-focused governance of AI. We
support our position with four key claims. Three are drawn from the current
ways in which advanced foundational AI models are built and used, while the
fourth rests on the strategic roles that global south countries and coalitions
have historically played in the design and use of multilateral rules and
institutions. As each claim is elaborated, we also suggest some ways through
which global south countries can play a positive role in designing,
strengthening and operationalizing global catastrophic risk-focused AI
governance.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- US-China perspectives on extreme AI risks and global governance [0.0]
We sought to better understand how experts in each country describe safety and security threats from advanced artificial intelligence.
We focused our analysis on advanced forms of artificial intelligence, such as artificial general intelligence (AGI).
Experts in both countries expressed concern about risks from AGI, risks from intelligence explosions, and risks from AI systems that escape human control.
arXiv Detail & Related papers (2024-06-23T17:31:27Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- Computing Power and the Governance of Artificial Intelligence [51.967584623262674]
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- An Overview of Catastrophic AI Risks [38.84933208563934]
This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories.
Malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs;
organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents;
and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans.
arXiv Detail & Related papers (2023-06-21T03:35:06Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Artificial Intelligence in the Global South (AI4D): Potential and Risks [0.0]
Artificial intelligence is becoming more widely available in all parts of the world.
This paper examines the key issues and questions arising in the emerging sub-field of AI for global development (AI4D).
We propose that although there are many risks associated with the use of AI, the potential benefits are enough to warrant detailed research and investigation of the most appropriate and effective ways to design, develop, implement, and use such technologies in the Global South.
arXiv Detail & Related papers (2021-08-23T11:48:31Z)
- Artificial Intelligence Ethics: An Inclusive Global Discourse? [0.9208007322096533]
This research examines the growing body of documentation on AI ethics.
It seeks to discover whether countries in the Global South and women are underrepresented in this discourse.
Findings indicate a dearth of references to both of these themes in the AI ethics documents.
Without adequate input from both countries in the Global South and from women, such ethical frameworks and standards may be discriminatory.
arXiv Detail & Related papers (2021-08-23T06:08:00Z)
- AI in the "Real World": Examining the Impact of AI Deployment in Low-Resource Contexts [1.90365714903665]
This paper examines the deployment of AI by large industry labs situated in low-resource contexts.
It highlights factors impacting unanticipated deployments, and reflects on the state of AI deployment within the Global South.
arXiv Detail & Related papers (2020-11-28T01:49:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.