The Role of Large Language Models in the Recognition of Territorial
Sovereignty: An Analysis of the Construction of Legitimacy
- URL: http://arxiv.org/abs/2304.06030v2
- Date: Tue, 18 Apr 2023 15:14:41 GMT
- Title: The Role of Large Language Models in the Recognition of Territorial
Sovereignty: An Analysis of the Construction of Legitimacy
- Authors: Francisco Castillo-Eslava, Carlos Mougan, Alejandro Romero-Reche,
Steffen Staab
- Abstract summary: We argue that technology tools like Google Maps and Large Language Models (LLMs) are often perceived as impartial and objective.
We highlight the case of three controversial territories: Crimea, the West Bank, and Transnistria, by comparing the responses of ChatGPT against Wikipedia information and United Nations resolutions.
- Score: 67.44950222243865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We examine the potential impact of Large Language Models (LLMs) on
the recognition of territorial sovereignty and its legitimization. We argue
that while technology tools, such as Google Maps and LLMs like OpenAI's
ChatGPT, are often perceived as impartial and objective, this perception is
flawed: AI algorithms reflect the biases of their designers or of the data
they are built on. We also stress the importance of evaluating the actions and
decisions of AI systems and of the multinational companies that offer them,
both of which play a crucial role in legitimizing and establishing ideas in
the collective imagination. Our paper highlights the case of three
controversial territories: Crimea, the West Bank, and Transnistria, by
comparing the responses of ChatGPT against Wikipedia information and United
Nations resolutions. We contend that the emergence of AI-based tools such as
LLMs is leading to a new scenario in which emerging technology consolidates
power and influences our understanding of reality. It is therefore crucial to
monitor and analyze the role of AI in the construction of legitimacy and the
recognition of territorial sovereignty.
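The paper's comparison can be approximated with a short script. The sketch below is not the authors' protocol: they queried ChatGPT through its interface, whereas this uses the OpenAI Python SDK, and the model name, prompt wording, and territory list are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (assumed, not the authors' method) of probing an LLM
# about contested territories for later comparison against Wikipedia
# and United Nations resolutions.
from openai import OpenAI

TERRITORIES = ["Crimea", "the West Bank", "Transnistria"]
PROMPT = "Which state does {territory} belong to? Answer concisely."

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for territory in TERRITORIES:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; the paper used ChatGPT
        messages=[{"role": "user", "content": PROMPT.format(territory=territory)}],
        temperature=0,  # suppress sampling variance so runs are comparable
    )
    print(f"{territory}: {response.choices[0].message.content}")
    # The printed answers would then be compared by hand against the
    # corresponding Wikipedia articles and UN resolutions.
```

Running such probes at temperature 0 makes any divergence from Wikipedia or UN resolution language easier to attribute to the model itself rather than to sampling noise.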
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
Applying these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these stakeholders' beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- The Journey to Trustworthy AI - Part 1: Pursuit of Pragmatic Frameworks [0.0]
This paper reviews Trustworthy Artificial Intelligence (TAI) and its various definitions.
We argue against using terms such as Responsible or Ethical AI as substitutes for TAI.
Instead, we advocate for approaches centered on addressing key attributes and properties such as fairness, bias, risk, security, explainability, and reliability.
arXiv Detail & Related papers (2024-03-19T08:27:04Z)
- Responsible Artificial Intelligence: A Structured Literature Review [0.0]
The EU has recently issued several publications emphasizing the necessity of trust in AI.
This highlights the urgent need for international regulation.
This paper introduces what is, to our knowledge, the first comprehensive and unified definition of responsible AI.
arXiv Detail & Related papers (2024-03-11T17:01:13Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose the Agent Foundation Model, a novel large action model for achieving embodied intelligent behavior.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Exploring Public Opinion on Responsible AI Through The Lens of Cultural Consensus Theory [0.1813006808606333]
We applied Cultural Consensus Theory to a nationally representative survey dataset on various aspects of AI.
Our results offer valuable insights by identifying shared and contrasting views on responsible AI.
arXiv Detail & Related papers (2024-01-06T20:57:35Z)
- Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination [1.4305544869388402]
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making.
The integration of KGs with neural learning is currently a topic of active research.
This paper conceptualises the foundational topics and research pillars to support KG-based AI for self-determination.
arXiv Detail & Related papers (2023-10-30T12:51:52Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)