AI Language Models Could Both Help and Harm Equity in Marine
Policymaking: The Case Study of the BBNJ Question-Answering Bot
- URL: http://arxiv.org/abs/2403.01755v1
- Date: Mon, 4 Mar 2024 06:21:02 GMT
- Title: AI Language Models Could Both Help and Harm Equity in Marine
Policymaking: The Case Study of the BBNJ Question-Answering Bot
- Authors: Matt Ziegler, Sarah Lothian, Brian O'Neill, Richard Anderson,
Yoshitaka Ota
- Abstract summary: Large Language Models (LLMs) like ChatGPT are set to reshape some aspects of policymaking processes.
We are cautiously hopeful that LLMs could be used to promote a marginally more balanced footing among decision makers in policy negotiations.
However, the risks are particularly concerning for environmental and marine policy uses.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI Large Language Models (LLMs) like ChatGPT are set to reshape some aspects
of policymaking processes. Policy practitioners are already using ChatGPT for
help with a variety of tasks: from drafting statements, submissions, and
presentations, to conducting background research. We are cautiously hopeful
that LLMs could be used to promote a marginally more balanced footing among
decision makers in policy negotiations by assisting with certain tedious work,
particularly benefiting developing countries who face capacity constraints that
put them at a disadvantage in negotiations. However, the risks are particularly
concerning for environmental and marine policy uses, due to the urgency of
crises like climate change, high uncertainty, and trans-boundary impact.
To explore the realistic potentials, limitations, and equity risks for LLMs
in marine policymaking, we present a case study of an AI chatbot for the
recently adopted Biodiversity Beyond National Jurisdiction Agreement (BBNJ),
and critique its answers to key policy questions. Our case study demonstrates
the dangers of LLMs in marine policymaking via their potential bias towards
generating text that favors the perspectives of mainly Western economic centers
of power, while neglecting developing countries' viewpoints. We describe
several ways these biases can enter the system, including: (1) biases in the
underlying foundational language models; (2) biases arising from the chatbot's
connection to UN negotiation documents, and (3) biases arising from the
application design. We urge caution in the use of generative AI in ocean policy
processes and call for more research on its equity and fairness implications.
Our work also underscores the need for developing countries' policymakers to
develop the technical capacity to engage with AI on their own terms.
Related papers
- Economic Competition, EU Regulation, and Executive Orders: A Framework for Discussing AI Policy Implications in CS Courses [5.898240245765167]
We argue that discussions of the implications of AI policy are not yet present in the computer science curriculum. We propose guiding questions to frame class discussions around AI policy in technical and non-technical (e.g., ethics) CS courses.
arXiv Detail & Related papers (2025-09-29T21:26:53Z) - What Would an LLM Do? Evaluating Policymaking Capabilities of Large Language Models [13.022045946656661]
This article evaluates whether large language models (LLMs) are aligned with domain experts to inform social policymaking on the subject of homelessness alleviation. We develop a novel benchmark comprised of decision scenarios with policy choices across four geographies. We present an automated pipeline that connects the benchmarked policies to an agent-based model, and we explore the social impact of the recommended policies through simulated social scenarios.
arXiv Detail & Related papers (2025-09-04T02:28:58Z) - Advancing Science- and Evidence-based AI Policy [163.43609502905707]
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI. An increasing number of efforts address this problem by either (i) contributing research into the risks of AI and their effective mitigation or (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z) - The California Report on Frontier AI Policy [110.35302787349856]
Continued progress in frontier AI carries the potential for profound advances in scientific discovery, economic productivity, and broader social well-being. As the epicenter of global AI innovation, California has a unique opportunity to continue supporting developments in frontier AI. The report derives policy principles that can inform how California approaches the use, assessment, and governance of frontier AI.
arXiv Detail & Related papers (2025-06-17T23:33:21Z) - Persuasion and Safety in the Era of Generative AI [0.0]
The EU AI Act prohibits AI systems that use manipulative or deceptive techniques to undermine informed decision-making. My dissertation addresses the lack of empirical studies in this area by developing a taxonomy of persuasive techniques. It provides resources to mitigate the risks of persuasive AI and fosters discussions on ethical persuasion in the age of generative AI.
arXiv Detail & Related papers (2025-05-18T06:04:46Z) - Echoes of Power: Investigating Geopolitical Bias in US and China Large Language Models [2.1028463367241033]
We investigate the geopolitical biases in US and Chinese Large Language Models (LLMs).
Our findings show notable biases in both models, reflecting distinct ideological perspectives and cultural influences.
This study highlights the potential of LLMs to shape public discourse and underscores the importance of critically assessing AI-generated content.
arXiv Detail & Related papers (2025-03-20T19:53:10Z) - Political Neutrality in AI is Impossible- But Here is How to Approximate it [97.59456676216115]
We argue that true political neutrality is neither feasible nor universally desirable due to its subjective nature and the biases inherent in AI training data, algorithms, and user interactions.
We use the term "approximation" of political neutrality to shift the focus from unattainable absolutes to achievable, practical proxies.
arXiv Detail & Related papers (2025-02-18T16:48:04Z) - Exploring Potential Prompt Injection Attacks in Federated Military LLMs and Their Mitigation [3.0175628677371935]
Federated Learning (FL) is increasingly being adopted in military collaborations to develop Large Language Models (LLMs).
Prompt injection attacks, malicious manipulations of input prompts, pose new threats that may undermine operational security, disrupt decision-making, and erode trust among allies.
We propose a human-AI collaborative framework that introduces both technical and policy countermeasures.
arXiv Detail & Related papers (2025-01-30T15:14:55Z) - Transparency, Security, and Workplace Training & Awareness in the Age of Generative AI [0.0]
As AI technologies advance, ethical considerations, transparency, data privacy, and their impact on human labor intersect with the drive for innovation and efficiency.
Our research explores publicly accessible large language models (LLMs) that often operate on the periphery, away from mainstream scrutiny.
Specifically, we examine Gab AI, a platform that centers around unrestricted communication and privacy, allowing users to interact freely without censorship.
arXiv Detail & Related papers (2024-12-19T17:40:58Z) - Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - Assessing the State of AI Policy [0.5156484100374057]
This work provides an overview of AI legislation and directives at the international, U.S. state, city and federal levels.
It also reviews relevant business standards, and technical society initiatives.
arXiv Detail & Related papers (2024-07-31T16:09:25Z) - Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z) - Responsible AI Considerations in Text Summarization Research: A Review
of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z) - Regulation and NLP (RegNLP): Taming Large Language Models [51.41095330188972]
We argue how NLP research can benefit from proximity to regulatory studies and adjacent fields.
We advocate for the development of a new multidisciplinary research space on regulation and NLP.
arXiv Detail & Related papers (2023-10-09T09:22:40Z) - The Role of Large Language Models in the Recognition of Territorial
Sovereignty: An Analysis of the Construction of Legitimacy [67.44950222243865]
We argue that technology tools like Google Maps and Large Language Models (LLM) are often perceived as impartial and objective.
We highlight the case of three controversial territories: Crimea, the West Bank, and Transnistria, by comparing the responses of ChatGPT against Wikipedia information and United Nations resolutions.
arXiv Detail & Related papers (2023-03-17T08:46:49Z) - Envisioning a Human-AI collaborative system to transform policies into
decision models [7.9231719294492065]
Despite many open domain challenges, in this position paper we explore the enormous potential of AI to assist government agencies and policy experts in scaling the production of both human-readable and machine-executable policy rules.
We present an initial emerging approach to shorten the route from policy documents to executable, interpretable and standardised decision models using AI, NLP and Knowledge Graphs.
arXiv Detail & Related papers (2022-11-01T18:29:48Z) - Aligning Artificial Intelligence with Humans through Public Policy [0.0]
This essay outlines research on AI that learn structures in policy data that can be leveraged for downstream tasks.
We believe this represents the "comprehension" phase of AI and policy, but leveraging policy as a key source of human values to align AI requires "understanding" policy.
arXiv Detail & Related papers (2022-06-25T21:31:14Z) - Fairness in Agreement With European Values: An Interdisciplinary
Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Turbulence on the Global Economy influenced by Artificial Intelligence
and Foreign Policy Inefficiencies [8.00696326952901]
This paper seeks to bridge artificial intelligence and its impact on international policy implementation.
We propose a disposition for the essentials of AI-based foreign policy and implementation.
arXiv Detail & Related papers (2020-06-19T10:59:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.