Local US officials' views on the impacts and governance of AI: Evidence from 2022 and 2023 survey waves
- URL: http://arxiv.org/abs/2501.09606v1
- Date: Thu, 16 Jan 2025 15:25:58 GMT
- Title: Local US officials' views on the impacts and governance of AI: Evidence from 2022 and 2023 survey waves
- Authors: Sophia Hatz, Noemi Dreksler, Kevin Wei, Baobao Zhang
- Abstract summary: This paper presents a survey of local US policymakers' views on the future impact and regulation of AI. It provides insight into policymakers' expectations regarding the effects of AI on local communities and the nation. It captures changes in attitudes following the release of ChatGPT and the subsequent surge in public awareness of AI.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a survey of local US policymakers' views on the future impact and regulation of AI. Our survey provides insight into US policymakers' expectations regarding the effects of AI on local communities and the nation, as well as their attitudes towards specific regulatory policies. Conducted in two waves (2022 and 2023), the survey captures changes in attitudes following the release of ChatGPT and the subsequent surge in public awareness of AI. Local policymakers express a mix of concern, optimism, and uncertainty about AI's impacts, anticipating significant societal risks such as increased surveillance, misinformation, and political polarization, alongside potential benefits in innovation and infrastructure. Many also report feeling underprepared and inadequately informed to make AI-related decisions. On regulation, a majority of policymakers support government oversight and favor specific policies addressing issues such as data privacy, AI-related unemployment, and AI safety and fairness. Democrats show stronger and more consistent support for regulation than Republicans, but the latter experienced a notable shift towards majority support between 2022 and 2023. Our study contributes to understanding the perspectives of local policymakers, key players in shaping state and federal AI legislation, by capturing evolving attitudes, partisan dynamics, and their implications for policy formation. The findings highlight the need for capacity-building initiatives and bipartisan coordination to mitigate policy fragmentation and build a cohesive framework for AI governance in the US.
Related papers
- Advancing Science- and Evidence-based AI Policy [163.43609502905707]
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI. An increasing number of efforts address this problem, often by either (i) contributing research into the risks of AI and their effective mitigation or (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z) - Understanding support for AI regulation: A Bayesian network perspective [1.8434042562191815]
This study models public attitudes using Bayesian networks learned from the 2023 German survey Current Questions on AI. The survey includes variables on AI interest, exposure, perceived threats and opportunities, awareness of EU regulation, and support for legal restrictions. We show that awareness of regulation is driven by information-seeking behavior, while support for legal requirements depends strongly on perceived policy adequacy and political alignment.
arXiv Detail & Related papers (2025-07-08T10:47:10Z) - Public Opinion and The Rise of Digital Minds: Perceived Risk, Trust, and Regulation Support [4.982210700018631]
This study examines how public trust in institutions and AI technologies, along with perceived risks, shape preferences for AI regulation.
Individuals with higher trust in government favor regulation, while those with greater trust in AI companies and AI technologies are less inclined to support restrictions.
arXiv Detail & Related papers (2025-04-30T17:56:23Z) - What do people expect from Artificial Intelligence? Public opinion on alignment in AI moderation from Germany and the United States [0.0]
We present evidence from two surveys of public preferences for key functional features of AI-enabled systems in Germany and the United States.
We examine support for four types of alignment in AI moderation: accuracy and reliability, safety, bias mitigation, and the promotion of aspirational imaginaries.
In both countries, accuracy and safety enjoy the strongest support, while more normatively charged goals -- like fairness and aspirational imaginaries -- receive more cautious backing.
arXiv Detail & Related papers (2025-04-16T20:27:03Z) - Ethical Implications of AI in Data Collection: Balancing Innovation with Privacy [0.0]
This article examines the ethical and legal implications of artificial intelligence (AI) driven data collection, focusing on developments from 2023 to 2024.
It compares regulatory approaches in the European Union, the United States, and China, highlighting the challenges in creating a globally harmonized framework for AI governance.
The article emphasizes the need for adaptive governance and international cooperation to address the global nature of AI development.
arXiv Detail & Related papers (2025-03-17T14:15:59Z) - Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems.
Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
arXiv Detail & Related papers (2025-03-12T21:39:38Z) - Pitfalls of Evidence-Based AI Policy [13.370321579091387]
We argue that if the goal is evidence-based AI policy, the first regulatory objective must be to actively facilitate the process of identifying, studying, and deliberating about AI risks.
We discuss a set of 15 regulatory goals to facilitate this and show that Brazil, Canada, China, the EU, South Korea, the UK, and the USA all have substantial opportunities to adopt further evidence-seeking policies.
arXiv Detail & Related papers (2025-02-13T18:59:30Z) - How Do AI Companies "Fine-Tune" Policy? Examining Regulatory Capture in AI Governance [0.7252636622264104]
Industry actors in the United States have gained extensive influence over the regulation of general-purpose artificial intelligence (AI) systems.
Capture of AI policy by AI developers and deployers could hinder such regulatory goals as ensuring the safety, fairness, beneficence, transparency, or innovation of general-purpose AI systems.
Experts were primarily concerned with capture leading to a lack of AI regulation, weak regulation, or regulation that over-emphasizes certain policy goals over others.
arXiv Detail & Related papers (2024-10-16T21:06:54Z) - Aligning AI with Public Values: Deliberation and Decision-Making for Governing Multimodal LLMs in Political Video Analysis [48.14390493099495]
How AI models should deal with political topics has been discussed, but it remains challenging and requires better governance. This paper examines the governance of large language models through individual and collective deliberation, focusing on politically sensitive videos.
arXiv Detail & Related papers (2024-09-15T03:17:38Z) - Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Towards a Privacy and Security-Aware Framework for Ethical AI: Guiding the Development and Assessment of AI Systems [0.0]
This study conducts a systematic literature review spanning the years 2020 to 2023.
Through the synthesis of knowledge extracted from the SLR, this study presents a conceptual framework tailored for privacy- and security-aware AI systems.
arXiv Detail & Related papers (2024-03-13T15:39:57Z) - The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance [0.0]
This paper conducts a meta-analysis of 200 governance policies and ethical guidelines for AI usage published by public bodies, academic institutions, private companies, and civil society organizations worldwide.
We identify at least 17 resonating principles prevalent in the policies and guidelines of our dataset, released as an open-source database and tool.
We present the limitations of performing a global scale analysis study paired with a critical analysis of our findings, presenting areas of consensus that should be incorporated into future regulatory efforts.
arXiv Detail & Related papers (2022-06-23T18:03:04Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - AI Federalism: Shaping AI Policy within States in Germany [0.0]
Recent AI governance research has focused heavily on the analysis of strategy papers and ethics guidelines for AI published by national governments and international bodies.
Subnational institutions have also published documents on Artificial Intelligence, yet these have been largely absent from policy analyses.
This is surprising because AI is connected to many policy areas, such as economic or research policy, where the competences are already distributed between the national and subnational level.
arXiv Detail & Related papers (2021-10-28T16:06:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.