Shared (Mis)Understandings and the Governance of AI: A Thematic Analysis of the 2023-2024 Oversight of AI Hearings
- URL: http://arxiv.org/abs/2603.03193v1
- Date: Tue, 03 Mar 2026 17:50:52 GMT
- Title: Shared (Mis)Understandings and the Governance of AI: A Thematic Analysis of the 2023-2024 Oversight of AI Hearings
- Authors: Rachel Leach
- Abstract summary: I focus on the 2023-2024 Oversight of AI hearings held by the Senate Judiciary Committee's subcommittee on Privacy, Technology, and the Law. I examine how participants, who overwhelmingly represent the technology industry, work to create narratives for understanding the past, present, and future impacts of AI. By tracing industry influence over dominant understandings of the impacts of AI and the proper role of government, I examine the arrangements of power enacted and upheld through these hearings.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates early legislative deliberations over Artificial Intelligence in the United States through a thematic analysis of the 2023-2024 Oversight of AI hearings held by the Senate Judiciary Committee's subcommittee on Privacy, Technology, and the Law. I focus on these hearings as a site where participants draw from, and renegotiate, accustomed ways of thinking about technology and society. First, I examine how participants, who overwhelmingly represent the technology industry, work to create narratives for understanding the past, present, and future impacts of AI. Second, I examine how these narratives are invoked to argue for particular forms of AI governance, while casting alternative approaches as everything from infeasible to anti-American. By tracing industry influence over dominant understandings of the impacts of AI and the proper role of government, I examine the arrangements of power enacted and upheld through these hearings. In all, I ask: what role do shared (mis)understandings of AI play in early attempts at governing this technology?
Related papers
- AI Narrative Breakdown. A Critical Assessment of Power and Promise
The article scrutinizes the pervasive narratives that are shaping societal engagement with AI. It highlights key themes such as agency and decision-making, truthfulness, autonomy, knowledge processing, prediction, general purpose, neutrality, and objectivity. The article calls for a more grounded engagement with AI and proposes new narratives recognizing AI as a human-directed tool necessarily subject to societal governance.
arXiv Detail & Related papers (2026-01-29T19:26:45Z)
- Ethics through the Facets of Artificial Intelligence
We argue that concerns stem from a blurred understanding of AI, how it can be used, and how it has been interpreted in society. We propose a framework for the ethical assessment of the use of AI.
arXiv Detail & Related papers (2025-07-22T21:21:37Z)
- Beware! The AI Act Can Also Apply to Your AI Research Practices
The EU has become one of the vanguards in regulating the digital age. The AI Act specifies, due to its risk-based approach, various obligations for providers of AI systems. This position paper argues that the AI Act's obligations could indeed apply in many more cases than the AI community is aware of.
arXiv Detail & Related papers (2025-06-03T08:01:36Z)
- Shaping AI's Impact on Billions of Lives
We argue for the community of AI practitioners to consciously and proactively work for the common good. This paper offers a blueprint for a new type of innovation infrastructure.
arXiv Detail & Related papers (2024-12-03T16:29:37Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act). It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence. Applying these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Report of the 1st Workshop on Generative AI and Law
This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw).
A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI.
arXiv Detail & Related papers (2023-11-11T04:13:37Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Building Bridges: Generative Artworks to Explore AI Ethics
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Descriptive AI Ethics: Collecting and Understanding the Public Opinion
This work proposes a mixed AI ethics model that allows normative and descriptive research to complement each other.
We discuss its implications on bridging the gap between optimistic and pessimistic views towards AI systems' deployment.
arXiv Detail & Related papers (2021-01-15T03:46:27Z)
- The Short Anthropological Guide to the Study of Ethical AI
This short guide serves as both an introduction to AI ethics and to social science and anthropological perspectives on the development of AI. It aims to provide those unfamiliar with the field with an insight into the societal impact of AI systems and how, in turn, these systems can lead us to rethink how our world operates.
arXiv Detail & Related papers (2020-10-07T12:25:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.