Measuring Interest Group Positions on Legislation: An AI-Driven Analysis of Lobbying Reports
- URL: http://arxiv.org/abs/2504.15333v1
- Date: Mon, 21 Apr 2025 17:54:47 GMT
- Title: Measuring Interest Group Positions on Legislation: An AI-Driven Analysis of Lobbying Reports
- Authors: Jiseon Kim, Dongkwan Kim, Joohye Jeong, Alice Oh, In Song Kim
- Abstract summary: Special interest groups (SIGs) in the U.S. participate in a range of political activities to influence policy decisions in the legislative and executive branches. Despite the significance of understanding SIGs' policy positions, empirical challenges in observing them have led researchers to rely on indirect measurements. This study introduces the first large-scale effort to directly measure and predict a wide range of bill positions.
- Score: 17.4092661362727
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Special interest groups (SIGs) in the U.S. participate in a range of political activities, such as lobbying and making campaign donations, to influence policy decisions in the legislative and executive branches. The competing interests of these SIGs have profound implications for global issues such as international trade policies, immigration, climate change, and global health challenges. Despite the significance of understanding SIGs' policy positions, empirical challenges in observing them have often led researchers to rely on indirect measurements or focus on a select few SIGs that publicly support or oppose a limited range of legislation. This study introduces the first large-scale effort to directly measure and predict a wide range of bill positions (Support, Oppose, and Engage, which covers Amend and Monitor) across all legislative bills introduced from the 111th to the 117th Congresses. We leverage an advanced AI framework, including large language models (LLMs) and graph neural networks (GNNs), to develop a scalable pipeline that automatically extracts these positions from lobbying activities, resulting in a dataset of 42k bills annotated with 279k bill positions of 12k SIGs. With this large-scale dataset, we reveal (i) a strong correlation between a bill's progression through legislative process stages and the positions taken by interest groups, (ii) a significant relationship between firm size and lobbying positions, (iii) notable distinctions in lobbying position distribution based on bill subject, and (iv) heterogeneity in the distribution of policy preferences across industries. We introduce a novel framework for examining lobbying strategies and offer opportunities to explore how interest groups shape the political landscape.
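To make the position-extraction step concrete, the sketch below shows how a single lobbying-report excerpt could be mapped to one of the position labels named in the abstract (Support, Oppose, Amend, Monitor). It is a minimal illustration only, not the authors' pipeline: it substitutes an off-the-shelf zero-shot classifier from the Hugging Face `transformers` library for the paper's LLM-and-GNN framework, and the report text, label set, and model choice are assumptions made for the example.

```python
# Illustrative sketch only: classify a lobbying-report excerpt into one of the
# position labels described in the abstract (Support, Oppose, Amend, Monitor).
# This is NOT the paper's pipeline; it uses a generic zero-shot classifier
# (facebook/bart-large-mnli) as a stand-in, and the example text is invented.
from transformers import pipeline

POSITION_LABELS = ["Support", "Oppose", "Amend", "Monitor"]

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

report_excerpt = (
    "The association lobbied Congress to advance H.R. 1234 and urged members "
    "to pass the bill without weakening its disclosure requirements."
)

result = classifier(report_excerpt, candidate_labels=POSITION_LABELS)

# result["labels"] is sorted by score; the top entry is the predicted position.
predicted_position = result["labels"][0]
print(predicted_position, dict(zip(result["labels"], result["scores"])))
```

The abstract also mentions graph neural networks, presumably used to incorporate relational structure among SIGs and bills when predicting positions; that component is not illustrated here.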
Related papers
- Advancing Science- and Evidence-based AI Policy [163.43609502905707]
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI. An increasing number of efforts address this problem, often either by (i) contributing research into the risks of AI and their effective mitigation or (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z) - Understanding support for AI regulation: A Bayesian network perspective [1.8434042562191815]
This study models public attitudes using Bayesian networks learned from the 2023 German survey "Current Questions on AI". The survey includes variables on AI interest, exposure, perceived threats and opportunities, awareness of EU regulation, and support for legal restrictions. We show that awareness of regulation is driven by information-seeking behavior, while support for legal requirements depends strongly on perceived policy adequacy and political alignment.
arXiv Detail & Related papers (2025-07-08T10:47:10Z) - LegiGPT: Party Politics and Transport Policy with Large Language Model [0.0]
This study introduces a novel framework that integrates a large language model (LLM) with explainable artificial intelligence (XAI) to analyze transportation-related legislative proposals. Political affiliations and sponsor characteristics were used to identify key factors shaping transportation policymaking. Results revealed that the number and proportion of conservative and progressive sponsors, along with district size and electoral population, were critical determinants shaping legislative outcomes.
arXiv Detail & Related papers (2025-06-20T02:25:52Z) - Comparing Apples to Oranges: A Taxonomy for Navigating the Global Landscape of AI Regulation [0.0]
We present a taxonomy to map the global landscape of AI regulation. We apply this framework to five early movers: the European Union's AI Act, the United States' Executive Order 14110, Canada's AI and Data Act, China's Interim Measures for Generative AI Services, and Brazil's AI Bill 2338/2023.
arXiv Detail & Related papers (2025-05-19T19:23:41Z) - Achieving Socio-Economic Parity through the Lens of EU AI Act [11.550643687258738]
Unfair treatment and discrimination are critical ethical concerns in AI systems. The recent introduction of the EU AI Act establishes a unified legal framework to ensure legal certainty for AI innovation and investment. We propose a novel fairness notion, Socio-Economic Parity (SEP), which incorporates Socio-Economic Status (SES) and promotes positive actions for underprivileged groups.
arXiv Detail & Related papers (2025-03-29T12:27:27Z) - Local Differences, Global Lessons: Insights from Organisation Policies for International Legislation [22.476305606415995]
This paper examines AI policies in two domains, news organisations and universities, to understand how bottom-up governance approaches shape AI usage and oversight. We identify key areas of convergence and divergence in how organisations address risks such as bias, privacy, misinformation, and accountability. We argue that lessons from domain-specific AI policies can contribute to more adaptive and effective AI governance at the global level.
arXiv Detail & Related papers (2025-02-19T15:59:09Z) - Local US officials' views on the impacts and governance of AI: Evidence from 2022 and 2023 survey waves [1.124958340749622]
This paper presents a survey of local US policymakers' views on the future impact and regulation of AI. It provides insight into policymakers' expectations regarding the effects of AI on local communities and the nation. It captures changes in attitudes following the release of ChatGPT and the subsequent surge in public awareness of AI.
arXiv Detail & Related papers (2025-01-16T15:25:58Z) - Leveraging Knowledge Graphs and LLMs to Support and Monitor Legislative Systems [0.0]
This work investigates how Legislative Knowledge Graphs and LLMs can synergize and support legislative processes.
To this end, we develop Legis AI Platform, an interactive platform focused on Italian legislation that facilitates legislative analysis.
arXiv Detail & Related papers (2024-09-20T06:21:03Z) - AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies [88.32153122712478]
We identify 314 unique risk categories organized into a four-tiered taxonomy.
At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks.
We aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
arXiv Detail & Related papers (2024-06-25T18:13:05Z) - A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law [65.87885628115946]
Large language models (LLMs) are revolutionizing the landscapes of finance, healthcare, and law.
We highlight the instrumental role of LLMs in enhancing diagnostic and treatment methodologies in healthcare, innovating financial analytics, and refining legal interpretation and compliance strategies.
We critically examine the ethics of LLM applications in these fields, pointing out the existing ethical concerns and the need for transparent, fair, and robust AI systems.
arXiv Detail & Related papers (2024-05-02T22:43:02Z) - Whose Side Are You On? Investigating the Political Stance of Large Language Models [56.883423489203786]
We investigate the political orientation of Large Language Models (LLMs) across a spectrum of eight polarizing topics, spanning from abortion to LGBTQ issues.
The findings suggest that users should be mindful when crafting queries, and exercise caution in selecting neutral prompt language.
arXiv Detail & Related papers (2024-03-15T04:02:24Z) - Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity [1.9806397201363817]
This paper delves into the legal and regulatory implications of Generative AI and Large Language Models (LLMs) in the European Union context.
It analyzes aspects of liability, privacy, intellectual property, and cybersecurity.
It proposes recommendations to ensure the safe and compliant deployment of generative models.
arXiv Detail & Related papers (2024-01-14T19:16:29Z) - Regulation and NLP (RegNLP): Taming Large Language Models [51.41095330188972]
We argue that NLP research can benefit from proximity to regulatory studies and adjacent fields.
We advocate for the development of a new multidisciplinary research space on regulation and NLP.
arXiv Detail & Related papers (2023-10-09T09:22:40Z) - Wild Face Anti-Spoofing Challenge 2023: Benchmark and Results [73.98594459933008]
Face anti-spoofing (FAS) is an essential mechanism for safeguarding the integrity of automated face recognition systems.
This limitation can be attributed to the scarcity and lack of diversity in publicly available FAS datasets.
We introduce the Wild Face Anti-Spoofing dataset, a large-scale, diverse FAS dataset collected in unconstrained settings.
arXiv Detail & Related papers (2023-04-12T10:29:42Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality or accuracy of this information and is not responsible for any consequences of its use.