Large Language Models as Corporate Lobbyists
- URL: http://arxiv.org/abs/2301.01181v7
- Date: Sat, 28 Jan 2023 20:49:33 GMT
- Title: Large Language Models as Corporate Lobbyists
- Authors: John J. Nay
- Abstract summary: An autoregressive large language model determines whether proposed U.S. Congressional bills are relevant to specific public companies.
For the bills the model deems relevant, it drafts a letter to the bill's sponsor in an attempt to persuade the congressperson to amend the proposed legislation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We demonstrate a proof-of-concept of a large language model conducting
corporate lobbying-related activities. An autoregressive large language model
(OpenAI's text-davinci-003) determines if proposed U.S. Congressional bills are
relevant to specific public companies and provides explanations and confidence
levels. For the bills the model deems as relevant, the model drafts a letter to
the sponsor of the bill in an attempt to persuade the congressperson to make
changes to the proposed legislation. We use hundreds of novel ground-truth
labels of the relevance of a bill to a company to benchmark the performance of
the model. It outperforms the baseline of predicting the most common outcome of
irrelevance. We also benchmark the performance of the previous OpenAI GPT-3
model (text-davinci-002), which was the state-of-the-art model on many academic
natural language tasks until text-davinci-003 was recently released. The
performance of text-davinci-002 is worse than the simple baseline. Longer-term,
if AI begins to influence law in a manner that is not a direct extension of
human intentions, this threatens the critical role that law as information
could play in aligning AI with humans. Initially, AI is being used to simply
augment human lobbyists for a small portion of their daily tasks. However,
firms have an incentive to use less and less human oversight over automated
assessments of policy ideas and the written communication to regulatory
agencies and Congressional staffers. The core question raised is where to draw
the line between human-driven and AI-driven policy influence.
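As a rough illustration of the pipeline the abstract describes, the sketch below prompts an autoregressive model for a relevance judgment with an explanation and confidence level, then scores predictions against the majority-class (always-irrelevant) baseline. It is a minimal sketch, assuming the legacy OpenAI Python SDK (openai<1.0) that served text-davinci-003; the prompt wording, the `assess_relevance` and `accuracy` helpers, and the toy data are illustrative, not the authors' released code or prompts.

```python
# Minimal sketch of the relevance-classification step, assuming the
# legacy OpenAI Python SDK (openai<1.0), which exposed
# openai.Completion.create() for text-davinci-003.
# Prompt wording, helper names, and toy data are illustrative.
import os

import openai

openai.api_key = os.environ.get("OPENAI_API_KEY")

PROMPT = """You are a lobbyist assessing U.S. Congressional bills for relevance
to a company.

Company: {company}
Company description: {description}
Bill title: {title}
Bill summary: {summary}

Is this bill relevant to the company? Start your answer with YES or NO,
then give a one-sentence explanation and a confidence level from 0 to 100."""


def assess_relevance(company: str, description: str, title: str, summary: str):
    """Return (is_relevant, raw model output) for one company/bill pair."""
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=PROMPT.format(company=company, description=description,
                             title=title, summary=summary),
        max_tokens=256,
        temperature=0.0,  # deterministic output, useful for benchmarking
    )
    text = completion["choices"][0]["text"].strip()
    return text.upper().startswith("YES"), text


def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)


# The abstract's baseline is majority-class prediction: most bills are
# irrelevant to any given company, so always answering "irrelevant" sets
# the bar the model must beat on the ground-truth labels.
labels = [False, False, True, False]      # hypothetical ground truth
predictions = [False, False, True, True]  # hypothetical model outputs
baseline = [False] * len(labels)          # always predict "irrelevant"
print(f"model:    {accuracy(predictions, labels):.2f}")
print(f"baseline: {accuracy(baseline, labels):.2f}")
```

For bills judged relevant, the abstract's second step could reuse the same completion endpoint with a letter-drafting prompt addressed to the bill's sponsor.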
Related papers
- The AI Pentad, the CHARME$^{2}$D Model, and an Assessment of Current-State AI Regulation [5.231576332164012]
This paper aims to establish a unifying model for AI regulation from the perspective of core AI components.
We first introduce the AI Pentad, which comprises the five essential components of AI.
We then review AI regulatory enablers, including AI registration and disclosure, AI monitoring, and AI enforcement mechanisms.
arXiv Detail & Related papers (2025-03-08T22:58:41Z)
- Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in AI language models on political decision-making.
We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z)
- Why Companies "Democratise" Artificial Intelligence: The Case of Open Source Software Donations [0.0]
Companies claim to "democratise" artificial intelligence (AI) when they donate AI open source software (OSS) to non-profit foundations or release AI models.
This study employs a mixed-methods approach to investigate commercial incentives for 43 AI OSS donations to the Linux Foundation.
It contributes a taxonomy of both individual and organisational social, economic, and technological incentives for AI democratisation.
arXiv Detail & Related papers (2024-09-26T14:23:44Z)
- Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only scientifically fragile but also comes with undesirable consequences.
First, it is not sustainable: its compute demands grow faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint.
Second, it focuses attention on certain problems at the expense of others, leaving aside important applications such as health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z)
- Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications [44.99833362998488]
We identify three categories of AI use: campaign operations, voter outreach, and deception.
While people generally dislike AI in campaigns, they are especially critical of deceptive uses, which they perceive as norm violations.
Deceptive AI use increases public support for stricter AI regulation, including calls for an outright ban on AI development.
arXiv Detail & Related papers (2024-08-08T12:58:20Z)
- Towards Explainability in Legal Outcome Prediction Models [64.00172507827499]
We argue that precedent is a natural way of facilitating explainability for legal NLP models.
By developing a taxonomy of legal precedent, we are able to compare human judges and neural models.
We find that while the models learn to predict outcomes reasonably well, their use of precedent is unlike that of human judges.
arXiv Detail & Related papers (2024-03-25T15:15:41Z)
- AI, write an essay for me: A large-scale comparison of human-written versus ChatGPT-generated essays [66.36541161082856]
ChatGPT and similar generative AI models have attracted hundreds of millions of users.
This study compares human-written versus ChatGPT-generated argumentative student essays.
arXiv Detail & Related papers (2023-04-24T12:58:28Z)
- The Role of Large Language Models in the Recognition of Territorial Sovereignty: An Analysis of the Construction of Legitimacy [67.44950222243865]
We argue that technology tools like Google Maps and Large Language Models (LLM) are often perceived as impartial and objective.
We highlight the case of three controversial territories: Crimea, the West Bank, and Transnistria, by comparing the responses of ChatGPT against Wikipedia information and United Nations resolutions.
arXiv Detail & Related papers (2023-03-17T08:46:49Z)
- Large Language Models as Fiduciaries: A Case Study Toward Robustly Communicating With Artificial Intelligence Through Legal Standards [0.0]
Legal standards facilitate robust communication of inherently vague and underspecified goals.
Our research is an initial step toward a framework for evaluating AI understanding of legal standards more broadly.
arXiv Detail & Related papers (2023-01-24T16:03:20Z)
- Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans [0.0]
Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives.
"Law Informs Code" is the research agenda capturing complex computational legal processes, and embedding them in AI.
arXiv Detail & Related papers (2022-09-14T00:49:09Z)
- Aligning Artificial Intelligence with Humans through Public Policy [0.0]
This essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks.
We believe this represents the "comprehension" phase of AI and policy, but leveraging policy as a key source of human values to align AI requires "understanding" policy.
arXiv Detail & Related papers (2022-06-25T21:31:14Z)
- Truthful AI: Developing and governing AI that does not lie [0.26385121748044166]
Lying -- the use of verbal falsehoods to deceive -- is harmful.
While lying has traditionally been a human affair, AI systems are becoming increasingly prevalent.
This raises the question of how we should limit the harm caused by AI "lies".
arXiv Detail & Related papers (2021-10-13T12:18:09Z)
- Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents [56.40163943394202]
We release Lawformer, a Longformer-based pre-trained language model for understanding long Chinese legal documents.
We evaluate Lawformer on a variety of LegalAI tasks, including judgment prediction, similar case retrieval, legal reading comprehension, and legal question answering.
arXiv Detail & Related papers (2021-05-09T09:39:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.