Disentangling Active and Passive Cosponsorship in the U.S. Congress
- URL: http://arxiv.org/abs/2205.09674v1
- Date: Thu, 19 May 2022 16:33:46 GMT
- Title: Disentangling Active and Passive Cosponsorship in the U.S. Congress
- Authors: Giuseppe Russo, Christoph Gote, Laurence Brandenberger, Sophia
Schlosser, and Frank Schweitzer
- Abstract summary: In the U.S. Congress, legislators can use active and passive cosponsorship to support bills.
We show that these two types of cosponsorship are driven by two different motivations: the backing of political colleagues and the backing of the bill's content.
We develop an Encoder+RGCN based model that learns legislator representations from bill texts and speech transcripts.
- Score: 0.09236074230806579
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the U.S. Congress, legislators can use active and passive cosponsorship to
support bills. We show that these two types of cosponsorship are driven by two
different motivations: the backing of political colleagues and the backing of
the bill's content. To this end, we develop an Encoder+RGCN based model that
learns legislator representations from bill texts and speech transcripts. These
representations predict active and passive cosponsorship with an F1-score of
0.88. Applying our representations to predict voting decisions, we show that
they are interpretable and generalize to unseen tasks.
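The abstract's Encoder+RGCN architecture can be illustrated with a minimal relational-GCN layer. This is a sketch, not the authors' code: the text-encoder output is stood in for by random vectors, and the two relation types (hypothetically, active and passive cosponsorship edges) are toy adjacency matrices; the layer aggregates neighbor messages separately per relation, as in a standard RGCN.

```python
import numpy as np

rng = np.random.default_rng(0)

n_legislators, dim = 5, 8
# Placeholder for encoder output (in the paper: embeddings of bill
# texts and speech transcripts per legislator).
X = rng.normal(size=(n_legislators, dim))

# One adjacency matrix per relation type; entry [i, j] = 1 means
# legislator j sends a message to legislator i under that relation.
A_active = np.array([[0, 1, 0, 0, 0],
                     [0, 0, 1, 0, 0],
                     [0, 0, 0, 1, 0],
                     [0, 0, 0, 0, 1],
                     [1, 0, 0, 0, 0]], dtype=float)
A_passive = A_active.T.copy()
relations = [A_active, A_passive]

# One weight matrix per relation, plus a self-loop weight (standard RGCN).
W_rel = [rng.normal(scale=0.1, size=(dim, dim)) for _ in relations]
W_self = rng.normal(scale=0.1, size=(dim, dim))

def rgcn_layer(X, relations, W_rel, W_self):
    """h_i = ReLU( W_self x_i + sum_r (1/c_ir) sum_{j in N_r(i)} W_r x_j )."""
    out = X @ W_self
    for A, W in zip(relations, W_rel):
        deg = A.sum(axis=1, keepdims=True)           # per-node neighbor count c_ir
        norm = np.where(deg > 0, 1.0 / np.maximum(deg, 1), 0.0)
        out += (norm * (A @ X)) @ W                  # normalized relation-specific messages
    return np.maximum(out, 0.0)                      # ReLU

H = rgcn_layer(X, relations, W_rel, W_self)
print(H.shape)  # (5, 8): one updated representation per legislator
```

In the paper's setup, representations like `H` would then feed a classifier for active vs. passive cosponsorship; the details of that head are not given in this summary.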
Related papers
- Toward a digital twin of U.S. Congress [31.41179786444486]
We introduce and provide descriptions of a daily-updated dataset that contains every Tweet from every U.S. congressperson during their respective terms. We demonstrate that a modern language model equipped with congressperson-specific subsets of this data is capable of producing Tweets that are largely indistinguishable from actual Tweets posted by their physical counterparts.
arXiv Detail & Related papers (2025-04-04T21:33:36Z) - Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z) - Revisiting Multimodal Representation in Contrastive Learning: From Patch
and Token Embeddings to Finite Discrete Tokens [76.40196364163663]
We propose a learning-based approach to vision-language pre-training, building on contrastive models such as CLIP.
We show that our method can learn more comprehensive representations and capture meaningful cross-modal correspondence.
arXiv Detail & Related papers (2023-03-27T00:58:39Z) - Large Language Models as Corporate Lobbyists [0.0]
An autoregressive large language model determines whether proposed U.S. Congressional bills are relevant to specific public companies.
For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation.
arXiv Detail & Related papers (2023-01-03T16:25:52Z) - Learning Action-Effect Dynamics for Hypothetical Vision-Language
Reasoning Task [50.72283841720014]
We propose a novel learning strategy that can improve reasoning about the effects of actions.
We demonstrate the effectiveness of our proposed approach and discuss its advantages over previous baselines in terms of performance, data efficiency, and generalization capability.
arXiv Detail & Related papers (2022-12-07T05:41:58Z) - DeepParliament: A Legal domain Benchmark & Dataset for Parliament Bills
Prediction [0.0]
This paper introduces DeepParliament, a legal domain Benchmark dataset that gathers bill documents and metadata.
We propose two new benchmarks: Binary and Multi-Class Bill Status classification.
This work is the first to present a Parliament bill prediction task.
arXiv Detail & Related papers (2022-11-15T04:55:32Z) - The Ability of Self-Supervised Speech Models for Audio Representations [53.19715501273934]
Self-supervised learning (SSL) speech models have achieved unprecedented success in speech representation learning.
We conduct extensive experiments on abundant speech and non-speech audio datasets to evaluate the representation ability of state-of-the-art SSL speech models.
Results show that SSL speech models could extract meaningful features of a wide range of non-speech audio, while they may also fail on certain types of datasets.
arXiv Detail & Related papers (2022-09-26T15:21:06Z) - Retriever: Learning Content-Style Representation as a Token-Level
Bipartite Graph [89.52990975155579]
An unsupervised framework, named Retriever, is proposed to learn such representations.
Being modal-agnostic, the proposed Retriever is evaluated in both speech and image domains.
arXiv Detail & Related papers (2022-02-24T19:00:03Z) - Feature Engineering for US State Legislative Hearings: Stance,
Affiliation, Engagement and Absentees [0.8122270502556374]
We propose a system to automatically track the affiliation of organizations in public comments.
A metric to compute legislator engagement and absenteeism is also proposed.
arXiv Detail & Related papers (2021-09-18T06:50:35Z) - Predicting and Analyzing Law-Making in Kenya [0.012691047660244334]
We developed and trained machine learning models on a combination of features extracted from the bills to predict the outcome.
We observed that a bill's text is less predictive of the outcome than the year and month the bill was introduced and the category it belongs to.
arXiv Detail & Related papers (2020-06-09T20:21:50Z) - M2P2: Multimodal Persuasion Prediction using Adaptive Fusion [65.04045695380333]
This paper addresses two problems: Debate Outcome Prediction (DOP), which predicts who wins a debate, and Intensity of Persuasion Prediction (IPP), which predicts the change in the number of votes before and after a speaker speaks.
Our M2P2 framework is the first to use multimodal (acoustic, visual, language) data to solve the IPP problem.
arXiv Detail & Related papers (2020-06-03T18:47:24Z) - Which bills are lobbied? Predicting and interpreting lobbying activity
in the US [0.0]
We use lobbying data from OpenSecrets.org to predict if a piece of legislation (US bill) has been subjected to lobbying activities or not.
We also investigate the influence of the intensity of the lobbying activity on how discernible a lobbied bill is from one that was not subject to lobbying.
arXiv Detail & Related papers (2020-04-29T10:46:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.