Tracing Influence at Scale: A Contrastive Learning Approach to Linking
Public Comments and Regulator Responses
- URL: http://arxiv.org/abs/2311.14871v1
- Date: Fri, 24 Nov 2023 23:32:13 GMT
- Title: Tracing Influence at Scale: A Contrastive Learning Approach to Linking
Public Comments and Regulator Responses
- Authors: Linzi Xing, Brad Hackinen, Giuseppe Carenini
- Abstract summary: U.S. Federal Regulators receive over one million comment letters each year from businesses, interest groups, and members of the public, all advocating for changes to proposed regulations.
Measuring the impact of specific comments is challenging because regulators are required to respond to comments but do not have to specify which comments they are addressing.
We propose a simple yet effective solution: an iterative contrastive method that trains a neural model to match text from public comments to responses written by regulators.
- Score: 22.240224575601644
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: U.S. Federal Regulators receive over one million comment letters each year
from businesses, interest groups, and members of the public, all advocating for
changes to proposed regulations. These comments are believed to have
wide-ranging impacts on public policy. However, measuring the impact of
specific comments is challenging because regulators are required to respond to
comments but they do not have to specify which comments they are addressing. In
this paper, we propose a simple yet effective solution to this problem by using
an iterative contrastive method to train a neural model aimed at matching
text from public comments to responses written by regulators. We demonstrate
that our proposal substantially outperforms a set of selected text-matching
baselines on a human-annotated test set. Furthermore, it delivers performance
comparable to that of a state-of-the-art large language model (i.e., GPT-4), and is
more cost-effective when matching comments and regulator responses at
larger scale.
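As a rough, hypothetical illustration of the kind of contrastive text-matching objective the abstract describes, the sketch below trains a toy encoder with an in-batch InfoNCE loss, where each comment's paired regulator response is the positive and the other responses in the batch serve as negatives. The encoder architecture, dimensions, tokenization, and single training round are illustrative assumptions rather than the authors' implementation; the paper's iterative procedure would repeat such rounds while refreshing candidate comment-response pairs.

```python
# Minimal sketch (not the authors' code): in-batch contrastive matching of
# comment passages to regulator response passages. All components here are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextEncoder(nn.Module):
    """Toy bag-of-embeddings encoder; a pretrained transformer would be used in practice."""
    def __init__(self, vocab_size=30522, dim=256):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim, mode="mean")
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids):  # token_ids: (batch, seq_len)
        return F.normalize(self.proj(self.emb(token_ids)), dim=-1)

def info_nce(comment_vecs, response_vecs, temperature=0.05):
    """In-batch negatives: each comment's paired response is the positive;
    every other response in the batch acts as a negative."""
    logits = comment_vecs @ response_vecs.T / temperature  # (batch, batch)
    targets = torch.arange(logits.size(0))                 # diagonal = positives
    return F.cross_entropy(logits, targets)

# One hypothetical training step; an iterative setup would repeat such rounds
# while re-mining candidate comment-response pairs between rounds.
encoder = TextEncoder()
optimizer = torch.optim.AdamW(encoder.parameters(), lr=2e-5)

comment_ids = torch.randint(0, 30522, (8, 64))   # placeholder token ids
response_ids = torch.randint(0, 30522, (8, 64))  # placeholder token ids

loss = info_nce(encoder(comment_ids), encoder(response_ids))
loss.backward()
optimizer.step()
print(f"contrastive loss: {loss.item():.4f}")
```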
Related papers
- Demarked: A Strategy for Enhanced Abusive Speech Moderation through Counterspeech, Detoxification, and Message Management [71.99446449877038]
We propose a more comprehensive approach called Demarcation, which scores abusive speech based on four aspects: (i) severity scale; (ii) presence of a target; (iii) context scale; (iv) legal scale.
Our work aims to inform future strategies for effectively addressing abusive speech online.
arXiv Detail & Related papers (2024-06-27T21:45:33Z) - LLM-based Rewriting of Inappropriate Argumentation using Reinforcement Learning from Machine Feedback [16.57980268646285]
This paper studies how inappropriate language in arguments can be computationally mitigated.
We propose a reinforcement learning-based rewriting approach that balances content preservation and appropriateness.
We evaluate different weighting schemes for the reward function in both absolute and relative human assessment studies.
arXiv Detail & Related papers (2024-06-05T15:18:08Z) - Unintended Impacts of LLM Alignment on Global Representation [62.6579934112071]
Developers align Large Language Models (LLMs) to user preferences through a variety of procedures, such as Reinforcement Learning From Human Feedback (RLHF) and Direct Preference Optimization (DPO).
We explore how alignment impacts performance along three axes of global representation: English dialects, multilingualism, and opinions from and about countries worldwide.
We conclude by discussing design decisions that led to these unintended impacts and recommendations for more equitable preference tuning.
arXiv Detail & Related papers (2024-02-22T23:31:22Z) - Aligning Large Language Models by On-Policy Self-Judgment [49.31895979525054]
Existing approaches for aligning large language models with human preferences face a trade-off that requires a separate reward model (RM) for on-policy learning.
We present a novel alignment framework, SELF-JUDGE, that does on-policy learning and is parameter efficient.
We show that rejection sampling by itself can further improve performance without an additional evaluator.
arXiv Detail & Related papers (2024-02-17T11:25:26Z) - When Reviewers Lock Horn: Finding Disagreement in Scientific Peer
Reviews [24.875901048855077]
We introduce a novel task of automatically identifying contradictions among reviewers on a given article.
To the best of our knowledge, we make the first attempt to identify disagreements among peer reviewers automatically.
arXiv Detail & Related papers (2023-10-28T11:57:51Z) - Large Language Models are not Fair Evaluators [60.27164804083752]
We find that the quality ranking of candidate responses can be easily hacked by altering their order of appearance in the context.
This manipulation allows us to skew the evaluation result, making one model appear considerably superior to the other.
We propose a framework with three simple yet effective strategies to mitigate this issue.
arXiv Detail & Related papers (2023-05-29T07:41:03Z) - A User-Driven Framework for Regulating and Auditing Social Media [94.70018274127231]
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain "similar" informational content as their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement.
arXiv Detail & Related papers (2023-04-20T17:53:34Z) - A Large Scale Randomized Controlled Trial on Herding in Peer-Review
Discussions [33.261698377782075]
We aim to understand whether reviewers and more senior decision makers get disproportionately influenced by the first argument presented in a discussion.
Specifically, we design and execute a randomized controlled trial with the goal of testing for the conditional causal effect of the discussion initiator's opinion on the outcome of a paper.
arXiv Detail & Related papers (2020-11-30T18:23:07Z) - Deep Just-In-Time Inconsistency Detection Between Comments and Source
Code [51.00904399653609]
In this paper, we aim to detect whether a comment becomes inconsistent as a result of changes to the corresponding body of code.
We develop a deep-learning approach that learns to correlate a comment with code changes.
We show the usefulness of our approach by combining it with a comment update model to build a more comprehensive automatic comment maintenance system.
arXiv Detail & Related papers (2020-10-04T16:49:28Z) - Regulating algorithmic filtering on social media [14.873907857806357]
Social media platforms have the ability to influence users' perceptions and decisions, from their dining choices to their voting preferences.
Many have called for regulations on filtering algorithms, but designing and enforcing regulations remains challenging.
We find that there are conditions under which the regulation does not place a high performance cost on the platform.
arXiv Detail & Related papers (2020-06-17T04:14:20Z) - A Legal Approach to Hate Speech: Operationalizing the EU's Legal
Framework against the Expression of Hatred as an NLP Task [2.248133901806859]
We propose a 'legal approach' to hate speech detection by operationalizing the decision as to whether a post is subject to criminal law.
We show that, by breaking the legal assessment down into a series of simpler sub-decisions, even laypersons can annotate consistently.
arXiv Detail & Related papers (2020-04-07T14:13:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.