Building Understandable Messaging for Policy and Evidence Review (BUMPER) with AI
- URL: http://arxiv.org/abs/2407.12812v1
- Date: Thu, 27 Jun 2024 05:03:03 GMT
- Title: Building Understandable Messaging for Policy and Evidence Review (BUMPER) with AI
- Authors: Katherine A. Rosenfeld, Maike Sonnewald, Sonia J. Jindal, Kevin A. McCarthy, Joshua L. Proctor
- Abstract summary: We introduce a framework for the use of large language models (LLMs) in Building Understandable Messaging for Policy and Evidence Review (BUMPER).
LLMs are capable of providing interfaces for understanding and synthesizing large databases of diverse media.
We argue that this framework can facilitate accessibility of and confidence in scientific evidence for policymakers.
- Score: 0.3495246564946556
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a framework for the use of large language models (LLMs) in Building Understandable Messaging for Policy and Evidence Review (BUMPER). LLMs are proving capable of providing interfaces for understanding and synthesizing large databases of diverse media. This presents an exciting opportunity to supercharge the translation of scientific evidence into policy and action, thereby improving livelihoods around the world. However, these models also pose challenges related to access, trustworthiness, and accountability. The BUMPER framework is built atop a scientific knowledge base (e.g., documentation, code, survey data) by the scientists themselves (e.g., an individual contributor, lab, or consortium). We focus on a solution that builds trustworthiness through transparency, scope-limiting, explicit checks, and uncertainty measures. LLMs are being adopted rapidly, and the consequences of this adoption are poorly understood. The framework addresses open questions regarding the reliability of LLMs and their use in high-stakes applications. We provide a worked example in health policy for a model designed to inform measles control programs. We argue that this framework can facilitate accessibility of and confidence in scientific evidence for policymakers, drive a focus on policy-relevance and translatability for researchers, and ultimately increase and accelerate the impact of scientific knowledge used for policy decisions.
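The abstract does not include an implementation, but the ideas it names (a scientist-curated knowledge base, scope-limiting, explicit checks, transparency about sources, and uncertainty measures) can be illustrated with a short sketch. The Python below is a hypothetical, minimal illustration of those ideas only: the names (`KnowledgeBase`, `answer_policy_question`, `call_llm`) and the toy keyword-overlap retrieval are placeholders invented for this example, not the authors' system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of BUMPER-style guardrails around an LLM:
# scope-limiting, source transparency, and a simple uncertainty flag.

@dataclass
class KnowledgeBase:
    """Scientist-maintained evidence store (e.g., documentation, code, survey data)."""
    topic: str                                      # declared scope, e.g. "measles"
    documents: dict = field(default_factory=dict)   # doc_id -> text

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned string here."""
    return f"[model response grounded in: {prompt[:60]}...]"

def answer_policy_question(question: str, kb: KnowledgeBase) -> dict:
    # Scope-limiting: refuse questions outside the knowledge base's declared topic.
    if kb.topic.lower() not in question.lower():
        return {"answer": None, "reason": f"Out of scope; this assistant only covers {kb.topic}."}

    # Transparency: collect the documents the answer will cite (toy keyword overlap).
    words = set(question.lower().split())
    sources = [doc_id for doc_id, text in kb.documents.items()
               if words & set(text.lower().split())]

    answer = call_llm(f"Using ONLY sources {sources}, answer: {question}")

    # Explicit check + uncertainty: flag answers that rest on thin evidence.
    uncertainty = "high" if len(sources) < 2 else "low"
    return {"answer": answer, "sources": sources, "uncertainty": uncertainty}

if __name__ == "__main__":
    kb = KnowledgeBase(
        topic="measles",
        documents={
            "sim_report": "Simulated measles vaccination coverage scenarios for routine immunization.",
            "survey_readme": "Household survey data on measles immunization coverage and timeliness.",
        },
    )
    print(answer_policy_question("What coverage is needed for measles control?", kb))
```

In a real deployment, the keyword-overlap retrieval and the `call_llm` stub would be replaced by the knowledge base and model access the contributing scientists actually maintain; the sketch only shows where scope checks, source citation, and uncertainty flags would sit.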
Related papers
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - Understanding the Interplay between Parametric and Contextual Knowledge for Large Language Models [85.13298925375692]
Large language models (LLMs) encode vast amounts of knowledge during pre-training.
LLMs can be enhanced by incorporating contextual knowledge (CK).
Can LLMs effectively integrate their internal parametric knowledge (PK) with external CK to solve complex problems?
arXiv Detail & Related papers (2024-10-10T23:09:08Z) - Persuasion Games using Large Language Models [0.0]
Large Language Models (LLMs) have emerged as formidable instruments capable of comprehending and producing human-like text.
This paper explores the potential of LLMs to shape user perspectives and subsequently influence their decisions on particular tasks.
This capability finds applications in diverse domains such as investment, credit cards, and insurance.
arXiv Detail & Related papers (2024-08-28T15:50:41Z) - Understanding the Relationship between Prompts and Response Uncertainty in Large Language Models [55.332004960574004]
Large language models (LLMs) are widely used in decision-making, but their reliability, especially in critical tasks like healthcare, is not well-established.
This paper investigates how the uncertainty of responses generated by LLMs relates to the information provided in the input prompt.
We propose a prompt-response concept model that explains how LLMs generate responses and helps understand the relationship between prompts and response uncertainty.
arXiv Detail & Related papers (2024-07-20T11:19:58Z) - Towards Trustworthy AI: A Review of Ethical and Robust Large Language Models [1.7466076090043157]
Large Language Models (LLMs) could transform many fields, but their fast development creates significant challenges for oversight, ethical creation, and building user trust.
This comprehensive review looks at key trust issues in LLMs, such as unintended harms, lack of transparency, vulnerability to attacks, alignment with human values, and environmental impact.
To tackle these issues, we suggest combining ethical oversight, industry accountability, regulation, and public involvement.
arXiv Detail & Related papers (2024-06-01T14:47:58Z) - CLAMBER: A Benchmark of Identifying and Clarifying Ambiguous Information Needs in Large Language Models [60.59638232596912]
We introduce CLAMBER, a benchmark for evaluating how well large language models (LLMs) identify and clarify ambiguous information needs.
Building upon a taxonomy of ambiguous queries, we construct 12K high-quality samples to assess the strengths, weaknesses, and potential risks of various off-the-shelf LLMs.
Our findings indicate the limited practical utility of current LLMs in identifying and clarifying ambiguous user queries.
arXiv Detail & Related papers (2024-05-20T14:34:01Z) - Can Reinforcement Learning support policy makers? A preliminary study with Integrated Assessment Models [7.1307809008103735]
Integrated Assessment Models (IAMs) attempt to link the main features of society and the economy with the biosphere in a single modelling framework.
This paper empirically demonstrates that modern Reinforcement Learning can be used to probe IAMs and explore the space of solutions in a more principled manner.
arXiv Detail & Related papers (2023-12-11T17:04:30Z) - RECALL: A Benchmark for LLMs Robustness against External Counterfactual Knowledge [69.79676144482792]
This study aims to evaluate the ability of LLMs to distinguish reliable information from external knowledge.
Our benchmark consists of two tasks, Question Answering and Text Generation, and for each task, we provide models with a context containing counterfactual information.
arXiv Detail & Related papers (2023-11-14T13:24:19Z) - Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation [109.8527403904657]
We show that large language models (LLMs) possess unwavering confidence in their knowledge and handle conflicts between internal and external knowledge poorly.
Retrieval augmentation proves to be an effective approach in enhancing LLMs' awareness of knowledge boundaries.
We propose a simple method for dynamically utilizing supporting documents based on our judgement strategy.
arXiv Detail & Related papers (2023-07-20T16:46:10Z) - When Giant Language Brains Just Aren't Enough! Domain Pizzazz with Knowledge Sparkle Dust [15.484175299150904]
This paper presents an empirical analysis aimed at bridging the gap in adapting large language models to practical use cases.
We select insurance question answering (QA) as a case study because of the reasoning challenges it poses.
Based on this task, we design a new model that relies on LLMs empowered with additional knowledge extracted from insurance policy rulebooks and DBPedia.
arXiv Detail & Related papers (2023-05-12T03:49:59Z)