Factuality Challenges in the Era of Large Language Models
- URL: http://arxiv.org/abs/2310.05189v2
- Date: Tue, 10 Oct 2023 03:34:46 GMT
- Title: Factuality Challenges in the Era of Large Language Models
- Authors: Isabelle Augenstein, Timothy Baldwin, Meeyoung Cha, Tanmoy
Chakraborty, Giovanni Luca Ciampaglia, David Corney, Renee DiResta, Emilio
Ferrara, Scott Hale, Alon Halevy, Eduard Hovy, Heng Ji, Filippo Menczer,
Ruben Miguez, Preslav Nakov, Dietram Scheufele, Shivam Sharma, Giovanni Zagni
- Abstract summary: Large Language Models (LLMs) generate false, erroneous, or misleading content.
LLMs can be exploited for malicious applications.
This poses a significant challenge to society in terms of the potential deception of users.
- Score: 113.3282633305118
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The emergence of tools based on Large Language Models (LLMs), such as
OpenAI's ChatGPT, Microsoft's Bing Chat, and Google's Bard, has garnered
immense public attention. These incredibly useful, natural-sounding tools mark
significant advances in natural language generation, yet they exhibit a
propensity to generate false, erroneous, or misleading content -- commonly
referred to as "hallucinations." Moreover, LLMs can be exploited for malicious
applications, such as generating false but credible-sounding content and
profiles at scale. This poses a significant challenge to society in terms of
the potential deception of users and the increasing dissemination of inaccurate
information. In light of these risks, we explore the kinds of technological
innovations, regulatory reforms, and AI literacy initiatives needed from
fact-checkers, news organizations, and the broader research and policy
communities. By identifying the risks, the imminent threats, and some viable
solutions, we seek to shed light on navigating various aspects of veracity in
the era of generative AI.
Related papers
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM Systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - Unique Security and Privacy Threats of Large Language Model: A Comprehensive Survey [46.19229410404056]
Large language models (LLMs) have made remarkable advancements in natural language processing.
These models are trained on vast datasets to exhibit powerful language understanding and generation capabilities.
Privacy and security issues have been revealed throughout their life cycle.
arXiv Detail & Related papers (2024-06-12T07:55:32Z) - Evaluating Robustness of Generative Search Engine on Adversarial Factual Questions [89.35345649303451]
Generative search engines have the potential to transform how people seek information online.
But generated responses from existing large language models (LLMs)-backed generative search engines may not always be accurate.
Retrieval-augmented generation exacerbates safety concerns, since adversaries may successfully evade the entire system.
arXiv Detail & Related papers (2024-02-25T11:22:19Z) - Fortifying Ethical Boundaries in AI: Advanced Strategies for Enhancing
Security in Large Language Models [3.9490749767170636]
Large language models (LLMs) have revolutionized text generation, translation, and question-answering tasks.
Despite their widespread use, LLMs present challenges such as ethical dilemmas when models are compelled to respond inappropriately.
This paper addresses these challenges by introducing a multi-pronged approach that includes: 1) filtering sensitive vocabulary from user input to prevent unethical responses; 2) detecting role-playing to halt interactions that could lead to 'prison break' scenarios; and 4) extending these methodologies to various LLM derivatives like Multi-Model Large Language Models (MLLMs)
arXiv Detail & Related papers (2024-01-27T08:09:33Z) - Insights into Classifying and Mitigating LLMs' Hallucinations [48.04565928175536]
This paper delves into the underlying causes of AI hallucination and elucidates its significance in artificial intelligence.
We explore potential strategies to mitigate hallucinations, aiming to enhance the overall reliability of large language models.
arXiv Detail & Related papers (2023-11-14T12:30:28Z) - On the Safety of Open-Sourced Large Language Models: Does Alignment
Really Prevent Them From Being Misused? [49.99955642001019]
We show that open-sourced, aligned large language models could be easily misguided to generate undesired content.
Our key idea is to directly manipulate the generation process of open-sourced LLMs to misguide it to generate undesired content.
arXiv Detail & Related papers (2023-10-02T19:22:01Z) - Amplifying Limitations, Harms and Risks of Large Language Models [1.0152838128195467]
We present this article as a small gesture in an attempt to counter what appears to be exponentially growing hype around Artificial Intelligence.
It may also help those outside of the field to become more informed about some of the limitations of AI technology.
arXiv Detail & Related papers (2023-07-06T11:53:45Z) - Countering Malicious Content Moderation Evasion in Online Social
Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of evasion of content.
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.