The Future of Combating Rumors? Retrieval, Discrimination, and Generation
- URL: http://arxiv.org/abs/2403.20204v1
- Date: Fri, 29 Mar 2024 14:32:41 GMT
- Title: The Future of Combating Rumors? Retrieval, Discrimination, and Generation
- Authors: Junhao Xu, Longdi Xian, Zening Liu, Mingliang Chen, Qiuyang Yin, Fenghua Song
- Abstract summary: Artificial Intelligence Generated Content (AIGC) technology development has facilitated the creation of rumors with misinformation.
Current rumor detection efforts fall short by merely labeling potential misinformation.
Our proposed comprehensive debunking process not only detects rumors but also generates explanatory content refuting the false information.
- Score: 5.096418029931098
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence Generated Content (AIGC) technology development has facilitated the creation of rumors with misinformation, impacting societal, economic, and political ecosystems and challenging democracy. Current rumor detection efforts fall short by merely labeling potential misinformation (a classification task), which inadequately addresses the issue, and it is unrealistic to have authoritative institutions debunk every piece of information on social media. Our proposed comprehensive debunking process not only detects rumors but also generates explanatory content refuting the false information. The Expert-Citizen Collective Wisdom (ECCW) module we designed ensures high-precision assessment of the credibility of information, and the retrieval module is responsible for retrieving relevant knowledge from a real-time-updated debunking database based on information keywords. By using prompt engineering techniques, we feed the results and retrieved knowledge into an LLM (Large Language Model), achieving satisfactory discrimination and explanatory effects while eliminating the need for fine-tuning, saving computational costs, and contributing to debunking efforts.
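The pipeline the abstract describes (credibility assessment, keyword-based retrieval from a debunking database, then prompt assembly for an LLM) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the database entries, the overlap-based ranking, and the prompt template are all assumptions, and the final LLM call is omitted.

```python
# Sketch of the retrieve-then-prompt debunking pipeline: rank debunking
# entries by keyword overlap with the claim, then assemble a single prompt
# combining the credibility score and retrieved knowledge for an LLM.

def retrieve(claim_keywords, debunk_db, top_k=2):
    """Rank debunking entries by keyword overlap with the claim."""
    scored = []
    for entry in debunk_db:
        overlap = len(set(claim_keywords) & set(entry["keywords"]))
        if overlap:
            scored.append((overlap, entry["text"]))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:top_k]]

def build_prompt(claim, credibility_score, evidence):
    """Combine the credibility assessment (ECCW-style score) and the
    retrieved knowledge into one prompt (the prompt-engineering step)."""
    lines = [
        f"Claim: {claim}",
        f"Credibility assessment (score in [0, 1]): {credibility_score:.2f}",
        "Relevant debunking knowledge:",
    ]
    lines += [f"- {e}" for e in evidence] or ["- (none found)"]
    lines.append("Task: judge whether the claim is a rumor and explain why.")
    return "\n".join(lines)

# Toy debunking database; real entries would come from a live-updated source.
debunk_db = [
    {"keywords": {"vaccine", "microchip"},
     "text": "No vaccine contains tracking microchips."},
    {"keywords": {"5g", "virus"},
     "text": "Radio waves cannot carry or create viruses."},
]

evidence = retrieve({"vaccine", "microchip", "rumor"}, debunk_db)
prompt = build_prompt("Vaccines contain microchips.", 0.12, evidence)
```

The assembled `prompt` string would then be sent to an LLM, which (per the paper) removes the need for task-specific fine-tuning.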
Related papers
- Missci: Reconstructing Fallacies in Misrepresented Science [84.32990746227385]
Health-related misinformation on social networks can lead to poor decision-making and real-world dangers.
Missci is a novel argumentation theoretical model for fallacious reasoning.
We present Missci as a dataset to test the critical reasoning abilities of large language models.
arXiv Detail & Related papers (2024-06-05T12:11:10Z) - Countering Misinformation via Emotional Response Generation [15.383062216223971]
The proliferation of misinformation on social media platforms (SMPs) poses a significant danger to public health, social cohesion, and democracy.
Previous research has shown how social correction can be an effective way to curb misinformation.
We present VerMouth, the first large-scale dataset comprising roughly 12,000 claim-response pairs.
arXiv Detail & Related papers (2023-11-17T15:37:18Z) - Beyond Factuality: A Comprehensive Evaluation of Large Language Models
as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z) - Factuality Challenges in the Era of Large Language Models [113.3282633305118]
Large Language Models (LLMs) generate false, erroneous, or misleading content.
LLMs can be exploited for malicious applications.
This poses a significant challenge to society in terms of the potential deception of users.
arXiv Detail & Related papers (2023-10-08T14:55:02Z) - Reinforcement Learning-based Counter-Misinformation Response Generation:
A Case Study of COVID-19 Vaccine Misinformation [19.245814221211415]
Non-expert ordinary users act as eyes-on-the-ground who proactively counter misinformation.
In this work, we create two novel datasets of misinformation and counter-misinformation response pairs.
We propose MisinfoCorrect, a reinforcement learning-based framework that learns to generate counter-misinformation responses.
arXiv Detail & Related papers (2023-03-11T15:55:01Z) - Addressing contingency in algorithmic (mis)information classification:
Toward a responsible machine learning agenda [0.9659642285903421]
Data scientists need to take a stance on the objectivity, authoritativeness, and legitimacy of the "sources of truth" used for model training and testing.
Despite (and due to) their reported high accuracy and performance, ML-driven moderation systems have the potential to shape online public debate and create downstream negative impacts such as undue censorship and the reinforcing of false beliefs.
arXiv Detail & Related papers (2022-10-05T17:34:51Z) - Synthetic Disinformation Attacks on Automated Fact Verification Systems [53.011635547834025]
We explore the sensitivity of automated fact-checkers to synthetic adversarial evidence in two simulated settings.
We show that these systems suffer significant performance drops against these attacks.
We discuss the growing threat of modern NLG systems as generators of disinformation.
arXiv Detail & Related papers (2022-02-18T19:01:01Z) - Attacking Open-domain Question Answering by Injecting Misinformation [116.25434773461465]
We study the risk of misinformation to Question Answering (QA) models by investigating the sensitivity of open-domain QA models to misinformation documents.
Experiments show that QA models are vulnerable to even small amounts of evidence contamination brought by misinformation.
We discuss the necessity of building a misinformation-aware QA system that integrates question-answering and misinformation detection.
arXiv Detail & Related papers (2021-10-15T01:55:18Z) - An Agenda for Disinformation Research [3.083055913556838]
Disinformation erodes trust in the socio-political institutions that are the fundamental fabric of democracy.
The distribution of false, misleading, or inaccurate information with the intent to deceive is an existential threat to the United States.
New tools and approaches must be developed to leverage these affordances to understand and address this growing challenge.
arXiv Detail & Related papers (2020-12-15T19:32:36Z) - Sentimental LIAR: Extended Corpus and Deep Learning Models for Fake
Claim Classification [11.650381752104296]
This paper proposes a novel deep learning approach for automated detection of false short-text claims on social media.
We first introduce Sentimental LIAR, which extends the LIAR dataset of short claims by adding features based on sentiment and emotion analysis of claims.
Our results demonstrate that the proposed architecture trained on Sentimental LIAR can achieve an accuracy of 70%, which is an improvement of 30% over previously reported results for the LIAR benchmark.
arXiv Detail & Related papers (2020-09-01T02:48:11Z) - Machine Learning Explanations to Prevent Overtrust in Fake News
Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.