A Modality-level Explainable Framework for Misinformation Checking in
Social Networks
- URL: http://arxiv.org/abs/2212.04272v1
- Date: Thu, 8 Dec 2022 13:57:06 GMT
- Title: A Modality-level Explainable Framework for Misinformation Checking in
Social Networks
- Authors: Vítor Lourenço and Aline Paes
- Abstract summary: This paper addresses automatic misinformation checking in social networks from a multimodal perspective.
Our framework comprises a misinformation classifier assisted by explainable methods to generate modality-oriented explainable inferences.
- Score: 2.4028383570062606
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The spread of false information is a rising concern worldwide with
critical social impact, inspiring the emergence of fact-checking organizations
to mitigate misinformation dissemination. However, human-driven verification
is time-consuming and creates a bottleneck: trustworthy information cannot be
checked at the same pace it emerges. Since misinformation relates not only
to the content itself but also to other social features, this paper addresses
automatic misinformation checking in social networks from a multimodal
perspective. Moreover, as simply labeling a piece of news as incorrect may not
convince citizens and, even worse, may strengthen confirmation bias, the
proposal is a modality-level explainable misinformation classifier
framework. Our framework comprises a misinformation classifier assisted by
explainable methods that generate modality-oriented explainable inferences.
Preliminary findings show that the misinformation classifier benefits from
multimodal information encoding and that the modality-oriented explanation
mechanism increases both the interpretability and the completeness of the
inferences.
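The abstract does not include an implementation, but the idea of modality-level explanations can be illustrated with a minimal sketch. The toy posts, the social features, the use of scikit-learn, and the coefficient-times-value attribution below are illustrative assumptions, not the authors' actual classifier or explanation method.

```python
# Minimal sketch (not the paper's implementation): a multimodal misinformation
# classifier whose feature attributions are aggregated per modality.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Toy posts: text content plus simple social features (followers, share count).
texts = [
    "miracle cure revealed, doctors hate it",
    "official report confirms vaccine safety study",
    "shocking secret the government hides from you",
    "peer-reviewed analysis of election turnout data",
]
social = np.array([[120, 950], [5800, 40], [75, 2100], [9200, 15]], dtype=float)
labels = np.array([1, 0, 1, 0])  # 1 = misinformation, 0 = trustworthy

# Encode each modality separately, then concatenate into one feature space.
vectorizer = TfidfVectorizer()
X_text = vectorizer.fit_transform(texts).toarray()
X_social = StandardScaler().fit_transform(social)
X = np.hstack([X_text, X_social])

clf = LogisticRegression(max_iter=1000).fit(X, labels)

def modality_explanation(x):
    """Aggregate per-feature attributions (coefficient * feature value) by modality."""
    contrib = clf.coef_[0] * x
    n_text = X_text.shape[1]
    return {
        "text": float(contrib[:n_text].sum()),
        "social": float(contrib[n_text:].sum()),
    }

for i, text in enumerate(texts):
    pred = clf.predict(X[i : i + 1])[0]
    print(pred, modality_explanation(X[i]), text)
```

The per-modality sums indicate whether the textual content or the social context drove each prediction, which is the kind of modality-oriented explanation the framework aims to provide; the paper's own pipeline may use different encoders and explainers.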
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure the effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z) - Detecting misinformation through Framing Theory: the Frame Element-based
Model [6.4618518529384765]
We focus on the nuanced manipulation of narrative frames - an under-explored area within the AI community.
We propose an innovative approach leveraging the power of pre-trained Large Language Models and deep neural networks to detect misinformation.
arXiv Detail & Related papers (2024-02-19T21:50:42Z) - Interpretable Detection of Out-of-Context Misinformation with Neural-Symbolic-Enhanced Large Multimodal Model [16.348950072491697]
Misinformation creators now increasingly tend to use out-of-context multimedia content to deceive the public and fake news detection systems.
This new type of misinformation increases the difficulty of not only detection but also clarification, because each individual modality is close enough to true information.
In this paper we explore how to achieve interpretable cross-modal de-contextualization detection that simultaneously identifies the mismatched pairs and the cross-modal contradictions.
arXiv Detail & Related papers (2023-04-15T21:11:55Z) - Adherence to Misinformation on Social Media Through Socio-Cognitive and
Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z) - Exploring the Trade-off between Plausibility, Change Intensity and
Adversarial Power in Counterfactual Explanations using Multi-objective
Optimization [73.89239820192894]
We argue that automated counterfactual generation should take into account several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z) - Applying Automatic Text Summarization for Fake News Detection [4.2177790395417745]
The distribution of fake news is not a new problem, but it is a rapidly growing one.
We present an approach to the problem that leverages the power of transformer-based language models.
Our framework, CMTR-BERT, combines multiple text representations and enables the incorporation of contextual information.
arXiv Detail & Related papers (2022-04-04T21:00:55Z) - DISCO: Comprehensive and Explainable Disinformation Detection [71.5283511752544]
We propose a comprehensive and explainable disinformation detection framework called DISCO.
We demonstrate DISCO on a real-world fake news detection task with satisfactory detection accuracy and explanation quality.
We expect that our demo could pave the way for addressing the limitations of identification, comprehension, and explainability as a whole.
arXiv Detail & Related papers (2022-03-09T18:17:25Z) - An Agenda for Disinformation Research [3.083055913556838]
Disinformation erodes trust in the socio-political institutions that are the fundamental fabric of democracy.
The distribution of false, misleading, or inaccurate information with the intent to deceive is an existential threat to the United States.
New tools and approaches must be developed to leverage these affordances to understand and address this growing challenge.
arXiv Detail & Related papers (2020-12-15T19:32:36Z) - Machine Learning Explanations to Prevent Overtrust in Fake News
Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z) - Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)