Linked Credibility Reviews for Explainable Misinformation Detection
- URL: http://arxiv.org/abs/2008.12742v1
- Date: Fri, 28 Aug 2020 16:55:43 GMT
- Title: Linked Credibility Reviews for Explainable Misinformation Detection
- Authors: Ronald Denaux and Jose Manuel Gomez-Perez
- Abstract summary: We propose an architecture based on a core concept of Credibility Reviews (CRs) that can be used to build networks of distributed bots that collaborate for misinformation detection.
CRs serve as building blocks to compose graphs of (i) web content, (ii) existing credibility signals --fact-checked claims and reputation reviews of websites--, and (iii) automatically computed reviews.
We implement this architecture on top of lightweight extensions to Schema.org and services providing generic NLP tasks for semantic similarity and stance detection.
- Score: 1.713291434132985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, misinformation on the Web has become increasingly rampant.
The research community has responded by proposing systems and challenges, which
are beginning to be useful for (various subtasks of) detecting misinformation.
However, most proposed systems are based on deep learning techniques which are
fine-tuned to specific domains, are difficult to interpret and produce results
which are not machine readable. This limits their applicability and adoption as
they can only be used by a select expert audience in very specific settings. In
this paper we propose an architecture based on a core concept of Credibility
Reviews (CRs) that can be used to build networks of distributed bots that
collaborate for misinformation detection. The CRs serve as building blocks to
compose graphs of (i) web content, (ii) existing credibility signals
--fact-checked claims and reputation reviews of websites--, and (iii)
automatically computed reviews. We implement this architecture on top of
lightweight extensions to Schema.org and services providing generic NLP tasks
for semantic similarity and stance detection. Evaluations on existing datasets
of social-media posts, fake news and political speeches demonstrate several
advantages over existing systems: extensibility, domain-independence,
composability, explainability and transparency via provenance. Furthermore, we
obtain competitive results without requiring fine-tuning and establish a new
state of the art on the Clef'18 CheckThat! Factuality task.
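To make the composition concrete, the following is a minimal Python sketch of how a Credibility Review graph might be assembled. A CR is modeled as a Schema.org-style review carrying a rating, a confidence, and provenance links; a toy bag-of-words cosine stands in for the paper's semantic-similarity service (a sentence encoder in practice). The field names, the confidence-discounting rule, and the bot names are illustrative assumptions, not the authors' exact implementation.

```python
import math
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class CredibilityReview:
    """Schema.org-style review: a credibility rating for an item, with provenance."""
    item_reviewed: str    # the claim or web content under review
    rating_value: float   # credibility in [-1, 1]
    confidence: float     # reviewer confidence in [0, 1]
    author: str           # the bot or service that produced this review
    is_based_on: list = field(default_factory=list)  # supporting CRs / signals

def bow_cosine(a: str, b: str) -> float:
    """Toy stand-in for the semantic-similarity service."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def review_claim(claim, fact_checks):
    """Compose a CR for `claim` from the most similar fact-checked claim.

    `fact_checks` holds (checked_claim, verdict) pairs with verdict in [-1, 1].
    Confidence is discounted by match similarity, so weak matches yield
    low-confidence reviews (an assumed aggregation rule, not the paper's).
    """
    best_claim, verdict = max(fact_checks, key=lambda fc: bow_cosine(claim, fc[0]))
    base = CredibilityReview(best_claim, verdict, 1.0, author="fact-checker")
    return CredibilityReview(
        item_reviewed=claim,
        rating_value=verdict,
        confidence=bow_cosine(claim, best_claim),  # discount by match quality
        author="claim-similarity-bot",
        is_based_on=[base],  # provenance edge back to the credibility signal
    )

if __name__ == "__main__":
    checks = [("Vaccines cause autism", -1.0), ("The Earth orbits the Sun", 1.0)]
    cr = review_claim("vaccines cause autism in children", checks)
    print(cr.rating_value, round(cr.confidence, 2), cr.is_based_on[0].item_reviewed)
```

Because every CR records what it is based on, a consumer can walk the graph from a final verdict back to the fact-check or website reputation review that produced it, which is the mechanism behind the explainability and provenance claims above.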
Related papers
- FineFake: A Knowledge-Enriched Dataset for Fine-Grained Multi-Domain Fake News Detection [54.37159298632628]
FineFake is a multi-domain knowledge-enhanced benchmark for fake news detection.
FineFake encompasses 16,909 data samples spanning six semantic topics and eight platforms.
The entire FineFake project is publicly accessible as an open-source repository.
arXiv Detail & Related papers (2024-03-30T14:39:09Z)
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- COVIDFakeExplainer: An Explainable Machine Learning based Web Application for Detecting COVID-19 Fake News [1.257018053967058]
This paper finds BERT to be the best-performing model for fake news detection among those evaluated.
We have implemented a browser extension, enhanced with explainability features, enabling real-time identification of fake news.
Our experiments affirm BERT's exceptional accuracy in detecting COVID-19-related fake news.
arXiv Detail & Related papers (2023-10-21T02:11:39Z)
- Building Interpretable and Reliable Open Information Retriever for New Domains Overnight [67.03842581848299]
Information retrieval is a critical component for many downstream tasks such as open-domain question answering (QA).
We propose an information retrieval pipeline that uses an entity/event linking model and a query decomposition model to focus more accurately on the different information units of the query.
We show that, while being more interpretable and reliable, our proposed pipeline significantly improves passage coverages and denotation accuracies across five IR and QA benchmarks.
arXiv Detail & Related papers (2023-08-09T07:47:17Z)
- Verifying the Robustness of Automatic Credibility Assessment [79.08422736721764]
Text classification methods have been widely investigated as a way to detect content of low credibility.
In some cases, insignificant changes in the input text can mislead the models.
We introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study of fairness disparities in peer review with the help of large language models (LLMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Deep Learning Architecture for Automatic Essay Scoring [0.0]
We propose a novel architecture based on recurrent neural networks (RNNs) and convolutional neural networks (CNNs).
In the proposed architecture, the multichannel convolutional layer learns and captures contextual features of word n-grams from the word embedding vectors; a minimal sketch of such a multichannel convolutional encoder appears after this list.
Our proposed system achieves significantly higher grading accuracy than other deep learning-based AES systems.
arXiv Detail & Related papers (2022-06-16T14:56:24Z)
- Neuro-Symbolic Artificial Intelligence (AI) for Intent based Semantic Communication [85.06664206117088]
6G networks must consider the semantics and effectiveness (at the end user) of data transmission.
Neuro-symbolic (NeSy) AI is proposed as a pillar for learning the causal structure behind the observed data.
GFlowNet is leveraged for the first time in a wireless system to learn the probabilistic structure that generates the data.
arXiv Detail & Related papers (2022-05-22T07:11:57Z)
- Discriminatory Expressions to Produce Interpretable Models in Short Documents [0.0]
State-of-the-art models are black boxes that should not be used to solve problems that may have a social impact.
This paper presents a feature selection mechanism that is able to improve comprehensibility by using fewer but more meaningful features.
arXiv Detail & Related papers (2020-11-27T19:00:50Z)
- P2ExNet: Patch-based Prototype Explanation Network [5.557646286040063]
We propose a novel interpretable network scheme, designed to inherently use an explainable reasoning process inspired by human cognition.
P2ExNet reaches performance comparable to its counterparts while inherently providing understandable and traceable decisions.
arXiv Detail & Related papers (2020-05-05T08:45:43Z)
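As noted in the essay-scoring entry above, here is a minimal PyTorch sketch of a multichannel convolutional text encoder: parallel Conv1d branches with different kernel widths act as word n-gram detectors over the embedding sequence and are max-pooled over time. The vocabulary size, kernel widths, and linear scoring head are illustrative assumptions, and the paper's full architecture (which also uses recurrent layers) is not reproduced here.

```python
import torch
import torch.nn as nn

class MultichannelCNNScorer(nn.Module):
    """Toy essay scorer: each Conv1d branch with kernel size k detects
    k-gram patterns over word embeddings (an assumed, Kim-2014-style
    reading of the multichannel convolutional layer)."""

    def __init__(self, vocab_size=10_000, embed_dim=64, n_filters=32,
                 kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One convolution per n-gram width; kernel size k spans k word vectors.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, n_filters, k) for k in kernel_sizes
        )
        self.head = nn.Linear(n_filters * len(kernel_sizes), 1)  # essay score

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # Max-pool each feature map over time: "was this n-gram pattern present?"
        feats = [conv(x).relu().amax(dim=2) for conv in self.convs]
        return self.head(torch.cat(feats, dim=1)).squeeze(-1)

model = MultichannelCNNScorer()
essays = torch.randint(0, 10_000, (4, 200))  # 4 toy essays, 200 token ids each
print(model(essays).shape)                   # torch.Size([4])
```

Concatenating the pooled branches gives the scorer a view over several n-gram widths at once, which is the usual motivation for the multichannel design.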