UofA-Truth at Factify 2022 : Transformer And Transfer Learning Based
Multi-Modal Fact-Checking
- URL: http://arxiv.org/abs/2203.07990v1
- Date: Fri, 28 Jan 2022 18:13:03 GMT
- Title: UofA-Truth at Factify 2022 : Transformer And Transfer Learning Based
Multi-Modal Fact-Checking
- Authors: Abhishek Dhankar, Osmar R. Zaïane and Francois Bolduc
- Abstract summary: We attempted to tackle the problem of automated misinformation/disinformation detection in multi-modal news sources.
Our model produced an F1-weighted score of 74.807%, which was the fourth best out of all the submissions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Identifying fake news is a very difficult task, especially when considering
the multiple modes of conveying information through text, image, video and/or
audio. We attempted to tackle the problem of automated
misinformation/disinformation detection in multi-modal news sources (including
text and images) through our simple, yet effective, approach in the FACTIFY
shared task at De-Factify@AAAI2022. Our model produced an F1-weighted score of
74.807%, which was the fourth best among all submissions. In this paper, we
explain the approach we took to the shared task.
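For context on the F1-weighted metric reported above, this is how a support-weighted F1 score is typically computed: per-class F1 scores are averaged with each class weighted by its number of true instances. This is a minimal sketch, not the authors' evaluation code (scikit-learn's `f1_score(average='weighted')` implements the same idea):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Support-weighted F1: per-class F1 averaged, weighted by each
    class's share of the true labels."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for c in support:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        score += (support[c] / total) * f1  # weight by class frequency
    return score
```

With this metric, frequent classes dominate the final score, which is why it can differ noticeably from an unweighted (macro) average on imbalanced data.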
Related papers
- Embrace Divergence for Richer Insights: A Multi-document Summarization Benchmark and a Case Study on Summarizing Diverse Information from News Articles [136.84278943588652]
We propose a new task of summarizing diverse information encountered in multiple news articles encompassing the same event.
To facilitate this task, we outlined a data collection schema for identifying diverse information and curated a dataset named DiverseSumm.
The dataset includes 245 news stories, with each story comprising 10 news articles and paired with a human-validated reference.
arXiv Detail & Related papers (2023-09-17T20:28:17Z)
- Findings of Factify 2: Multimodal Fake News Detection [36.34201719103715]
We present the outcome of the Factify 2 shared task, which provides a multi-modal fact verification and satire news dataset.
The data calls for a comparison-based approach, pairing social media claims with supporting documents containing both text and images, divided into 5 classes based on multi-modal relations.
The highest F1 score averaged for all five classes was 81.82%.
arXiv Detail & Related papers (2023-07-19T22:14:49Z)
- Fraunhofer SIT at CheckThat! 2023: Mixing Single-Modal Classifiers to Estimate the Check-Worthiness of Multi-Modal Tweets [0.0]
This paper proposes a novel way of detecting the check-worthiness of multi-modal tweets.
It takes advantage of two classifiers, each trained on a single modality.
For image data, extracting the embedded text with an OCR analysis has been shown to perform best.
arXiv Detail & Related papers (2023-07-02T16:35:54Z)
- Overview of the Shared Task on Fake News Detection in Urdu at FIRE 2020 [62.6928395368204]
The task was posed as binary classification, in which the goal is to differentiate between real and fake news.
We provided a dataset of 900 annotated news articles for training and 400 news articles for testing.
42 teams from 6 different countries (India, China, Egypt, Germany, Pakistan, and the UK) registered for the task.
arXiv Detail & Related papers (2022-07-25T03:41:32Z)
- Overview of the Shared Task on Fake News Detection in Urdu at FIRE 2021 [55.41644538483948]
The goal of the shared task is to motivate the community to come up with efficient methods for solving this vital problem.
The training set contains 1300 annotated news articles (750 real, 550 fake), while the testing set contains 300 news articles (200 real, 100 fake).
The best performing system obtained an F1-macro score of 0.679, which is lower than the past year's best result of 0.907 F1-macro.
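The F1-macro metric quoted above is the unweighted mean of per-class F1 scores, so the minority class (here, fake news) counts as much as the majority class. A minimal sketch of the computation, not the shared task's official scorer:

```python
def macro_f1(y_true, y_pred):
    """F1-macro: unweighted mean of per-class F1 (each class counts equally,
    regardless of how many examples it has)."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Because fake news is the minority class in this dataset (100 of 300 test articles), poor recall on it drags F1-macro down more than it would a support-weighted average.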
arXiv Detail & Related papers (2022-07-11T18:58:36Z)
- Automated Evidence Collection for Fake News Detection [11.324403127916877]
We propose a novel approach that improves on current automatic fake news detection methods.
Our approach extracts supporting evidence from the web articles and then selects appropriate text to be treated as evidence sets.
Our experiments with both machine learning and deep learning-based methods provide an extensive evaluation of our approach.
arXiv Detail & Related papers (2021-12-13T09:38:41Z)
- VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text [60.97904439526213]
Video-Audio-Text Transformer (VATT) takes raw signals as inputs and extracts multimodal representations that are rich enough to benefit a variety of downstream tasks.
We train VATT end-to-end from scratch using multimodal contrastive losses and evaluate its performance by the downstream tasks of video action recognition, audio event classification, image classification, and text-to-video retrieval.
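The multimodal contrastive losses mentioned above are typically InfoNCE-style objectives: matched pairs across modalities (the diagonal of a similarity matrix) are scored against all mismatched pairs in the batch. A minimal single-direction sketch in plain Python, offered as an illustration rather than VATT's actual implementation; the similarity matrix `sim` is assumed to be precomputed from paired embeddings:

```python
import math

def info_nce_loss(sim, temperature=0.1):
    """InfoNCE over one batch: sim[i][j] is the similarity between
    modality-A embedding i and modality-B embedding j; matched pairs
    sit on the diagonal and are treated as the positives."""
    n = len(sim)
    loss = 0.0
    for i in range(n):
        logits = [sim[i][j] / temperature for j in range(n)]
        # log-sum-exp with max subtraction for numerical stability
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_z)  # cross-entropy with target = i
    return loss / n
```

Minimizing this loss pushes each row's diagonal similarity above the off-diagonal ones, i.e. it aligns matched video/audio/text pairs while separating mismatched ones; full implementations usually sum the loss over both matching directions.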
arXiv Detail & Related papers (2021-04-22T17:07:41Z)
- On Unifying Misinformation Detection [51.10764477798503]
The model is trained to handle four tasks: detecting news bias, clickbait, fake news, and verifying rumors.
We demonstrate that UnifiedM2's learned representation is helpful for few-shot learning of unseen misinformation tasks/datasets.
arXiv Detail & Related papers (2021-04-12T07:25:49Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.