Logically at the Factify 2022: Multimodal Fact Verification
- URL: http://arxiv.org/abs/2112.09253v1
- Date: Thu, 16 Dec 2021 23:34:07 GMT
- Title: Logically at the Factify 2022: Multimodal Fact Verification
- Authors: Jie Gao, Hella-Franziska Hoffmann, Stylianos Oikonomou, David
Kiskovski, Anil Bandhakavi
- Abstract summary: This paper describes our participant system for the multi-modal fact verification (Factify) challenge at AAAI 2022.
Two baseline approaches are proposed and explored, including an ensemble model and a multi-modal attention network.
Our best model ranked first on the leaderboard, obtaining a weighted average F-measure of 0.77 on both the validation and test sets.
- Score: 2.8914815569249823
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper describes our participant system for the multi-modal fact
verification (Factify) challenge at AAAI 2022. Despite recent advances in
text-based verification techniques and large pre-trained multimodal models
across vision and language, very limited work has been done on applying
multimodal techniques to automate the fact-checking process, particularly
considering the increasing prevalence of claims and fake news about images and
videos on social media. In our work, the challenge is treated as a multimodal
entailment task and framed as multi-class classification. Two baseline
approaches are proposed and explored: an ensemble model (combining two
uni-modal models) and a multi-modal attention network (modeling the interaction
between the image and text pairs from the claim and evidence document). We
conduct several experiments investigating and benchmarking different SoTA
pre-trained transformers and vision models in this work. Our best model ranked
first on the leaderboard, obtaining a weighted average F-measure of 0.77 on
both the validation and test sets. An exploratory analysis of the Factify
dataset is also carried out and uncovers salient patterns and issues (e.g.,
word overlapping, visual entailment correlation, source bias) that motivate our
hypotheses. Finally, we highlight challenges of the task and the multimodal
dataset for future research.
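
To make the second baseline concrete, below is a minimal sketch of a multi-modal attention fusion head in the spirit the abstract describes: text and image features from the claim and the evidence document are projected into a shared space, the claim attends over the evidence via cross-attention, and the pooled result feeds a multi-class classifier. The encoder dimensions, number of attention heads, and class count here are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of a multi-modal attention classifier for the Factify
# entailment task. Backbone dimensions, head count, and num_classes are
# illustrative assumptions, not the configuration reported in the paper.
import torch
import torch.nn as nn


class MultiModalAttentionClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, hidden_dim=512, num_classes=5):
        super().__init__()
        # Project both modalities into a shared hidden space before attention.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # Cross-attention: claim features attend over evidence-document features.
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim * 2, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, claim_text, claim_image, doc_text, doc_image):
        # claim_text / doc_text: (batch, seq_len, text_dim) from a text transformer
        # claim_image / doc_image: (batch, regions, image_dim) from a vision model
        claim = torch.cat([self.text_proj(claim_text), self.image_proj(claim_image)], dim=1)
        doc = torch.cat([self.text_proj(doc_text), self.image_proj(doc_image)], dim=1)
        # Claim tokens query the evidence document (text tokens + image regions).
        fused, _ = self.cross_attn(query=claim, key=doc, value=doc)
        # Pool the attended and raw claim features, then classify.
        pooled = torch.cat([fused.mean(dim=1), claim.mean(dim=1)], dim=1)
        return self.classifier(pooled)


if __name__ == "__main__":
    model = MultiModalAttentionClassifier()
    logits = model(
        torch.randn(2, 32, 768),   # claim text tokens
        torch.randn(2, 36, 2048),  # claim image regions
        torch.randn(2, 64, 768),   # evidence document text tokens
        torch.randn(2, 36, 2048),  # evidence document image regions
    )
    print(logits.shape)  # torch.Size([2, 5])
```

The ensemble baseline mentioned in the abstract would instead run separate uni-modal text and image models and combine their predictions, rather than fusing features with attention as sketched above.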