MM-Claims: A Dataset for Multimodal Claim Detection in Social Media
- URL: http://arxiv.org/abs/2205.01989v1
- Date: Wed, 4 May 2022 10:43:58 GMT
- Title: MM-Claims: A Dataset for Multimodal Claim Detection in Social Media
- Authors: Gullal S. Cheema, Sherzod Hakimov, Abdul Sittar, Eric Müller-Budack, Christian Otto, Ralph Ewerth
- Abstract summary: We introduce a novel dataset, MM-Claims, which consists of tweets and corresponding images over three topics: COVID-19, Climate Change, and (broadly) Technology.
We describe the dataset in detail, evaluate strong unimodal and multimodal baselines, and analyze the potential and drawbacks of current models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, the problem of misinformation on the web has become
widespread across languages, countries, and various social media platforms.
Although there has been much work on automated fake news detection, the role of
images and their variety is not well explored. In this paper, we investigate
the roles of image and text at an earlier stage of the fake news detection
pipeline, called claim detection. For this purpose, we introduce a novel
dataset, MM-Claims, which consists of tweets and corresponding images over
three topics: COVID-19, Climate Change, and (broadly) Technology. The dataset
contains roughly 86,000 tweets, of which 3,400 are manually labeled by
multiple annotators for the training and evaluation of multimodal models. We
describe the dataset in detail, evaluate strong unimodal and multimodal
baselines, and analyze the potential and drawbacks of current models.
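To make the task concrete: claim detection here is a classification problem over (tweet text, image) pairs. A minimal late-fusion baseline in this spirit, a linear probe over frozen CLIP embeddings, could look like the sketch below; the model choice, file names, and toy labels are illustrative assumptions, not the authors' actual baseline.

```python
# Sketch: late-fusion claim classifier over frozen CLIP features.
# The checkpoint, file names, and toy labels are assumptions for illustration.
import torch
from PIL import Image
from sklearn.linear_model import LogisticRegression
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(tweet_text: str, image_path: str) -> torch.Tensor:
    """Return one fused embedding for a (text, image) pair."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[tweet_text], images=[image],
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    # CLIP returns L2-normalized projections; concatenate them (late fusion).
    return torch.cat([out.text_embeds[0], out.image_embeds[0]]).cpu()

# Hypothetical labeled pairs: (tweet text, image path, claim label 0/1).
train = [("5G towers spread the virus", "tweet_001.jpg", 1),
         ("Lovely sunset over the bay", "tweet_002.jpg", 0)]
X = torch.stack([embed(t, p) for t, p, _ in train]).numpy()
y = [label for _, _, label in train]
clf = LogisticRegression(max_iter=1000).fit(X, y)
```

Late fusion by concatenation keeps the two modalities' contributions separable, which is convenient when analyzing, as the paper does, what each modality adds.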
Related papers
- 3AM: An Ambiguity-Aware Multi-Modal Machine Translation Dataset (arXiv, 2024-04-29)
We introduce 3AM, an ambiguity-aware MMT dataset comprising 26,000 parallel sentence pairs in English and Chinese.
Our dataset is specifically designed to include more ambiguity and a greater variety of both captions and images than other MMT datasets.
Experimental results show that MMT models trained on our dataset exhibit a greater ability to exploit visual information than those trained on other MMT datasets.
- M2SA: Multimodal and Multilingual Model for Sentiment Analysis of Tweets (arXiv, 2024-04-02)
This paper transforms an existing textual Twitter sentiment dataset into a multimodal format through a straightforward curation process.
Our work opens up new avenues for sentiment-related research.
- Multi-modal Stance Detection: New Datasets and Model (arXiv, 2024-02-22)
We study multi-modal stance detection for tweets consisting of texts and images.
We propose a simple yet effective Targeted Multi-modal Prompt Tuning framework (TMPT), which achieves state-of-the-art performance in multi-modal stance detection.
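The entry above names prompt tuning as the key ingredient. As a rough, text-only illustration of the general technique (not TMPT itself), one can prepend trainable soft prompt vectors to a frozen encoder's input embeddings and train only the prompts and a small head; the model name, prompt length, and three stance classes below are assumptions.

```python
# Sketch of generic soft prompt tuning on the text side only (not TMPT itself).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SoftPromptClassifier(nn.Module):
    def __init__(self, name="bert-base-uncased", n_prompt=8, n_classes=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        for p in self.encoder.parameters():
            p.requires_grad = False  # freeze the backbone
        dim = self.encoder.config.hidden_size
        self.prompt = nn.Parameter(torch.randn(n_prompt, dim) * 0.02)
        self.head = nn.Linear(dim, n_classes)  # e.g., favor / against / neutral

    def forward(self, input_ids, attention_mask):
        # Word embeddings only; the encoder adds positional embeddings itself.
        tok = self.encoder.embeddings.word_embeddings(input_ids)
        prompt = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        pad = torch.ones(tok.size(0), prompt.size(1),
                         dtype=attention_mask.dtype, device=attention_mask.device)
        out = self.encoder(inputs_embeds=torch.cat([prompt, tok], dim=1),
                           attention_mask=torch.cat([pad, attention_mask], dim=1))
        return self.head(out.last_hidden_state[:, 0])  # read out at first prompt slot

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["I fully support this policy."], return_tensors="pt")
logits = SoftPromptClassifier()(batch["input_ids"], batch["attention_mask"])
```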
- Detecting and Grounding Multi-Modal Media Manipulation and Beyond (arXiv, 2023-09-25)
We highlight a new research problem for multi-modal fake media, namely Detecting and Grounding Multi-Modal Media Manipulation (DGM4).
DGM4 aims not only to detect whether multi-modal media is authentic, but also to ground the manipulated content.
We propose a novel HierArchical Multi-modal Manipulation rEasoning tRansformer (HAMMER) to fully capture the fine-grained interaction between different modalities.
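As a hedged illustration of the kind of fine-grained cross-modal interaction such a reasoning transformer builds on (this is not the HAMMER architecture), a single cross-attention block where text tokens attend over image patches might look like:

```python
# One layer of cross-modal fusion; dimensions are arbitrary assumptions.
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                nn.Linear(4 * dim, dim))

    def forward(self, text_tokens, image_patches):
        # queries = text, keys/values = image: each word looks at image regions
        attended, _ = self.attn(text_tokens, image_patches, image_patches)
        x = self.norm1(text_tokens + attended)
        return self.norm2(x + self.ff(x))

fused = CrossModalBlock()(torch.randn(2, 16, 256),   # 16 text tokens
                          torch.randn(2, 49, 256))   # 7x7 grid of image patches
```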
- Image Matters: A New Dataset and Empirical Study for Multimodal Hyperbole Detection (arXiv, 2023-07-01)
We create a multimodal hyperbole detection dataset from Weibo (a Chinese social media platform).
We treat the text and image of a Weibo post as two modalities and explore the roles of text and image in hyperbole detection.
Different pre-trained multimodal encoders are also evaluated on this downstream task to assess their performance.
- Beyond Triplet: Leveraging the Most Data for Multimodal Machine Translation (arXiv, 2022-12-20)
Multimodal machine translation aims to improve translation quality by incorporating information from other modalities, such as vision.
Previous MMT systems mainly focus on better access and use of visual information and tend to validate their methods on image-related datasets.
This paper establishes new methods and new datasets for MMT.
- Multimodal Fake News Detection with Adaptive Unimodal Representation Aggregation (arXiv, 2022-06-12)
AURA is a multimodal fake news detection network with adaptive unimodal representation aggregation.
We perform coarse-level fake news detection and cross-modal consistency learning based on the unimodal and multimodal representations.
Experiments on Weibo and GossipCop show that AURA outperforms several state-of-the-art fake news detection (FND) schemes.
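Based only on the description above, adaptive aggregation can be pictured as per-sample gating over the unimodal representations; the following sketch is an assumption-laden reading of that idea, not AURA's published code.

```python
# Per-sample gate weights the text and image vectors before a shared classifier.
import torch
import torch.nn as nn

class AdaptiveAggregation(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2), nn.Softmax(dim=-1))
        self.classifier = nn.Linear(dim, 2)  # fake vs. real

    def forward(self, text_vec, image_vec):
        # Gate depends on both modalities, so unreliable ones get down-weighted.
        w = self.gate(torch.cat([text_vec, image_vec], dim=-1))  # (B, 2)
        fused = w[:, :1] * text_vec + w[:, 1:] * image_vec       # convex combination
        return self.classifier(fused)

logits = AdaptiveAggregation()(torch.randn(4, 256), torch.randn(4, 256))
```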
- Logically at the Factify 2022: Multimodal Fact Verification (arXiv, 2021-12-16)
This paper describes our participating system for the multi-modal fact verification (Factify) challenge at AAAI 2022.
Two baseline approaches are proposed and explored, including an ensemble model and a multi-modal attention network.
Our best model is ranked first on the leaderboard, obtaining a weighted average F-measure of 0.77 on both the validation and test sets.
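A bare-bones version of the ensemble side of such a system is probability averaging across member models; the weights and five-class label space below are placeholders, not the Logically team's actual configuration.

```python
# Weighted probability-averaging ensemble over stand-in member outputs.
import numpy as np

def ensemble_predict(prob_list, weights=None):
    """Weighted average of per-model class probabilities, then argmax."""
    probs = np.stack(prob_list)                       # (n_models, n_samples, n_classes)
    w = np.ones(len(prob_list)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return np.tensordot(w, probs, axes=1).argmax(-1)  # fused class predictions

# e.g., fuse a text-only and a multimodal classifier (random stand-in outputs)
p_text = np.random.dirichlet(np.ones(5), size=10)
p_multi = np.random.dirichlet(np.ones(5), size=10)
preds = ensemble_predict([p_text, p_multi], weights=[0.4, 0.6])
```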
- Exploiting BERT For Multimodal Target Sentiment Classification Through Input Space Translation (arXiv, 2021-08-03)
We introduce a two-stream model that translates images in input space using an object-aware transformer.
We then leverage the translation to construct an auxiliary sentence that provides multimodal information to a language model.
We achieve state-of-the-art performance on two multimodal Twitter datasets.
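A toy rendering of this input-space translation idea: run an off-the-shelf object detector, verbalize the top detections into an auxiliary sentence, and append it to the tweet for a BERT-style classifier. The detector choice, template, and file name are assumptions, not the paper's pipeline.

```python
# DETR here is a stand-in detector; any model with label/score output works.
from transformers import pipeline

detector = pipeline("object-detection", model="facebook/detr-resnet-50")

def auxiliary_sentence(image_path: str, top_k: int = 3) -> str:
    """Turn the top-k detected object labels into one descriptive sentence."""
    objects = sorted(detector(image_path), key=lambda o: -o["score"])[:top_k]
    labels = [o["label"] for o in objects]
    return ("The image shows " + ", ".join(labels) + ".") if labels else ""

tweet = "Can't believe they let him on stage again"
model_input = tweet + " [SEP] " + auxiliary_sentence("tweet_photo.jpg")
```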
- On the Role of Images for Analyzing Claims in Social Media (arXiv, 2021-03-17)
We present an empirical study on visual, textual, and multimodal models for the tasks of claim, claim check-worthiness, and conspiracy detection.
Recent work suggests that images are more influential than text and often appear alongside fake text.
- Multimodal Analytics for Real-world News using Measures of Cross-modal Entity Consistency (arXiv, 2020-03-23)
Multimodal information, e.g., enriching text with photos, is typically used to convey the news more effectively or to attract attention.
We introduce a novel task of cross-modal consistency verification in real-world news and present a multimodal approach to quantify the entity coherence between image and text.
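One simplified way to quantify such entity coherence (not the paper's actual measure) is to check which named entities in the text can be corroborated from the image side, e.g., by a face or landmark recognizer, and score the overlap:

```python
# Simplified entity-overlap score. Requires `python -m spacy download
# en_core_web_sm`; the example image tags are hypothetical recognizer outputs.
import spacy

nlp = spacy.load("en_core_web_sm")

def entity_consistency(text: str, image_tags: set) -> float:
    """Fraction of the text's person/place/org entities found among image tags."""
    ents = {e.text.lower() for e in nlp(text).ents
            if e.label_ in {"PERSON", "GPE", "ORG", "LOC"}}
    if not ents:
        return 1.0  # nothing to verify against the image
    return len(ents & {t.lower() for t in image_tags}) / len(ents)

score = entity_consistency("Angela Merkel met Emmanuel Macron in Berlin.",
                           {"Angela Merkel", "Berlin"})  # -> ~0.67
```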
This list is automatically generated from the titles and abstracts of the papers indexed on this site. The site does not guarantee the accuracy of this information and is not responsible for any consequences arising from its use.