On Unifying Misinformation Detection
- URL: http://arxiv.org/abs/2104.05243v1
- Date: Mon, 12 Apr 2021 07:25:49 GMT
- Title: On Unifying Misinformation Detection
- Authors: Nayeon Lee, Belinda Z. Li, Sinong Wang, Pascale Fung, Hao Ma, Wen-tau Yih, Madian Khabsa
- Abstract summary: The model is trained to handle four tasks: detecting news bias, clickbait, fake news, and verifying rumors.
We demonstrate that UnifiedM2's learned representation is helpful for few-shot learning of unseen misinformation tasks/datasets.
- Score: 51.10764477798503
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce UnifiedM2, a general-purpose misinformation model
that jointly models multiple domains of misinformation with a single, unified
setup. The model is trained to handle four tasks: detecting news bias,
clickbait, fake news, and verifying rumors. By grouping these tasks together,
UnifiedM2 learns a richer representation of misinformation, which leads to
state-of-the-art or comparable performance across all tasks. Furthermore, we
demonstrate that UnifiedM2's learned representation is helpful for few-shot
learning of unseen misinformation tasks/datasets and for the model's generalizability
to unseen events.
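As a rough illustration of this kind of single-model, multi-task setup, the sketch below pairs a shared text encoder with one lightweight classification head per task. The class name, label counts, and encoder checkpoint are assumptions for illustration, not details taken from the paper.

```python
import torch.nn as nn
from transformers import AutoModel

# Illustrative sketch of a unified multi-task misinformation model:
# one shared encoder, one classification head per task. Names, label
# counts, and the encoder checkpoint are assumptions, not the paper's code.
class UnifiedMisinfoModel(nn.Module):
    def __init__(self, encoder_name="roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Four heads: news bias, clickbait, fake news, rumor verification.
        self.heads = nn.ModuleDict({
            "news_bias": nn.Linear(hidden, 2),
            "clickbait": nn.Linear(hidden, 2),
            "fake_news": nn.Linear(hidden, 2),
            "rumor": nn.Linear(hidden, 3),
        })

    def forward(self, input_ids, attention_mask, task):
        # Shared representation; the first-token state feeds the task head.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.heads[task](out.last_hidden_state[:, 0])
```

Training such a model alternates batches from the four datasets, routing each through its own head, so every task shapes the shared representation; that shared representation is what the few-shot transfer claim rests on.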
Related papers
- DEGAP: Dual Event-Guided Adaptive Prefixes for Templated-Based Event Argument Extraction with Slot Querying [32.115904077731386]
Recent advancements in event argument extraction (EAE) involve incorporating useful auxiliary information into models during training and inference.
These methods face two challenges: (1) the retrieval results may be irrelevant and (2) templates are developed independently for each event without considering their possible relationship.
We propose DEGAP, which addresses these challenges through two simple yet effective components: dual prefixes, i.e., learnable prompt vectors, and an event-guided adaptive gating mechanism.
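A hedged sketch of what learnable dual prefixes with an adaptive gate might look like; the module, shapes, and gating form here are assumptions, not DEGAP's actual architecture.

```python
import torch
import torch.nn as nn

# Illustrative sketch of dual learnable prefixes mixed by an adaptive gate.
# Module name, tensor shapes, and the sigmoid gate are assumptions.
class DualPrefixGate(nn.Module):
    def __init__(self, hidden: int, prefix_len: int):
        super().__init__()
        # Two banks of learnable prompt vectors ("dual prefixes").
        self.event_prefix = nn.Parameter(torch.randn(prefix_len, hidden))
        self.instance_prefix = nn.Parameter(torch.randn(prefix_len, hidden))
        self.gate = nn.Linear(hidden, 1)

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # context: (batch, hidden) pooled encoder state driving the gate.
        g = torch.sigmoid(self.gate(context)).unsqueeze(1)   # (batch, 1, 1)
        # Adaptively mix the two prefix banks per example.
        return g * self.event_prefix + (1 - g) * self.instance_prefix
```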
arXiv Detail & Related papers (2024-05-22T03:56:55Z)
- GenEARL: A Training-Free Generative Framework for Multimodal Event Argument Role Labeling [89.07386210297373]
GenEARL is a training-free generative framework that harnesses the power of modern generative models to understand event task descriptions.
We show that GenEARL outperforms the contrastive pretraining (CLIP) baseline by 9.4% and 14.2% accuracy for zero-shot EARL on the M2E2 and SwiG datasets.
arXiv Detail & Related papers (2024-04-07T00:28:13Z)
- Factorized Contrastive Learning: Going Beyond Multi-view Redundancy [116.25342513407173]
This paper proposes FactorCL, a new multimodal representation learning method to go beyond multi-view redundancy.
On large-scale real-world datasets, FactorCL captures both shared and unique information and achieves state-of-the-art results.
arXiv Detail & Related papers (2023-06-08T15:17:04Z)
- Musketeer: Joint Training for Multi-task Vision Language Model with Task Explanation Prompts [75.75548749888029]
We present a vision-language model whose parameters are jointly trained on all tasks and fully shared among multiple heterogeneous tasks.
With a single model, Musketeer achieves results comparable to or better than strong baselines trained on single tasks, almost uniformly across multiple tasks.
arXiv Detail & Related papers (2023-05-11T17:57:49Z)
- Improving Cross-task Generalization of Unified Table-to-text Models with Compositional Task Configurations [63.04466647849211]
Existing methods typically encode task information by prepending a simple dataset name to the encoder input.
We propose compositional task configurations, a set of prompts prepended to the encoder input to improve cross-task generalization.
We show this not only allows the model to better learn shared knowledge across different tasks at training, but also allows us to control the model by composing new configurations.
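A minimal sketch of the idea of composing a structured task configuration as an input prefix; the [TASK]/[INPUT]/[OUTPUT] field names and the tokenizer choice are hypothetical, not the paper's exact configuration format.

```python
from transformers import AutoTokenizer

# Illustrative sketch of compositional task configurations: a structured,
# composable prompt replaces the bare dataset-name prefix.
def build_config_prompt(task_type: str, input_schema: str, output_schema: str) -> str:
    return f"[TASK] {task_type} [INPUT] {input_schema} [OUTPUT] {output_schema}"

tokenizer = AutoTokenizer.from_pretrained("t5-base")
config = build_config_prompt("table question answering", "table + question", "short answer")
linearized_table = "col: city | population row 1: Oslo | 700000"
# The composed configuration is simply prepended to the encoder input,
# so new task combinations can be expressed without training a new prefix.
enc = tokenizer(config + " " + linearized_table, return_tensors="pt")
```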
arXiv Detail & Related papers (2022-12-17T02:20:14Z)
- OS-MSL: One Stage Multimodal Sequential Link Framework for Scene Segmentation and Classification [11.707994658605546]
We propose a general One Stage Multimodal Sequential Link Framework (OS-MSL) to distinguish and leverage the two-fold semantics.
We tailor a specific module called DiffCorrNet to explicitly extract the information of differences and correlations among shots.
arXiv Detail & Related papers (2022-07-04T07:59:34Z)
- X-Learner: Learning Cross Sources and Tasks for Universal Visual Representation [71.51719469058666]
We propose a representation learning framework called X-Learner.
X-Learner learns the universal feature of multiple vision tasks supervised by various sources.
X-Learner achieves strong performance on different tasks without extra annotations, modalities, or computational cost.
arXiv Detail & Related papers (2022-03-16T17:23:26Z)
- UofA-Truth at Factify 2022: Transformer And Transfer Learning Based Multi-Modal Fact-Checking [0.0]
We attempted to tackle the problem of automated misinformation/disinformation detection in multi-modal news sources.
Our model produced an F1-weighted score of 74.807%, which was the fourth best out of all the submissions.
arXiv Detail & Related papers (2022-01-28T18:13:03Z)
- MiRANews: Dataset and Benchmarks for Multi-Resource-Assisted News Summarization [19.062996443574047]
We present a new dataset MiRANews and benchmark existing summarization models.
We show via data analysis that it is not only the models that are to blame: multi-resource-assisted summarization reduces hallucinations by 55% compared to single-document summarization models trained on the main article only.
arXiv Detail & Related papers (2021-09-22T10:58:40Z)
- A Unified Framework for Generic, Query-Focused, Privacy Preserving and Update Summarization using Submodular Information Measures [15.520331683061633]
We study submodular information measures as a rich framework for generic, query-focused, privacy sensitive, and update summarization tasks.
We first show that several previous query-focused and update summarization techniques have, unknowingly, used various instantiations of the aforesaid submodular information measures.
We empirically verify our findings on both a synthetic dataset and an existing real-world image collection dataset.
arXiv Detail & Related papers (2020-10-12T12:03:03Z)
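For the submodular framework in the entry above, the sketch below shows greedy maximization of a facility-location function, one classic submodular objective for extractive summarization. The paper's submodular information measures generalize such functions; the function name and the precomputed `sim` matrix are illustrative assumptions.

```python
import numpy as np

# Greedy selection under a facility-location objective:
# f(S) = sum_i max_{j in S} sim[i, j], a monotone submodular function.
# sim[i, j] is a precomputed similarity between sentences i and j.
def greedy_facility_location(sim: np.ndarray, budget: int) -> list[int]:
    n = sim.shape[0]
    coverage = np.zeros(n)  # coverage[i] = max similarity of i to the summary so far
    selected: list[int] = []
    for _ in range(budget):
        # Marginal gain of each candidate j: total improvement in coverage.
        gains = np.maximum(sim, coverage[:, None]).sum(axis=0) - coverage.sum()
        gains[selected] = -np.inf  # never re-pick a chosen sentence
        best = int(np.argmax(gains))
        selected.append(best)
        coverage = np.maximum(coverage, sim[:, best])
    return selected
```

Greedy selection on a monotone submodular objective carries the classic (1 - 1/e) approximation guarantee, which is part of what makes these measures a convenient unifying framework for the summarization variants listed above.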