Integrating Pattern- and Fact-based Fake News Detection via Model
Preference Learning
- URL: http://arxiv.org/abs/2109.11333v1
- Date: Thu, 23 Sep 2021 12:28:55 GMT
- Title: Integrating Pattern- and Fact-based Fake News Detection via Model
Preference Learning
- Authors: Qiang Sheng, Xueyao Zhang, Juan Cao, Lei Zhong
- Abstract summary: We study the problem of integrating pattern- and fact-based models into one framework.
We build a Preference-aware Fake News Detection Framework (Pref-FEND), which learns the respective preferences of pattern- and fact-based models for joint detection.
- Score: 6.92027612631023
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: To defend against fake news, researchers have developed various methods based
on texts. These methods can be grouped as 1) pattern-based methods, which focus
on shared patterns among fake news posts rather than the claim itself; and 2)
fact-based methods, which retrieve from external sources to verify the claim's
veracity without considering patterns. The two groups of methods, which have
different preferences for textual clues, play complementary roles in
detecting fake news. However, few works consider their integration. In this
paper, we study the problem of integrating pattern- and fact-based models into
one framework via modeling their preference differences, i.e., making the
pattern- and fact-based models focus on respective preferred parts in a post
and mitigate interference from non-preferred parts as much as possible. To this end, we
build a Preference-aware Fake News Detection Framework (Pref-FEND), which
learns the respective preferences of pattern- and fact-based models for joint
detection. We first design a heterogeneous dynamic graph convolutional network
to generate the respective preference maps, and then use these maps to guide
the joint learning of pattern- and fact-based models for final prediction.
Experiments on two real-world datasets show that Pref-FEND effectively captures
model preferences and improves the performance of models based on patterns,
facts, or both.
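The abstract describes the architecture only at a high level; below is a minimal sketch of the joint-detection idea, with hypothetical module names and dimensions. A simple per-token scorer stands in for the paper's heterogeneous dynamic GCN, and the two base models are reduced to GRUs for illustration.
```python
import torch
import torch.nn as nn

class PrefFENDSketch(nn.Module):
    """Minimal sketch of Pref-FEND's joint detection idea (names/dims assumed).

    A per-token preference scorer (standing in for the paper's heterogeneous
    dynamic GCN) assigns each token a pattern-preference and a fact-preference
    weight; each base model then reads the post through its own map.
    """

    def __init__(self, vocab_size=30000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Stand-in for the heterogeneous dynamic GCN: two per-token scores,
        # softmax-normalized so the maps compete over each token.
        self.pref_scorer = nn.Linear(dim, 2)
        self.pattern_model = nn.GRU(dim, dim, batch_first=True)
        self.fact_model = nn.GRU(dim, dim, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)  # real vs. fake

    def forward(self, token_ids):
        h = self.embed(token_ids)                    # (B, T, D)
        maps = self.pref_scorer(h).softmax(dim=-1)   # (B, T, 2)
        pattern_map, fact_map = maps[..., :1], maps[..., 1:]
        # Each model sees tokens scaled by its own preference map,
        # mitigating interference from non-preferred parts.
        _, hp = self.pattern_model(h * pattern_map)
        _, hf = self.fact_model(h * fact_map)
        joint = torch.cat([hp[-1], hf[-1]], dim=-1)
        return self.classifier(joint)

model = PrefFENDSketch()
logits = model(torch.randint(0, 30000, (4, 32)))  # batch of 4 posts
```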
Related papers
- Human-Object Interaction Detection Collaborated with Large Relation-driven Diffusion Models [65.82564074712836]
We introduce DIFfusionHOI, a new HOI detector shedding light on text-to-image diffusion models.
We first devise an inversion-based strategy to learn the expression of relation patterns between humans and objects in embedding space.
These learned relation embeddings then serve as textual prompts that steer diffusion models to generate images depicting specific interactions.
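The inversion strategy itself is not detailed in this summary; the sketch below shows, as an assumption, how a learned relation embedding could be injected as a pseudo-token prompt using Hugging Face transformers. The token name `<rel-riding>` and the random vector are placeholders for an actually learned embedding.
```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

# Register a placeholder token for a learned relation (e.g., "riding").
tokenizer.add_tokens(["<rel-riding>"])
text_encoder.resize_token_embeddings(len(tokenizer))

# Overwrite the new token's embedding with the (hypothetically) learned vector.
relation_embedding = torch.randn(text_encoder.config.hidden_size)  # stand-in
with torch.no_grad():
    token_id = tokenizer.convert_tokens_to_ids("<rel-riding>")
    text_encoder.get_input_embeddings().weight[token_id] = relation_embedding

# The prompt can now steer a diffusion model toward the specific interaction.
prompt = "a photo of a person <rel-riding> a horse"
inputs = tokenizer(prompt, return_tensors="pt")
prompt_embeds = text_encoder(**inputs).last_hidden_state
```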
arXiv Detail & Related papers (2024-10-26T12:00:33Z) - Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
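A minimal sketch of the reinforcement step as summarized: fine-tuning the classifier on the original data concatenated with the counterfactual images. Generating the counterfactuals is assumed to have happened offline, and all names here are illustrative.
```python
import torch
from torch.utils.data import ConcatDataset, DataLoader

def reinforce(model, original_ds, counterfactual_ds, epochs=3, lr=1e-4):
    """Fine-tune on original data plus generated counterfactual images."""
    loader = DataLoader(ConcatDataset([original_ds, counterfactual_ds]),
                        batch_size=32, shuffle=True)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model
```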
arXiv Detail & Related papers (2024-06-19T08:07:14Z) - Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We make empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
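The summary does not spell out the distillation mechanism; one plausible, parameter-free reading is zero-shot pseudo-labeling of unlabeled target images with CLIP, sketched below. The checkpoint and prompt template are assumptions.
```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_pseudo_labels(images, class_names):
    """Zero-shot pseudo-labels for unlabeled target-domain images."""
    prompts = [f"a photo of a {c}" for c in class_names]
    inputs = processor(text=prompts, images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # (N_images, N_classes)
    return logits.argmax(dim=-1)
```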
arXiv Detail & Related papers (2023-05-18T16:28:29Z) - Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
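Projecting out biased directions is standard linear algebra; the sketch below applies the orthogonal projection P = I - V(V^T V)^{-1} V^T to text embeddings, with placeholder bias directions. The paper's calibration of the projection matrix is omitted here.
```python
import torch

def debias_projection(bias_dirs):
    """P = I - V (V^T V)^{-1} V^T: projects out the span of bias_dirs.

    bias_dirs: (k, d) matrix whose rows are biased directions in the
    text-embedding space (e.g., differences of prompt embeddings).
    """
    V = bias_dirs.T                                        # (d, k)
    return torch.eye(V.shape[0]) - V @ torch.linalg.inv(V.T @ V) @ V.T

# Usage: apply the projection to classifier text embeddings.
d, k = 512, 2
bias_dirs = torch.randn(k, d)        # placeholder bias directions
P = debias_projection(bias_dirs)
text_emb = torch.randn(10, d)
debiased = text_emb @ P.T            # (10, d), bias components removed
```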
arXiv Detail & Related papers (2023-01-31T20:09:33Z) - Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
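As an illustration of parameter-space merging, the sketch below uniformly averages the state dicts of same-architecture fine-tuned models; the paper's actual merging scheme is more sophisticated and is not reproduced here.
```python
import torch

def average_state_dicts(state_dicts):
    """Uniformly average parameters of same-architecture fine-tuned models.

    This is the simplest parameter-space merge; the paper's weighted
    merging method is not reproduced here.
    """
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack(
            [sd[key].float() for sd in state_dicts]).mean(dim=0)
    return merged

# Usage (hypothetical checkpoints fine-tuned on different datasets):
# merged = average_state_dicts([torch.load(p) for p in ["a.pt", "b.pt"]])
# model.load_state_dict(merged)
```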
arXiv Detail & Related papers (2022-12-19T20:46:43Z) - Artificial Interrogation for Attributing Language Models [0.0]
The challenge provides twelve open-sourced base versions of popular language models and twelve fine-tuned language models for text generation.
The goal of the contest is to identify which fine-tuned models originated from which base model.
We have employed four distinct approaches for measuring the resemblance between the responses generated from the models of both sets.
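The four approaches are not named in this summary; as one hypothetical resemblance measure, the sketch below compares paired responses to the same prompts via TF-IDF cosine similarity.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def response_similarity(base_responses, tuned_responses):
    """Mean TF-IDF cosine similarity between paired model responses."""
    vec = TfidfVectorizer().fit(base_responses + tuned_responses)
    A = vec.transform(base_responses)
    B = vec.transform(tuned_responses)
    return cosine_similarity(A, B).diagonal().mean()

# Attribute each fine-tuned model to the base model whose responses
# (to the same prompts) it resembles most.
```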
arXiv Detail & Related papers (2022-11-20T05:46:29Z) - Distributional Depth-Based Estimation of Object Articulation Models [21.046351215949525]
We propose a method that efficiently learns distributions over articulation model parameters directly from depth images.
Our core contributions include a novel representation for distributions over rigid body transformations.
We introduce a novel deep learning based approach, DUST-net, that performs category-independent articulation model estimation.
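The paper's representation over full rigid-body transformations is richer than what fits here; purely as an illustration of the distributional idea, the sketch below maps a depth image to a von Mises distribution over a single revolute joint angle.
```python
import torch
import torch.nn as nn

class ArticulationHead(nn.Module):
    """Illustrative head: depth image -> von Mises distribution over a
    revolute joint angle (only a toy stand-in for DUST-net's distributions
    over rigid-body transformations)."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.loc = nn.Linear(32, 1)        # mean angle
        self.log_conc = nn.Linear(32, 1)   # log concentration

    def forward(self, depth):
        h = self.backbone(depth)
        return torch.distributions.VonMises(
            self.loc(h), self.log_conc(h).exp())

head = ArticulationHead()
dist = head(torch.randn(2, 1, 64, 64))     # batch of depth images
angle_samples = dist.sample()
```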
arXiv Detail & Related papers (2021-08-12T17:44:51Z) - A Multi-Level Attention Model for Evidence-Based Fact Checking [58.95413968110558]
We present a simple model that can be trained on sequence structures.
Results on a large-scale dataset for Fact Extraction and VERification show that our model outperforms the graph-based approaches.
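As a hedged illustration of attention-based evidence aggregation (not the paper's exact multi-level design), the sketch below scores evidence sentences against the claim and pools them for a three-way verdict.
```python
import torch
import torch.nn as nn

class EvidenceAttention(nn.Module):
    """Sketch: claim-conditioned attention over evidence sentence
    embeddings, pooled for a 3-way verdict (supports/refutes/NEI)."""

    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)
        self.classify = nn.Linear(2 * dim, 3)

    def forward(self, claim, evidence):
        # claim: (B, D); evidence: (B, N, D) sentence embeddings
        c = claim.unsqueeze(1).expand(-1, evidence.size(1), -1)
        attn = self.score(torch.cat([c, evidence], -1)).softmax(dim=1)
        pooled = (attn * evidence).sum(dim=1)          # (B, D)
        return self.classify(torch.cat([claim, pooled], -1))

model = EvidenceAttention()
logits = model(torch.randn(4, 256), torch.randn(4, 5, 256))
```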
arXiv Detail & Related papers (2021-06-02T05:40:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.