Leveraging Declarative Knowledge in Text and First-Order Logic for
Fine-Grained Propaganda Detection
- URL: http://arxiv.org/abs/2004.14201v2
- Date: Mon, 5 Oct 2020 13:08:40 GMT
- Title: Leveraging Declarative Knowledge in Text and First-Order Logic for
Fine-Grained Propaganda Detection
- Authors: Ruize Wang, Duyu Tang, Nan Duan, Wanjun Zhong, Zhongyu Wei, Xuanjing
Huang, Daxin Jiang, Ming Zhou
- Abstract summary: We study the detection of propagandistic text fragments in news articles.
We introduce an approach to inject declarative knowledge of fine-grained propaganda techniques.
- Score: 139.3415751957195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the detection of propagandistic text fragments in news articles.
Instead of merely learning from input-output datapoints in training data, we
introduce an approach to inject declarative knowledge of fine-grained
propaganda techniques. Specifically, we leverage the declarative knowledge
expressed in both first-order logic and natural language. The former refers to
the logical consistency between coarse- and fine-grained predictions, which is
used to regularize the training process with propositional Boolean expressions.
The latter refers to the literal definition of each propaganda technique, which
is utilized to get class representations for regularizing the model parameters.
We conduct experiments on the Propaganda Techniques Corpus, a large manually
annotated dataset for fine-grained propaganda detection. Experiments show that
our method achieves superior performance, demonstrating that leveraging
declarative knowledge can help the model to make more accurate predictions.
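The following PyTorch-style sketch illustrates how the two kinds of declarative knowledge could enter the training objective; the noisy-OR relaxation of the logic rule, the cosine pull toward definition encodings, and all names and weights are illustrative assumptions, not the paper's exact formulation.
```python
import torch
import torch.nn.functional as F

def logic_consistency_loss(coarse_prob, fine_probs):
    """Soft relaxation of the rule "(some fine-grained technique fires)
    -> (the coarse propaganda label is true)".
    coarse_prob: (batch,); fine_probs: (batch, num_techniques)."""
    # Noisy-OR truth value of "at least one technique is present" (assumption).
    any_fine = 1.0 - torch.prod(1.0 - fine_probs, dim=-1)
    # Penalize cases where the antecedent is more probable than the consequent.
    return F.relu(any_fine - coarse_prob).mean()

def definition_regularizer(class_weights, definition_vecs):
    """Pull each technique's classifier weights toward the encoding of its
    natural-language definition (mean cosine distance); both (C, hidden)."""
    w = F.normalize(class_weights, dim=-1)
    d = F.normalize(definition_vecs, dim=-1)
    return (1.0 - (w * d).sum(dim=-1)).mean()

# Hypothetical combined objective:
# loss = task_loss + lam1 * logic_consistency_loss(p_coarse, p_fine) \
#                  + lam2 * definition_regularizer(W, D)
```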
Related papers
- Textual Entailment for Effective Triple Validation in Object Prediction [4.94309218465563]
We propose to use textual entailment to validate facts extracted from language models through cloze statements.
Our results show that triple validation based on textual entailment improves language model predictions in different training regimes.
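A minimal sketch of entailment-based triple validation, assuming an off-the-shelf NLI model and a hypothetical relation template (the paper's actual verbalizer and model may differ):
```python
from transformers import pipeline

# An off-the-shelf NLI model stands in for whichever entailment model is used.
nli = pipeline("text-classification", model="roberta-large-mnli")

def validate_triple(evidence, subj, obj, template="{subj} is located in {obj}."):
    """Keep a candidate (subject, relation, object) triple only if the
    evidence text entails its verbalization."""
    hypothesis = template.format(subj=subj, obj=obj)
    result = nli([{"text": evidence, "text_pair": hypothesis}])[0]
    return result["label"] == "ENTAILMENT"

# validate_triple("The Louvre is a museum in Paris.", "The Louvre", "Paris")
```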
arXiv Detail & Related papers (2024-01-29T16:50:56Z)
- Discourse Structures Guided Fine-grained Propaganda Identification [21.680194418287197]
We aim to identify propaganda in political news at two fine-grained levels: sentence-level and token-level.
We propose to incorporate both local and global discourse structures for propaganda discovery.
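A minimal sketch of how discourse structure could inform sentence-level classification; the relation-gated message passing below is an illustrative assumption, not the paper's architecture:
```python
import torch
import torch.nn as nn

class DiscourseAwareClassifier(nn.Module):
    """Fuse each sentence's encoding with a discourse-relation-gated
    aggregate of its neighbors before classifying it as propaganda or not."""

    def __init__(self, hidden, num_relations):
        super().__init__()
        self.rel_gate = nn.Embedding(num_relations, hidden)
        self.out = nn.Linear(2 * hidden, 2)

    def forward(self, sent_vecs, edge_index, rel_ids):
        # sent_vecs: (S, H); edge_index: (2, E) sentence-pair indices;
        # rel_ids: (E,) discourse-relation id for each edge.
        src, dst = edge_index
        msgs = sent_vecs[src] * torch.sigmoid(self.rel_gate(rel_ids))
        ctx = torch.zeros_like(sent_vecs).index_add_(0, dst, msgs)
        return self.out(torch.cat([sent_vecs, ctx], dim=-1))
```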
arXiv Detail & Related papers (2023-10-28T00:18:19Z)
- Rationalizing Predictions by Adversarial Information Calibration [65.19407304154177]
We train two models jointly: one is a typical neural model that solves the task at hand in an accurate but black-box manner, and the other is a selector-predictor model that additionally produces a rationale for its prediction.
We use an adversarial technique to calibrate the information extracted by the two models such that the difference between them is an indicator of the missed or over-selected features.
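A minimal sketch of the adversarial calibration idea, assuming a GAN-style discriminator over the two models' features (names and formulation are illustrative):
```python
import torch
import torch.nn.functional as F

def calibration_losses(feat_full, feat_rat, discriminator):
    """feat_full: features of the accurate black-box model; feat_rat:
    features of the selector-predictor computed from the rationale only.
    The discriminator learns to tell them apart; the selector-predictor is
    trained to fool it, so mismatches flag missed or over-selected features."""
    d_real = discriminator(feat_full.detach())
    d_fake = discriminator(feat_rat.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    g_loss = F.binary_cross_entropy_with_logits(
        discriminator(feat_rat), torch.ones_like(d_fake))
    return d_loss, g_loss
```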
arXiv Detail & Related papers (2023-01-15T03:13:09Z)
- Supporting Vision-Language Model Inference with Confounder-pruning Knowledge Prompt [71.77504700496004]
Vision-language models are pre-trained by aligning image-text pairs in a common space to deal with open-set visual concepts.
To boost the transferability of the pre-trained models, recent works adopt fixed or learnable prompts.
However, how and what prompts can improve inference performance remains unclear.
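A minimal sketch of learnable prompts in the CoOp style, where only a few context vectors are trained while the pre-trained vision-language backbone stays frozen; sizes and names are illustrative:
```python
import torch
import torch.nn as nn

class LearnablePrompt(nn.Module):
    """A few trainable context vectors are prepended to each class-name
    embedding; only these vectors receive gradients."""

    def __init__(self, num_ctx, dim, class_embeds):  # class_embeds: (C, L, D)
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(num_ctx, dim) * 0.02)
        self.register_buffer("class_embeds", class_embeds)

    def forward(self):
        c = self.class_embeds.size(0)
        ctx = self.ctx.unsqueeze(0).expand(c, -1, -1)
        return torch.cat([ctx, self.class_embeds], dim=1)  # (C, num_ctx+L, D)
```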
arXiv Detail & Related papers (2022-05-23T07:51:15Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews, the dataset generated by this framework, are better at detecting human-written disinformation by 3.62% to 7.69% F1 on two public datasets.
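A minimal sketch of the self-critical sequence training step, assuming the NLI-derived validity score has already been computed as a scalar reward per sequence (names are illustrative):
```python
def scst_loss(sample_logprobs, sample_reward, greedy_reward):
    """Self-critical sequence training step: the reward of a sampled article
    (here, an NLI-derived validity score) is baselined against the reward of
    the greedy decode, so only samples that beat the baseline are reinforced.
    All arguments are (batch,) torch tensors; sample_logprobs holds the
    summed log-probs of the sampled tokens."""
    advantage = (sample_reward - greedy_reward).detach()
    return -(advantage * sample_logprobs).mean()
```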
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- Factual Probing Is [MASK]: Learning vs. Learning to Recall [8.668111159444273]
Petroni et al. (2019) demonstrated that it is possible to retrieve world facts from a pre-trained language model by expressing them as cloze-style prompts.
We make two complementary contributions to better understand these factual probing techniques.
We find, somewhat surprisingly, that the training data used by these methods contains certain regularities of the underlying fact distribution.
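A minimal sketch of cloze-style factual probing in the spirit of Petroni et al. (2019); the template is an illustrative assumption:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def probe(subject, template="{} was born in [MASK]."):
    """Rank the model's candidate fillers for a factual cloze prompt."""
    return [(p["token_str"], round(p["score"], 3))
            for p in fill(template.format(subject))]

# probe("Dante") ranks candidate birthplaces by the model's token probability.
```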
arXiv Detail & Related papers (2021-04-12T07:11:40Z)
- Cross-Domain Learning for Classifying Propaganda in Online Contents [67.10699378370752]
We present an approach that leverages cross-domain learning, based on labeled documents and sentences from news and tweets, as well as political speeches with a clearly different degree of propagandistic content.
Our experiments demonstrate the usefulness of this approach, and identify difficulties and limitations in various configurations of sources and targets for the transfer step.
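A minimal sketch of one way to mix the labeled sources for the transfer step; the weighted sampling scheme is an illustrative assumption, not the paper's recipe:
```python
import random

def mixed_domain_batches(sources, batch_size, weights=None):
    """Yield batches whose examples are drawn from several labeled pools
    (e.g. news sentences, tweets, political speeches) according to mixing
    weights, so the classifier sees all domains during training.
    sources: dict mapping domain name -> list of labeled examples."""
    names = list(sources)
    weights = weights or [1.0] * len(names)
    while True:
        yield [random.choice(sources[random.choices(names, weights)[0]])
               for _ in range(batch_size)]
```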
arXiv Detail & Related papers (2020-11-13T10:19:13Z)
- Commonsense Evidence Generation and Injection in Reading Comprehension [57.31927095547153]
We propose a Commonsense Evidence Generation and Injection framework in reading comprehension, named CEGI.
The framework injects two kinds of auxiliary commonsense evidence into the reading comprehension process to equip the machine with the ability of rational thinking.
Experiments on the CosmosQA dataset demonstrate that the proposed CEGI model outperforms the current state-of-the-art approaches.
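A minimal sketch of evidence injection, assuming the generated commonsense statements are simply concatenated to the passage (CEGI's actual input format may differ):
```python
def inject_evidence(context, question, evidences):
    """Append generated commonsense evidence sentences to the passage so a
    standard reader model can attend to them alongside the original text."""
    return context + " " + " ".join(evidences) + " [SEP] " + question

# inject_evidence("Tom dropped the glass.", "What happened next?",
#                 ["Glass breaks when it falls."])
```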
arXiv Detail & Related papers (2020-05-11T16:31:08Z)