GPTs at Factify 2022: Prompt Aided Fact-Verification
- URL: http://arxiv.org/abs/2206.14913v1
- Date: Wed, 29 Jun 2022 21:07:39 GMT
- Title: GPTs at Factify 2022: Prompt Aided Fact-Verification
- Authors: Pawan Kumar Sahu, Saksham Aggarwal, Taneesh Gupta, Gyanendra Das
- Abstract summary: We present our solution based on two approaches: a PLM (pre-trained language model) based method and a prompt-based method.
We achieved an F1 score of 0.6946 on the FACTIFY dataset and a 7th position on the competition leader-board.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the most pressing societal issues is the fight against false news. False claims, as difficult as they are to expose, cause a great deal of damage. To tackle the problem, fact verification becomes crucial and has thus been a topic of interest among diverse research communities. Using only the textual form of the data, we propose our solution to the problem and achieve results competitive with other approaches. We present our solution based on two approaches: a PLM (pre-trained language model) based method and a prompt-based method. The PLM-based approach uses traditional supervised learning, where the model is trained to take 'x' as input and output the prediction 'y' by modeling P(y|x). Prompt-based learning, in contrast, designs the input to fit the model, so that the original objective can be re-framed as a (masked) language modeling problem. We can further stimulate the rich knowledge in PLMs to better serve downstream tasks by employing extra prompts when fine-tuning them. Our experiments showed that the proposed method performs better than simply fine-tuning PLMs. We achieved an F1 score of 0.6946 on the FACTIFY dataset and 7th position on the competition leader-board.
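To make the contrast concrete, here is a minimal sketch of the prompt-based formulation: the claim-evidence pair is wrapped in a cloze template and a masked LM scores a small set of label words. The template, verbalizer, and model choice are illustrative assumptions, not the authors' exact FACTIFY configuration.

```python
# Minimal sketch: fact verification re-framed as masked language modeling.
# The template, verbalizer words, and model are illustrative assumptions,
# not the authors' exact FACTIFY configuration.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Verbalizer: map label words to task labels (hypothetical mapping).
verbalizer = {"true": "support", "false": "refute", "unknown": "neutral"}
label_words = list(verbalizer)
word_ids = tokenizer.convert_tokens_to_ids(label_words)

def classify(claim: str, evidence: str) -> str:
    # Wrap the pair in a cloze template; the PLM fills the [MASK].
    prompt = (f"Claim: {claim} Evidence: {evidence} "
              f"The claim is {tokenizer.mask_token}.")
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Position of the [MASK] token in the input.
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
    # Compare the PLM's scores only over the verbalizer words.
    best = logits[0, mask_pos, word_ids].argmax().item()
    return verbalizer[label_words[best]]

print(classify("The Eiffel Tower is in Berlin.",
               "The Eiffel Tower is a landmark in Paris, France."))
```

The supervised PLM baseline would instead feed the same pair to a sequence-classification head trained directly on P(y|x); the prompt-based variant reuses the pre-trained masked-LM head and only changes how the input is posed.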
Related papers
- MindStar: Enhancing Math Reasoning in Pre-trained LLMs at Inference Time [51.5039731721706]
MindStar is a purely inference-based search method for large language models.
It formulates reasoning tasks as search problems and proposes two search ideas to identify optimal reasoning paths.
It significantly enhances the reasoning abilities of open-source models, such as Llama-2-13B and Mistral-7B, and achieves comparable performance to GPT-3.5 and Grok-1.
arXiv Detail & Related papers (2024-05-25T15:07:33Z) - Reinforcement Learning from Multi-role Debates as Feedback for Bias Mitigation in LLMs [6.090496490133132]
We propose Reinforcement Learning from Multi-role Debates as Feedback (RLDF), a novel approach for bias mitigation replacing human feedback in traditional RLHF.
We utilize LLMs in multi-role debates to create a dataset that includes both high-bias and low-bias instances for training the reward model in reinforcement learning.
arXiv Detail & Related papers (2024-04-15T22:18:50Z) - Give Me the Facts! A Survey on Factual Knowledge Probing in Pre-trained
Language Models [2.3981254787726067]
Pre-trained Language Models (PLMs) are trained on vast unlabeled data, rich in world knowledge.
This has sparked the interest of the community in quantifying the amount of factual knowledge present in PLMs.
In this work, we survey methods and datasets that are used to probe PLMs for factual knowledge.
arXiv Detail & Related papers (2023-10-25T11:57:13Z) - Making Pre-trained Language Models both Task-solvers and
Self-calibrators [52.98858650625623]
Pre-trained language models (PLMs) serve as backbones for various real-world systems, yet their confidence estimates are often poorly calibrated.
Previous work shows that introducing an extra calibration task can mitigate this issue.
We propose a training algorithm, LM-TOAST, to make PLMs both task-solvers and self-calibrators.
arXiv Detail & Related papers (2023-07-21T02:51:41Z) - Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs [60.61002524947733]
Previous confidence elicitation methods rely on white-box access to internal model information or on model fine-tuning, neither of which is available for closed-source LLMs served through APIs.
This creates a growing need to explore the untapped area of black-box approaches to uncertainty estimation.
We define a systematic framework with three components: prompting strategies for eliciting verbalized confidence, sampling methods for generating multiple responses, and aggregation techniques for computing consistency.
arXiv Detail & Related papers (2023-06-22T17:31:44Z) - Pre-training Language Models with Deterministic Factual Knowledge [42.812774794720895]
- Pre-training Language Models with Deterministic Factual Knowledge [42.812774794720895]
We propose to let PLMs learn the deterministic relationship between the remaining context and the masked content.
Two pre-training tasks are introduced to motivate PLMs to rely on the deterministic relationship when filling masks.
Experiments indicate that the continuously pre-trained PLMs achieve better robustness in factual knowledge capturing.
arXiv Detail & Related papers (2022-10-20T11:04:09Z) - PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation [89.0074567748505]
We propose a new metric to accurately predict prompt transferability, and a novel prompt transfer (PoT) approach, PANDA.
Experiments show that: 1) the proposed metric works well to predict prompt transferability; 2) PANDA consistently outperforms the vanilla PoT approach by a 2.3% average score (up to 24.1%) across all tasks and model sizes; and 3) with PANDA, prompt-tuning achieves competitive and even better performance than model-tuning at various PLM scales.
arXiv Detail & Related papers (2022-08-22T09:14:14Z) - Prompt Tuning for Discriminative Pre-trained Language Models [96.04765512463415]
Recent works have shown promising results of prompt tuning in stimulating pre-trained language models (PLMs) for natural language processing (NLP) tasks.
It is still unknown whether and how discriminative PLMs, e.g., ELECTRA, can be effectively prompt-tuned.
We present DPT, the first prompt tuning framework for discriminative PLMs, which reformulates NLP tasks into a discriminative language modeling problem.
arXiv Detail & Related papers (2022-05-23T10:11:50Z) - RuleBert: Teaching Soft Rules to Pre-trained Language Models [21.69870624809201]
- RuleBert: Teaching Soft Rules to Pre-trained Language Models [21.69870624809201]
We introduce a classification task where, given facts and soft rules, the PLM should return a prediction with a probability for a given hypothesis.
We propose a revised loss function that enables the PLM to learn how to predict precise probabilities for the task.
Our evaluation results show that the resulting fine-tuned models achieve very high performance, even on logical rules that were unseen at training.
arXiv Detail & Related papers (2021-09-24T16:19:25Z) - M2P2: Multimodal Persuasion Prediction using Adaptive Fusion [65.04045695380333]
- M2P2: Multimodal Persuasion Prediction using Adaptive Fusion [65.04045695380333]
This paper addresses two problems: Debate Outcome Prediction (DOP), which predicts who wins a debate, and Intensity of Persuasion Prediction (IPP), which predicts the change in the number of votes before and after a speaker speaks.
Our M2P2 framework is the first to use multimodal (acoustic, visual, language) data to solve the IPP problem.
arXiv Detail & Related papers (2020-06-03T18:47:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.