Zero-Shot Learning and Key Points Are All You Need for Automated Fact-Checking
- URL: http://arxiv.org/abs/2408.08400v1
- Date: Thu, 15 Aug 2024 19:57:42 GMT
- Title: Zero-Shot Learning and Key Points Are All You Need for Automated Fact-Checking
- Authors: Mohammad Ghiasvand Mohammadkhani, Ali Ghiasvand Mohammadkhani, Hamid Beigy
- Abstract summary: This work introduces a framework based on Zero-Shot Learning and Key Points (ZSL-KeP) for automated fact-checking.
It performs well on the AVeriTeC shared task dataset by robustly improving the baseline and achieving 10th place.
- Score: 10.788661063801703
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated fact-checking is an important task: determining the veracity of a proposed claim amid the vast amount of information available online is a critical challenge, and robust evaluation is needed to prevent the spread of false information. Modern large language models (LLMs) have demonstrated high capability across a diverse range of Natural Language Processing (NLP) tasks. With suitable prompting strategies, their large context windows and zero-shot learning ability make them versatile enough to simulate human problem-solving intuition and to serve as an alternative to humans for such tasks. In this work, we introduce a straightforward framework based on Zero-Shot Learning and Key Points (ZSL-KeP) for automated fact-checking which, despite its simplicity, performed well on the AVeriTeC shared task dataset, robustly improving on the baseline and achieving 10th place.
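The abstract does not spell out the pipeline, so the following is only a minimal sketch of a zero-shot, key-point-style fact-checking flow. The two-stage structure, the `call_llm` callable, and the prompts are illustrative assumptions rather than the authors' implementation; the verdict labels follow the AVeriTeC task.

```python
from typing import Callable, List

# AVeriTeC-style verdict labels (assumed here; check the task definition).
VERDICTS = ["Supported", "Refuted", "Not Enough Evidence",
            "Conflicting Evidence/Cherrypicking"]

def extract_key_points(claim: str, call_llm: Callable[[str], str]) -> List[str]:
    """Zero-shot: ask the model to decompose a claim into checkable key points."""
    prompt = ("List the key checkable points of the following claim, "
              "one per line, with no extra commentary.\n"
              f"Claim: {claim}\nKey points:")
    return [ln.lstrip("- ").strip() for ln in call_llm(prompt).splitlines() if ln.strip()]

def predict_verdict(claim: str, key_points: List[str], evidence: List[str],
                    call_llm: Callable[[str], str]) -> str:
    """Zero-shot: judge the claim against retrieved evidence, guided by key points."""
    prompt = (f"Claim: {claim}\n"
              "Key points:\n" + "\n".join(f"- {k}" for k in key_points) + "\n"
              "Evidence:\n" + "\n".join(f"- {e}" for e in evidence) + "\n"
              f"Answer with exactly one of: {', '.join(VERDICTS)}.")
    answer = call_llm(prompt).strip()
    # Fall back to the abstaining label if the model answers free-form.
    return answer if answer in VERDICTS else "Not Enough Evidence"
```

Any chat-completion endpoint can stand in for `call_llm`; the fallback simply keeps free-form model output inside the label set.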
Related papers
- Model Tells Itself Where to Attend: Faithfulness Meets Automatic Attention Steering [108.2131720470005]
Large language models (LLMs) have demonstrated remarkable performance across various real-world tasks.
However, they often struggle to fully comprehend and effectively utilize their input contexts, resulting in responses that are unfaithful or hallucinated.
We propose AutoPASTA, a method that automatically identifies key contextual information and explicitly highlights it by steering an LLM's attention scores (a toy illustration follows below).
arXiv Detail & Related papers (2024-09-16T23:52:41Z)
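AutoPASTA's actual mechanism operates inside a transformer's attention layers; as a rough, self-contained stand-in for "steering attention scores", here is a toy single-query attention in NumPy where logits at highlighted key positions are boosted before the softmax. The boost factor `beta` and the whole setup are assumptions for illustration, not the paper's method.

```python
import numpy as np

def steered_attention(q, K, V, highlight_idx, beta=2.0):
    """Single-query scaled dot-product attention where logits at highlighted
    key positions receive an additive log(beta) boost before the softmax."""
    logits = K @ q / np.sqrt(q.shape[-1])
    logits[highlight_idx] += np.log(beta)   # upweight highlighted tokens
    weights = np.exp(logits - logits.max()) # stable softmax
    weights /= weights.sum()
    return weights @ V

rng = np.random.default_rng(0)
q, K, V = rng.normal(size=8), rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
print(steered_attention(q, K, V, highlight_idx=[2, 3]))
```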
- Affordance-Guided Reinforcement Learning via Visual Prompting [51.361977466993345]
Keypoint-based Affordance Guidance for Improvements (KAGI) is a method leveraging rewards shaped by vision-language models (VLMs) for autonomous RL.
On real-world manipulation tasks specified by natural language descriptions, KAGI improves the sample efficiency of autonomous RL and enables successful task completion in 20K online fine-tuning steps.
arXiv Detail & Related papers (2024-07-14T21:41:29Z)
- One Stone, Four Birds: A Comprehensive Solution for QA System Using Supervised Contrastive Learning [3.6790609942543187]
This paper presents a novel and comprehensive solution to enhance the robustness and efficiency of question answering (QA) systems through supervised contrastive learning (SCL).
We define four key tasks: user input intent classification, out-of-domain input detection, new intent discovery, and continual learning.
With minimal additional tuning on downstream tasks, our approach significantly improves model efficiency and achieves new state-of-the-art performance across all tasks (a generic form of the SCL objective is sketched below).
arXiv Detail & Related papers (2024-07-12T06:01:51Z)
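The entry above names supervised contrastive learning as its workhorse. As a reference point, here is a minimal NumPy version of the standard supervised contrastive loss (Khosla et al., 2020), which pulls same-label embeddings together and pushes others apart; the temperature and toy data are illustrative, and the paper's exact objective may differ.

```python
import numpy as np

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalised embeddings z (n x d)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)   # exclude self-similarity
    row_max = sim.max(axis=1, keepdims=True)  # stable log-softmax per anchor
    log_prob = sim - (row_max + np.log(np.exp(sim - row_max).sum(axis=1, keepdims=True)))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask  # same-label pairs
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()

z = np.random.default_rng(0).normal(size=(6, 16))
print(supervised_contrastive_loss(z, np.array([0, 0, 1, 1, 2, 2])))
```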
- Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the order of all multi-task data for training.
At the task level, we aim to find the optimal task order that minimizes the total cross-task interference risk.
At the instance level, we measure the difficulty of every instance per task, then divide them into easy-to-difficult mini-batches for training (the instance-level step is sketched below).
arXiv Detail & Related papers (2024-01-07T18:12:20Z)
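The instance-level step flagged above can be pictured as sorting by a difficulty score and chunking into batches. The difficulty measure itself and the task-order search are not reproduced here; this only sketches the easy-to-difficult arrangement.

```python
from typing import List, Sequence, TypeVar

T = TypeVar("T")

def easy_to_difficult_batches(instances: Sequence[T],
                              difficulty: Sequence[float],
                              batch_size: int) -> List[List[T]]:
    """Order instances by ascending difficulty, then chunk into mini-batches,
    so training consumes easy batches before hard ones."""
    order = sorted(range(len(instances)), key=difficulty.__getitem__)
    ranked = [instances[i] for i in order]
    return [ranked[i:i + batch_size] for i in range(0, len(ranked), batch_size)]

print(easy_to_difficult_batches(["a", "b", "c", "d"], [0.9, 0.1, 0.5, 0.3], 2))
# [['b', 'd'], ['c', 'a']]
```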
- Enabling Language Models to Implicitly Learn Self-Improvement [49.16868302881804]
Large Language Models (LLMs) have demonstrated remarkable capabilities in open-ended text generation tasks.
We propose an ImPlicit Self-ImprovemenT (PIT) framework that implicitly learns the improvement goal from human preference data.
arXiv Detail & Related papers (2023-10-02T04:29:40Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce BAdam, a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance among prior-based methods on challenging single-headed class-incremental experiments (a generic prior-penalty sketch follows below).
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
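BAdam's exact update rule is not described in the blurb above; what prior-based continual learning methods share is a penalty anchoring parameters to a previous-task solution. The EWC-style quadratic penalty below is that generic ingredient, not BAdam itself; the precision weights and `lam` are placeholders.

```python
import numpy as np

def quadratic_prior_penalty(params, prior_mean, precision, lam=1.0):
    """Generic prior-based regulariser: penalise movement away from a
    previous-task solution, weighted per-parameter by a precision estimate."""
    return lam * np.sum(precision * (params - prior_mean) ** 2)

theta = np.array([0.5, -1.2, 2.0])
theta_old = np.array([0.4, -1.0, 2.5])
fisher = np.array([10.0, 1.0, 0.1])  # stand-in per-parameter importance weights
print(quadratic_prior_penalty(theta, theta_old, fisher))  # 0.165
```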
- Information Association for Language Model Updating by Mitigating LM-Logical Discrepancy [68.31760483418901]
Large Language Models (LLMs) struggle to provide current information because their pre-training data becomes outdated.
Existing methods for updating LLMs, such as knowledge editing and continual fine-tuning, have significant drawbacks in the generalizability of new information.
We identify the core challenge behind these drawbacks: the LM-logical discrepancy, i.e., the gap between language modeling probabilities and logical probabilities.
arXiv Detail & Related papers (2023-05-29T19:48:37Z)
- Retrieval-guided Counterfactual Generation for QA [5.434621727606356]
We focus on the task of creating counterfactuals for question answering.
We develop a Retrieve-Generate-Filter (RGF) technique to create counterfactual evaluation and training data.
We find that RGF data leads to significant improvements in a model's robustness to local perturbations (a skeleton of the pipeline is sketched below).
arXiv Detail & Related papers (2021-10-14T17:56:37Z)
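The retrieve-generate-filter loop referenced above can be pictured as three pluggable stages; the function signatures below are assumptions for illustration, not the authors' interfaces.

```python
from typing import Callable, List, Tuple

def retrieve_generate_filter(
    question: str,
    answer: str,
    retrieve: Callable[[str], List[str]],             # fetch alternative contexts
    generate: Callable[[str, str], Tuple[str, str]],  # perturbed (q, a) from a context
    keep: Callable[[str, str], bool],                 # drop malformed counterfactuals
) -> List[Tuple[str, str]]:
    """Skeleton of a retrieve-generate-filter loop for counterfactual QA data."""
    counterfactuals = []
    for context in retrieve(question):
        new_q, new_a = generate(question, context)
        if (new_q, new_a) != (question, answer) and keep(new_q, new_a):
            counterfactuals.append((new_q, new_a))
    return counterfactuals

pairs = retrieve_generate_filter(
    "Who wrote Hamlet?", "Shakespeare",
    retrieve=lambda q: ["ctx-1", "ctx-2"],
    generate=lambda q, c: (f"{q} ({c})", "Shakespeare"),
    keep=lambda q, a: bool(q and a),
)
print(pairs)
```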
- Scalable Active Learning for Object Detection [20.99502312184771]
Active learning is a powerful technique to improve data efficiency for supervised learning methods.
We have built a scalable production system for active learning in the domain of autonomous driving (a generic uncertainty-based acquisition step is sketched below).
arXiv Detail & Related papers (2020-04-09T17:28:56Z)
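The production system above is far richer than any snippet, but the core acquisition step it automates can be sketched as uncertainty sampling: score each unlabelled image and send the top-k to annotators. The scoring rule named in the comment is a placeholder, not the paper's criterion.

```python
from typing import List, Sequence

def select_for_labelling(uncertainty: Sequence[float], k: int) -> List[int]:
    """Return indices of the k most uncertain unlabelled images,
    e.g. scored by 1 - max detection confidence."""
    ranked = sorted(range(len(uncertainty)), key=uncertainty.__getitem__, reverse=True)
    return ranked[:k]

print(select_for_labelling([0.05, 0.8, 0.3, 0.95], k=2))  # [3, 1]
```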