IDIAPers @ Causal News Corpus 2022: Efficient Causal Relation
Identification Through a Prompt-based Few-shot Approach
- URL: http://arxiv.org/abs/2209.03895v1
- Date: Thu, 8 Sep 2022 16:03:50 GMT
- Title: IDIAPers @ Causal News Corpus 2022: Efficient Causal Relation
Identification Through a Prompt-based Few-shot Approach
- Authors: Sergio Burdisso, Juan Zuluaga-Gomez, Esau Villatoro-Tello, Martin
Fajcik, Muskaan Singh, Pavel Smrz, Petr Motlicek
- Abstract summary: We address the Causal Relation Identification (CRI) task by exploiting a set of simple yet complementary techniques for fine-tuning language models (LMs).
We follow a prompt-based prediction approach for fine-tuning LMs in which the CRI task is treated as a masked language modeling (MLM) problem.
We compare the performance of this method against ensemble techniques trained on the entire dataset.
- Score: 3.4423596432619754
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we describe our participation in Subtask 1 of CASE-2022,
Event Causality Identification with Causal News Corpus. We address the Causal
Relation Identification (CRI) task by exploiting a set of simple yet
complementary techniques for fine-tuning language models (LMs) on a small
number of annotated examples (i.e., a few-shot configuration). We follow a
prompt-based prediction approach for fine-tuning LMs in which the CRI task is
treated as a masked language modeling (MLM) problem. This approach allows LMs
natively pre-trained on MLM problems to directly generate textual responses to
CRI-specific prompts. We compare the performance of this method against
ensemble techniques trained on the entire dataset. Our best-performing
submission was trained only with 256 instances per class, a small portion of
the entire dataset, and yet was able to obtain the second-best precision
(0.82), third-best accuracy (0.82), and an F1-score (0.85) very close to what
was reported by the winning team (0.86).
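To make the prompt-based formulation concrete, the sketch below shows how a masked LM can be queried with a cloze-style prompt whose masked position is scored against a verbalizer mapping answer words to the causal / non-causal labels. This is a minimal illustration assuming a RoBERTa-style MLM backbone and a hypothetical template and verbalizer; it is not the exact configuration reported in the paper.

```python
# Minimal sketch of prompt-based Causal Relation Identification (CRI) as
# masked language modeling. The template, verbalizer words, and model name
# are illustrative assumptions, not the paper's exact setup.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "roberta-base"  # assumed MLM-pretrained backbone
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Hypothetical cloze template and verbalizer (label -> answer word).
TEMPLATE = "{sentence} Does this sentence express a causal relation? {mask}."
VERBALIZER = {"causal": " Yes", "non-causal": " No"}

def classify(sentence: str) -> str:
    """Score the verbalizer words at the mask position and return the best label."""
    text = TEMPLATE.format(sentence=sentence, mask=tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    # Locate the mask token in the encoded prompt.
    mask_idx = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        mask_logits = model(**inputs).logits[0, mask_idx]  # shape: (1, vocab_size)
    # Compare the logits of the two verbalizer words at the mask position.
    scores = {
        label: mask_logits[0, tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word))[0]].item()
        for label, word in VERBALIZER.items()
    }
    return max(scores, key=scores.get)

print(classify("Heavy rainfall caused severe flooding across the region."))
```

In a typical prompt-based few-shot setup, the same template is also used during fine-tuning: the verbalizer word of the gold label is placed at the mask position and the model is trained with its native MLM loss, so no task-specific classification head is required. The abstract above reports that the best-performing submission was trained with only 256 instances per class in this few-shot configuration.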
Related papers
- New Dataset and Methods for Fine-Grained Compositional Referring Expression Comprehension via Specialist-MLLM Collaboration [49.180693704510006]
Referring Expression Comprehension (REC) is a cross-modal task that evaluates the interplay of language understanding, image comprehension, and language-to-image grounding.
Referring Expression (REC) is a cross-modal task that evaluates the interplay of language understanding, image comprehension, and language-to-image grounding.
We introduce a new REC dataset with two key features. First, it is designed with controllable difficulty levels, requiring fine-grained reasoning across object categories, attributes, and relationships.
Second, it incorporates negative text and images generated through fine-grained editing, explicitly testing a model's ability to reject non-existent targets.
arXiv Detail & Related papers (2025-02-27T13:58:44Z) - SPARC: Score Prompting and Adaptive Fusion for Zero-Shot Multi-Label Recognition in Vision-Language Models [74.40683913645731]
Zero-shot multi-label recognition (MLR) with Vision-Language Models (VLMs) faces significant challenges without training data, model tuning, or architectural modifications.
Our work proposes a novel solution treating VLMs as black boxes, leveraging scores without training data or ground truth.
Analysis of these prompt scores reveals VLM biases and 'AND'/'OR' signal ambiguities, notably that maximum scores are surprisingly suboptimal compared to second-highest scores.
arXiv Detail & Related papers (2025-02-24T07:15:05Z) - Fine-tuned Large Language Models (LLMs): Improved Prompt Injection Attacks Detection [6.269725911814401]
Large language models (LLMs) are becoming a popular tool as they have significantly advanced in their capability to tackle a wide range of language-based tasks.
However, LLM applications are highly vulnerable to prompt injection attacks, which pose a critical problem.
This project explores the security vulnerabilities in relation to prompt injection attacks.
arXiv Detail & Related papers (2024-10-28T00:36:21Z) - SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z) - Low-resource classification of mobility functioning information in
clinical sentences using large language models [0.0]
This study evaluates the ability of publicly available large language models (LLMs) to accurately identify the presence of functioning information from clinical notes.
We collect a balanced binary classification dataset of 1000 sentences from the Mobility NER dataset, which was curated from n2c2 clinical notes.
arXiv Detail & Related papers (2023-12-15T20:59:17Z) - Look Before You Leap: A Universal Emergent Decomposition of Retrieval
Tasks in Language Models [58.57279229066477]
We study how language models (LMs) solve retrieval tasks in diverse situations.
We introduce ORION, a collection of structured retrieval tasks spanning six domains.
We find that LMs internally decompose retrieval tasks in a modular way.
arXiv Detail & Related papers (2023-12-13T18:36:43Z) - Contextual Biasing of Named-Entities with Large Language Models [12.396054621526643]
This paper studies contextual biasing with Large Language Models (LLMs).
During second-pass rescoring, additional contextual information is provided to an LLM to boost Automatic Speech Recognition (ASR) performance.
We propose leveraging prompts that incorporate a biasing list and few-shot examples for an LLM during rescoring, without fine-tuning.
arXiv Detail & Related papers (2023-09-01T20:15:48Z) - Instruction Position Matters in Sequence Generation with Large Language
Models [67.87516654892343]
Large language models (LLMs) are capable of performing conditional sequence generation tasks, such as translation or summarization.
We propose enhancing the instruction-following capability of LLMs by shifting the position of task instructions after the input sentences.
arXiv Detail & Related papers (2023-08-23T12:36:57Z) - Estimating Large Language Model Capabilities without Labeled Test Data [51.428562302037534]
Large Language Models (LLMs) have the impressive ability to perform in-context learning (ICL) from only a few examples.
We propose the task of ICL accuracy estimation, in which we predict the accuracy of an LLM when doing in-context learning on a new task.
arXiv Detail & Related papers (2023-05-24T06:55:09Z) - SatLM: Satisfiability-Aided Language Models Using Declarative Prompting [68.40726892904286]
We propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of large language models (LLMs).
We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer.
We evaluate SATLM on 8 different datasets and show that it consistently outperforms program-aided LMs in the imperative paradigm.
arXiv Detail & Related papers (2023-05-16T17:55:51Z) - Toward Efficient Language Model Pretraining and Downstream Adaptation
via Self-Evolution: A Case Study on SuperGLUE [203.65227947509933]
This report describes our JDExplore d-team's Vega v2 submission on the SuperGLUE leaderboard.
SuperGLUE is more challenging than the widely used general language understanding evaluation (GLUE) benchmark, containing eight difficult language understanding tasks.
arXiv Detail & Related papers (2022-12-04T15:36:18Z) - Towards No.1 in CLUE Semantic Matching Challenge: Pre-trained Language
Model Erlangshen with Propensity-Corrected Loss [12.034243662298035]
This report describes a pre-trained language model Erlangshen with propensity-corrected loss.
We construct a dynamic masking strategy based on knowledge in Masked Language Modeling (MLM) with whole word masking.
Overall, we achieve 72.54 points in F1 Score and 78.90 points in Accuracy on the test set.
arXiv Detail & Related papers (2022-08-05T02:52:29Z) - Generating Training Data with Language Models: Towards Zero-Shot
Language Understanding [35.92571138322246]
Pretrained language models (PLMs) have demonstrated remarkable performance in various natural language processing tasks.
We present a simple approach that uses both types of PLMs for fully zero-shot learning of NLU tasks.
Our approach demonstrates strong performance across seven classification tasks of the GLUE benchmark.
arXiv Detail & Related papers (2022-02-09T16:02:18Z)