Structured Semantic Information Helps Retrieve Better Examples for In-Context Learning in Few-Shot Relation Extraction
- URL: http://arxiv.org/abs/2601.20803v1
- Date: Wed, 28 Jan 2026 17:48:58 GMT
- Title: Structured Semantic Information Helps Retrieve Better Examples for In-Context Learning in Few-Shot Relation Extraction
- Authors: Aunabil Chakma, Mihai Surdeanu, Eduardo Blanco
- Abstract summary: We introduce a novel strategy for example selection, in which new examples are selected based on the similarity of their underlying syntactic-semantic structure to the provided one-shot example. When these strategies are combined, the resulting hybrid system achieves a more holistic picture of the relations of interest than either method alone.
- Score: 24.515561762205618
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents several strategies to automatically obtain additional examples for in-context learning of one-shot relation extraction. Specifically, we introduce a novel strategy for example selection, in which new examples are selected based on the similarity of their underlying syntactic-semantic structure to the provided one-shot example. We show that this method results in complementary word choices and sentence structures when compared to LLM-generated examples. When these strategies are combined, the resulting hybrid system achieves a more holistic picture of the relations of interest than either method alone. Our framework transfers well across datasets (FS-TACRED and FS-FewRel) and LLM families (Qwen and Gemma). Overall, our hybrid selection method consistently outperforms alternative strategies and achieves state-of-the-art performance on FS-TACRED and strong gains on a customized FewRel subset.
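To make the selection strategy concrete, here is a minimal Python sketch of structure-based example retrieval: candidates are ranked by the overlap of their dependency structures with the one-shot seed. The triple representation and Jaccard scoring are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch of structure-based example selection, assuming each
# candidate sentence has been parsed offline into a set of dependency
# triples (head POS, relation, dependent POS). Representation and scoring
# are illustrative; the paper's syntactic-semantic structures may differ.

def jaccard(a: set, b: set) -> float:
    """Overlap of two structure sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def select_examples(seed_struct, candidates, k=5):
    """Rank candidates by structural similarity to the one-shot seed.

    seed_struct: set of dependency triples for the provided example.
    candidates:  list of (sentence, struct_set) pairs.
    """
    scored = [(jaccard(seed_struct, s), sent) for sent, s in candidates]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [sent for _, sent in scored[:k]]

# Toy usage with hand-made triples.
seed = {("VERB", "nsubj", "PROPN"), ("VERB", "obj", "PROPN")}
pool = [
    ("Alice founded Acme.", {("VERB", "nsubj", "PROPN"), ("VERB", "obj", "PROPN")}),
    ("It rained yesterday.", {("VERB", "advmod", "NOUN")}),
]
print(select_examples(seed, pool, k=1))  # -> ['Alice founded Acme.']
```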
Related papers
- Reasoning Graph Enhanced Exemplars Retrieval for In-Context Learning [13.381974811214764]
Reasoning Graph-enhanced Exemplar Retrieval (RGER) uses a graph kernel to select exemplars with semantic and structural similarity. Our code is released at https://github.com/Yukang-Lin/RGER.
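A rough sense of graph-kernel exemplar scoring can be given with a one-iteration Weisfeiler-Lehman subtree kernel; the graph encoding and kernel depth here are assumptions, and RGER's actual kernel may differ.

```python
# A one-iteration Weisfeiler-Lehman subtree kernel over node-labeled
# reasoning graphs: nodes are relabeled by (label, sorted neighbor labels)
# and the kernel is the dot product of the resulting feature histograms.
from collections import Counter

def wl_features(labels, edges):
    """labels: {node: label}; edges: iterable of undirected (u, v) pairs."""
    nbrs = {n: [] for n in labels}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    # Depth-0 features (raw labels) plus depth-1 WL relabelings.
    feats = Counter(labels.values())
    for n, lab in labels.items():
        feats[(lab, tuple(sorted(labels[m] for m in nbrs[n])))] += 1
    return feats

def wl_kernel(g1, g2):
    """Subtree-pattern overlap between two graphs."""
    f1, f2 = wl_features(*g1), wl_features(*g2)
    return sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())

q = ({0: "add", 1: "num", 2: "num"}, [(0, 1), (0, 2)])
ex = ({0: "add", 1: "num", 2: "num"}, [(0, 1), (0, 2)])
print(wl_kernel(q, ex))  # higher score = more structurally similar exemplar
```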
arXiv Detail & Related papers (2024-09-17T12:58:29Z)
- Balancing Diversity and Risk in LLM Sampling: How to Select Your Method and Parameter for Open-Ended Text Generation [60.493180081319785]
We propose a systematic way to estimate the capacity of a truncation sampling method by considering the trade-off between diversity and risk at each decoding step. Our work offers a comprehensive comparison of existing truncation sampling methods and serves as a practical user guideline for their parameter selection.
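The per-step trade-off can be illustrated for nucleus (top-p) truncation, using the entropy of the kept distribution as a diversity proxy and the probability of the least-likely kept token as a risk proxy. Both proxies are simplifying assumptions for illustration.

```python
# Diversity/risk proxies for nucleus (top-p) truncation at one decoding step.
import math

def top_p_stats(probs, p):
    """Truncate to the smallest prefix with mass >= p, then report proxies."""
    kept, total = [], 0.0
    for q in sorted(probs, reverse=True):
        kept.append(q)
        total += q
        if total >= p:
            break
    z = sum(kept)
    renorm = [q / z for q in kept]
    entropy = -sum(q * math.log(q) for q in renorm)  # diversity proxy
    risk = min(renorm)  # chance of drawing the least-likely kept token
    return len(kept), entropy, risk

dist = [0.5, 0.2, 0.1, 0.08, 0.06, 0.04, 0.02]
for p in (0.5, 0.9, 0.99):
    print(p, top_p_stats(dist, p))  # larger p -> more diversity, more tail risk
```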
arXiv Detail & Related papers (2024-08-24T14:14:32Z)
- SCOI: Syntax-augmented Coverage-based In-context Example Selection for Machine Translation [13.87098305304058]
In this work, we introduce syntactic knowledge to select better in-context examples for machine translation (MT).
We propose a new strategy, namely Syntax-augmented COverage-based In-context example selection (SCOI).
Our proposed SCOI obtains the highest average COMET score among all learning-free methods.
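A minimal sketch of coverage-based selection, assuming sentences are reduced offline to sets of syntactic units (POS bigrams stand in here): greedily pick the example that covers the most still-uncovered units of the test input. SCOI's actual coverage measure is more elaborate.

```python
# Greedy coverage-based in-context example selection over syntactic units.

def greedy_cover(target_units, pool, k=3):
    """pool: list of (example_id, unit_set). Returns chosen example ids."""
    uncovered, chosen = set(target_units), []
    for _ in range(k):
        best = max(pool, key=lambda ex: len(uncovered & ex[1]), default=None)
        if best is None or not uncovered & best[1]:
            break  # nothing left to cover
        chosen.append(best[0])
        uncovered -= best[1]
    return chosen

target = {("DET", "NOUN"), ("NOUN", "VERB"), ("VERB", "ADV")}
pool = [
    ("ex1", {("DET", "NOUN"), ("NOUN", "VERB")}),
    ("ex2", {("VERB", "ADV")}),
    ("ex3", {("DET", "NOUN")}),
]
print(greedy_cover(target, pool))  # -> ['ex1', 'ex2']
```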
arXiv Detail & Related papers (2024-08-09T05:25:17Z)
- Going Beyond Word Matching: Syntax Improves In-context Example Selection for Machine Translation [13.87098305304058]
In-context learning (ICL) is the trending prompting strategy in the era of large language models (LLMs).
Previous works on in-context example selection for machine translation (MT) focus on superficial word-level features.
We propose a syntax-based in-context example selection method for MT, by computing the syntactic similarity between dependency trees.
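One way to make dependency-tree similarity concrete: compare the multisets of relation paths from each token to the root, which matches sentences with the same arc structure even with zero word overlap. This proxy is an assumption; the paper's similarity computation differs in detail.

```python
# Syntactic similarity between dependency trees via relation-path overlap.
# Trees are given as {child: (head, relation)} maps; the root's head is None.
from collections import Counter

def relation_paths(tree):
    """Multiset of dependency-relation paths from each token to the root."""
    paths = Counter()
    for node in tree:
        path, cur = [], node
        while tree[cur][0] is not None:
            path.append(tree[cur][1])
            cur = tree[cur][0]
        paths[tuple(path)] += 1
    return paths

def tree_sim(t1, t2):
    """Dice overlap of the two path multisets, in [0, 1]."""
    p1, p2 = relation_paths(t1), relation_paths(t2)
    shared = sum((p1 & p2).values())
    return 2 * shared / (sum(p1.values()) + sum(p2.values()))

# "She reads books." vs "He writes letters." share the same arc structure.
t1 = {"reads": (None, "root"), "She": ("reads", "nsubj"), "books": ("reads", "obj")}
t2 = {"writes": (None, "root"), "He": ("writes", "nsubj"), "letters": ("writes", "obj")}
print(tree_sim(t1, t2))  # -> 1.0 despite zero word overlap
```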
arXiv Detail & Related papers (2024-03-28T10:13:34Z)
- SEER: A Knapsack approach to Exemplar Selection for In-Context HybridQA [1.0323063834827413]
In this work, we present Selection of Exemplars for hybrid Reasoning (SEER), a novel method for selecting a set of exemplars that is both representative and diverse.
The effectiveness of SEER is demonstrated on FinQA and TAT-QA, two real-world benchmarks for HybridQA, where it outperforms previous exemplar selection methods.
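The knapsack view admits a compact sketch: maximize total relevance under a prompt-token budget with standard 0/1 dynamic programming. SEER's representativeness and diversity constraints are omitted here.

```python
# Exemplar selection as a 0/1 knapsack over a prompt-length budget.

def knapsack_select(exemplars, budget):
    """exemplars: list of (id, token_cost, relevance). Returns chosen ids."""
    # dp[b] = (best_value, chosen_ids) achievable within budget b.
    dp = [(0.0, [])] * (budget + 1)
    for eid, cost, value in exemplars:
        for b in range(budget, cost - 1, -1):  # descending: each item once
            cand = dp[b - cost][0] + value
            if cand > dp[b][0]:
                dp[b] = (cand, dp[b - cost][1] + [eid])
    return dp[budget][1]

pool = [("e1", 40, 0.9), ("e2", 60, 0.8), ("e3", 50, 0.7), ("e4", 30, 0.4)]
print(knapsack_select(pool, budget=100))  # -> ['e1', 'e2'] (value 1.7)
```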
arXiv Detail & Related papers (2023-10-10T14:50:20Z)
- RetICL: Sequential Retrieval of In-Context Examples with Reinforcement Learning [53.52699766206808]
We propose Retrieval for In-Context Learning (RetICL), a learnable method for modeling and optimally selecting examples sequentially for in-context learning.
We evaluate RetICL on math word problem solving and scientific question answering tasks and show that it consistently outperforms or matches learnable baselines.
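The sequential framing can be sketched as follows: examples are picked one at a time, each scored conditioned on the problem and on everything already selected. RetICL learns this scorer with reinforcement learning; the hand-written `score` below is only a placeholder.

```python
# Sequential in-context example selection with a stand-in scoring function.

def score(problem, context, candidate):
    """Placeholder scorer: reward token overlap with the problem, penalize
    overlap with already-selected context (encourages complementarity)."""
    p, c = set(problem.split()), set(" ".join(context).split())
    cand = set(candidate.split())
    return len(cand & p) - 0.5 * len(cand & c)

def sequential_select(problem, pool, k=2):
    context, remaining = [], list(pool)
    for _ in range(k):
        best = max(remaining, key=lambda ex: score(problem, context, ex))
        context.append(best)        # later choices condition on this one
        remaining.remove(best)
    return context

pool = ["trains leave at noon", "a train travels 60 miles", "apples cost 2 dollars"]
print(sequential_select("how far does the train travel", pool))
```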
arXiv Detail & Related papers (2023-05-23T20:15:56Z)
- Finding Support Examples for In-Context Learning [73.90376920653507]
We propose LENS, a fiLter-thEN-Search method to tackle this challenge in two stages.
First, we filter the dataset to obtain individually informative in-context examples.
Then we propose diversity-guided example search which iteratively refines and evaluates the selected example permutations.
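A skeletal filter-then-search loop, with placeholder scoring functions standing in for LENS's model-based informativeness and prompt-evaluation measures:

```python
# Two-stage selection: filter individually informative examples, then
# search over orderings of a small shortlist with a set-level score.
from itertools import permutations

def info_score(example):
    """Placeholder for per-example informativeness (LENS uses LM feedback)."""
    return len(set(example.split()))  # crude proxy: lexical variety

def set_score(ordered_examples):
    """Placeholder prompt evaluation; position-weighted so the search over
    permutations is genuinely order-sensitive."""
    seen, gain = set(), 0.0
    for pos, ex in enumerate(ordered_examples):
        words = set(ex.split())
        gain += len(words - seen) / (pos + 1)  # novelty counts more up front
        seen |= words
    return gain

def filter_then_search(pool, keep=3, k=2):
    shortlist = sorted(pool, key=info_score, reverse=True)[:keep]  # stage 1
    return max(permutations(shortlist, k), key=set_score)          # stage 2

pool = ["cats chase mice", "dogs chase cats", "the the the", "birds sing songs"]
print(filter_then_search(pool))  # the degenerate example never survives stage 1
```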
arXiv Detail & Related papers (2023-02-27T06:32:45Z)
- Federated Learning Aggregation: New Robust Algorithms with Guarantees [63.96013144017572]
Federated learning has been recently proposed for distributed model training at the edge.
This paper presents a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms which are able to modify their model architecture by differentiating client contributions according to the value of their losses.
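The general shape of loss-aware aggregation can be sketched as a weighted parameter average whose weights depend on reported client losses; the softmax-over-negative-losses weighting below is one illustrative choice, not the paper's derived algorithms.

```python
# Loss-aware federated aggregation: lower-loss clients receive more weight.
import math

def aggregate(client_params, client_losses, temp=1.0):
    """client_params: list of parameter vectors (lists of floats)."""
    ws = [math.exp(-l / temp) for l in client_losses]
    z = sum(ws)
    ws = [w / z for w in ws]  # normalized aggregation weights
    dim = len(client_params[0])
    return [sum(w * p[i] for w, p in zip(ws, client_params)) for i in range(dim)]

params = [[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]  # third client diverged
losses = [0.2, 0.3, 5.0]                            # ...and reports high loss
print(aggregate(params, losses))  # stays close to the two healthy clients
```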
arXiv Detail & Related papers (2022-05-22T16:37:53Z)
- Weak Augmentation Guided Relational Self-Supervised Learning [80.0680103295137]
We introduce a novel relational self-supervised learning (ReSSL) framework that learns representations by modeling the relationship between different instances.
Our proposed method employs the sharpened distribution of pairwise similarities among different instances as the relation metric.
Experimental results show that our proposed ReSSL substantially outperforms the state-of-the-art methods across different network architectures.
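A minimal sketch of the relational objective: each instance's similarity distribution under a weak augmentation, sharpened with a low temperature, supervises the distribution under a strong augmentation. The temperatures and in-batch comparison are assumptions, and details such as the memory bank are omitted.

```python
# ReSSL-style relational alignment between weak and strong views.
import numpy as np

def relation_dist(z, anchor, temp):
    """Softmax over cosine similarities of `anchor` to every other row."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sims = z @ z[anchor]
    sims = np.delete(sims, anchor)          # exclude self-similarity
    e = np.exp(sims / temp)
    return e / e.sum()

rng = np.random.default_rng(0)
weak = rng.normal(size=(8, 16))                  # weakly augmented embeddings
strong = weak + 0.1 * rng.normal(size=(8, 16))   # strongly augmented views

p = relation_dist(weak, anchor=0, temp=0.04)     # sharpened target relation
q = relation_dist(strong, anchor=0, temp=0.1)    # student relation
loss = -np.sum(p * np.log(q))                    # cross-entropy alignment
print(loss)
```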
arXiv Detail & Related papers (2022-03-16T16:14:19Z)
- Named Entity Recognition and Relation Extraction using Enhanced Table Filling by Contextualized Representations [14.614028420899409]
The proposed method computes representations for entity mentions and long-range dependencies without complicated hand-crafted features or neural-network architectures.
We also adapt a tensor dot-product to predict relation labels all at once without resorting to history-based predictions or search strategies.
Despite its simplicity, the experimental results demonstrate that the proposed method outperforms the state-of-the-art methods on the CoNLL04 and ACE05 English datasets.
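The all-at-once relation scoring can be sketched as a bilinear form per relation label, computed for every token pair in a single tensor contraction; shapes are illustrative, and the paper's table-filling model has additional components.

```python
# Tensor dot-product relation scoring: every table cell in one shot.
import numpy as np

n, d, R = 5, 8, 3                      # tokens, hidden size, relation labels
rng = np.random.default_rng(0)
H = rng.normal(size=(n, d))            # contextualized token representations
W = rng.normal(size=(R, d, d))         # one bilinear matrix per relation

# scores[i, j, r] = H[i] @ W[r] @ H[j], with no history-based decoding.
scores = np.einsum("id,rde,je->ijr", H, W, H)
pred = scores.argmax(axis=-1)          # relation label per (i, j) table cell
print(scores.shape, pred.shape)        # (5, 5, 3) (5, 5)
```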
arXiv Detail & Related papers (2020-10-15T04:58:23Z)
- Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function [106.69643619725652]
We develop a training strategy that allows even a simple BiLSTM model, when trained with cross-entropy loss, to achieve competitive results.
We report state-of-the-art results for text classification task on several benchmark datasets.
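For reference, a minimal BiLSTM classifier of the kind the paper builds on, trained with plain cross-entropy; the vocabulary, layer sizes, and mean-pooling are illustrative assumptions, and the paper's mixed objective is omitted.

```python
# A simple BiLSTM text classifier trained with cross-entropy (PyTorch).
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_classes)  # 2x for both directions

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))  # (batch, seq, 2*hidden)
        return self.out(h.mean(dim=1))           # mean-pool over time

model = BiLSTMClassifier(vocab_size=10_000)
x = torch.randint(0, 10_000, (4, 32))            # batch of 4 token sequences
y = torch.tensor([0, 1, 1, 0])
loss = nn.functional.cross_entropy(model(x), y)  # plain cross-entropy loss
loss.backward()
print(float(loss))
```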
arXiv Detail & Related papers (2020-09-08T21:55:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.