Small Language Model Is a Good Guide for Large Language Model in Chinese Entity Relation Extraction
- URL: http://arxiv.org/abs/2402.14373v1
- Date: Thu, 22 Feb 2024 08:26:56 GMT
- Title: Small Language Model Is a Good Guide for Large Language Model in Chinese Entity Relation Extraction
- Authors: Xuemei Tang and Jun Wang and Qi Su
- Abstract summary: In this paper, we propose SLCoLM, a model collaboration framework, to mitigate the data long-tail problem.
We use the "Training-Guide-Predict" strategy to combine the strengths of pre-trained language models (PLMs) and large language models (LLMs).
Our experiments on an RE dataset rich in relation types show that our approach facilitates RE of long-tail relation types.
- Score: 13.344709924683471
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, large language models (LLMs) have been successful in relation extraction (RE) tasks, especially in few-shot learning. Long-tailed data is an important problem in RE, yet LLM-based approaches have so far paid it little attention. Therefore, in this paper, we propose SLCoLM, a model collaboration framework, to mitigate the data long-tail problem. In our framework, we use the "Training-Guide-Predict" strategy to combine the strengths of pre-trained language models (PLMs) and LLMs: a task-specific PLM acts as a tutor, transfers task knowledge to the LLM, and guides the LLM in performing RE tasks. Our experiments on an RE dataset rich in relation types show that our approach facilitates RE of long-tail relation types.
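To make the "Training-Guide-Predict" strategy concrete, here is a minimal Python sketch of the guiding step: a small task-specific model proposes candidate relation triples, and those candidates are embedded in the prompt the LLM uses for its final prediction. The function names, prompt wording, and toy data below are illustrative assumptions, not the paper's actual implementation.

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation type, tail entity)


def small_model_predict(sentence: str) -> List[Triple]:
    """Stand-in for the fine-tuned task-specific PLM (the "tutor").

    A real implementation would run a trained RE model; this stub returns
    a fixed toy triple so the sketch stays runnable.
    """
    return [("Du Fu", "birthplace", "Gongyi")]  # hypothetical output


def build_guided_prompt(sentence: str, candidates: List[Triple],
                        relation_types: List[str]) -> str:
    """Embed the tutor's candidate triples in the prompt given to the LLM."""
    candidate_lines = "\n".join(f"- ({h}, {r}, {t})" for h, r, t in candidates)
    return (
        "Task: extract (head, relation, tail) triples from the sentence.\n"
        f"Allowed relation types: {', '.join(relation_types)}\n"
        f"Sentence: {sentence}\n"
        "A task-specific small model suggests these candidate triples:\n"
        f"{candidate_lines}\n"
        "Verify, correct, or extend the candidates, then output the final triples."
    )


if __name__ == "__main__":
    sentence = "Du Fu was born in Gongyi, Henan."  # toy example sentence
    candidates = small_model_predict(sentence)
    prompt = build_guided_prompt(
        sentence, candidates, ["birthplace", "affiliation", "spouse"]
    )
    print(prompt)  # this guided prompt would then be sent to the LLM
```

Under this reading, the long-tail benefit would come from the tutor surfacing rare relation types as explicit candidates, so the LLM only has to verify or correct them rather than recall them unprompted.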
Related papers
- Are LLMs Good Annotators for Discourse-level Event Relation Extraction? [15.365993658296016]
Large Language Models (LLMs) have demonstrated proficiency in a wide array of natural language processing tasks.
Our study reveals a notable underperformance of LLMs compared to the baseline established through supervised learning.
arXiv Detail & Related papers (2024-07-28T19:27:06Z)
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
- Learning to Reduce: Towards Improving Performance of Large Language Models on Structured Data [39.29778853025738]
Large Language Models (LLMs) have been achieving competent performance on a wide range of downstream tasks.
This paper proposes a framework, Learning to Reduce, that fine-tunes a language model with On-Policy Learning to generate a reduced version of the input structured data.
arXiv Detail & Related papers (2024-07-03T01:51:50Z)
- Learning to Plan for Retrieval-Augmented Large Language Models from Knowledge Graphs [59.76268575344119]
We introduce a novel framework for enhancing large language models' (LLMs) planning capabilities by using planning data derived from knowledge graphs (KGs).
LLMs fine-tuned with KG data have improved planning capabilities, better equipping them to handle complex QA tasks that involve retrieval.
arXiv Detail & Related papers (2024-06-20T13:07:38Z)
- Towards Efficient LLM Grounding for Embodied Multi-Agent Collaboration [70.09561665520043]
We propose a novel framework for multi-agent collaboration that introduces Reinforced Advantage feedback (ReAd) for efficient self-refinement of plans.
We provide theoretical analysis by extending advantage-weighted regression in reinforcement learning to multi-agent systems.
Experiments on Overcooked-AI and a difficult variant of RoCoBench show that ReAd surpasses baselines in success rate, and significantly decreases the interaction steps of agents.
arXiv Detail & Related papers (2024-05-23T08:33:19Z)
- Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding [78.36702055076456]
This paper introduces Multi-scale Positional Encoding (Ms-PoE), a simple yet effective plug-and-play approach to enhance the capacity of LLMs to handle relevant information located in the middle of the context.
arXiv Detail & Related papers (2024-03-05T04:58:37Z)
- Unsupervised Information Refinement Training of Large Language Models for Retrieval-Augmented Generation [128.01050030936028]
We propose an information refinement training method named InFO-RAG.
InFO-RAG is low-cost and general across various tasks.
It improves the performance of LLaMA2 by an average of 9.39% relative points.
arXiv Detail & Related papers (2024-02-28T08:24:38Z)
- Learning to Reduce: Optimal Representations of Structured Data in Prompting Large Language Models [42.16047343029512]
Large Language Models (LLMs) have been widely used as general-purpose AI agents.
We propose a framework, Learning to Reduce, that fine-tunes a language model to generate a reduced version of an input context.
We show that our model achieves comparable accuracies in selecting the relevant evidence from an input context.
arXiv Detail & Related papers (2024-02-22T00:41:23Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emerging in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes to out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)