L3 Ensembles: Lifelong Learning Approach for Ensemble of Foundational
Language Models
- URL: http://arxiv.org/abs/2311.06493v1
- Date: Sat, 11 Nov 2023 06:59:50 GMT
- Title: L3 Ensembles: Lifelong Learning Approach for Ensemble of Foundational
Language Models
- Authors: Aidin Shiri, Kaushik Roy, Amit Sheth, Manas Gaur
- Abstract summary: We propose an approach that focuses on extracting meaningful representations from unseen data and constructing a structured knowledge base.
We conducted experiments on various NLP tasks to validate its effectiveness, including benchmarks like GLUE and SuperGLUE.
The proposed L3 ensemble method increases model accuracy by 4% to 36% compared to the fine-tuned FLM.
- Score: 15.726224465017596
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Fine-tuning pre-trained foundational language models (FLM) for specific tasks
is often impractical, especially for resource-constrained devices. This
necessitates the development of a Lifelong Learning (L3) framework that
continuously adapts to a stream of Natural Language Processing (NLP) tasks
efficiently. We propose an approach that focuses on extracting meaningful
representations from unseen data, constructing a structured knowledge base, and
improving task performance incrementally. We conducted experiments on various
NLP tasks to validate its effectiveness, including benchmarks like GLUE and
SuperGLUE. We measured good performance across the accuracy, training
efficiency, and knowledge transfer metrics. Initial experimental results show
that the proposed L3 ensemble method increases model accuracy by 4% to 36%
compared to the fine-tuned FLM. Furthermore, the L3 model outperforms naive
fine-tuning approaches while maintaining competitive or superior performance
(up to a 15.4% increase in accuracy) compared to the state-of-the-art language
model (T5) on the given task, the STS benchmark.
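The abstract describes the L3 ensemble as extracting representations from unseen data, maintaining a structured knowledge base, and adapting incrementally. A minimal routing sketch of that idea, assuming a knowledge base of (task embedding, specialist model) pairs and a cosine-similarity routing rule — the class name and routing heuristic are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

class L3EnsembleSketch:
    """Sketch of a lifelong ensemble: map task representations to
    specialist models and route new inputs to the closest specialist."""
    def __init__(self):
        self.kb = []  # knowledge base: list of (task_embedding, model) pairs

    def add_task(self, task_embedding, model):
        """Register a new task's representation and its specialist model."""
        self.kb.append((np.asarray(task_embedding, dtype=float), model))

    def route(self, embedding):
        """Return the specialist whose task embedding is most similar
        (cosine similarity) to the input embedding."""
        e = np.asarray(embedding, dtype=float)
        sims = [float(v @ e / (np.linalg.norm(v) * np.linalg.norm(e)))
                for v, _ in self.kb]
        return self.kb[int(np.argmax(sims))][1]
```

Under this sketch, incremental learning amounts to appending new (embedding, model) pairs without retraining existing specialists.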
Related papers
- A Comparative Analysis of Instruction Fine-Tuning LLMs for Financial Text Classification [0.8192907805418583]
Large Language Models (LLMs) have demonstrated impressive capabilities across diverse Natural Language Processing (NLP) tasks.
This study investigates the efficacy of instruction fine-tuning to enhance their performance in financial text classification tasks.
arXiv Detail & Related papers (2024-11-04T18:06:36Z)
- Enhancing SLM via ChatGPT and Dataset Augmentation [0.3844771221441211]
We employ knowledge distillation-based techniques and synthetic dataset augmentation to bridge the performance gap between large language models (LLMs) and small language models (SLMs).
Our methods involve two forms of rationale generation, information extraction and informed reasoning, to enrich the ANLI dataset.
Our findings reveal that the incorporation of synthetic rationales significantly improves the model's ability to comprehend natural language, leading to 1.3% and 2.3% higher classification accuracy, respectively, on the ANLI dataset.
arXiv Detail & Related papers (2024-09-19T09:24:36Z)
- Improving Retrieval Augmented Language Model with Self-Reasoning [20.715106330314605]
We propose a novel self-reasoning framework aimed at improving the reliability and traceability of RALMs.
The framework involves constructing self-reason trajectories with three processes: a relevance-aware process, an evidence-aware selective process, and a trajectory analysis process.
We have evaluated our framework across four public datasets to demonstrate the superiority of our method.
arXiv Detail & Related papers (2024-07-29T09:05:10Z)
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label smoothing value of training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
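The UAL summary describes adaptively setting the label-smoothing value per training sample according to its uncertainty. A minimal sketch of that idea, assuming a linear scaling rule and a `max_smoothing` cap — both are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def uncertainty_smoothed_targets(labels, uncertainties, num_classes,
                                 max_smoothing=0.2):
    """Build soft targets whose label-smoothing strength scales with each
    sample's estimated uncertainty: certain samples stay near one-hot,
    uncertain samples get more probability mass spread across classes."""
    targets = np.zeros((len(labels), num_classes))
    for i, (y, u) in enumerate(zip(labels, uncertainties)):
        eps = max_smoothing * float(np.clip(u, 0.0, 1.0))
        targets[i] = eps / num_classes   # spread eps uniformly over classes
        targets[i, y] += 1.0 - eps       # remaining mass on the gold label
    return targets
```

The soft targets can then be fed to an ordinary cross-entropy loss in place of one-hot labels.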
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
- TriSum: Learning Summarization Ability from Large Language Models with Structured Rationale [66.01943465390548]
We introduce TriSum, a framework for distilling large language models' text summarization abilities into a compact, local model.
Our method enhances local model performance on various benchmarks.
It also improves interpretability by providing insights into the summarization rationale.
arXiv Detail & Related papers (2024-03-15T14:36:38Z)
- Analyzing and Adapting Large Language Models for Few-Shot Multilingual NLU: Are We There Yet? [82.02076369811402]
Supervised fine-tuning (SFT), supervised instruction tuning (SIT) and in-context learning (ICL) are three alternative, de facto standard approaches to few-shot learning.
We present an extensive and systematic comparison of the three approaches, testing them on 6 high- and low-resource languages, three different NLU tasks, and a myriad of language and domain setups.
Our observations show that supervised instruction tuning has the best trade-off between performance and resource requirements.
arXiv Detail & Related papers (2024-03-04T10:48:13Z)
- EntGPT: Linking Generative Large Language Models with Knowledge Bases [9.067856411512427]
The ability of Large Language Models to generate factually correct output remains relatively unexplored.
We design a three-step hard-prompting method to probe LLMs' ED performance without supervised fine-tuning.
We further improve the knowledge grounding ability through instruction tuning (IT) with similar prompts and responses.
arXiv Detail & Related papers (2024-02-09T19:16:27Z)
- Accelerating LLaMA Inference by Enabling Intermediate Layer Decoding via Instruction Tuning with LITE [62.13435256279566]
Large Language Models (LLMs) have achieved remarkable performance across a wide variety of natural language tasks.
However, their large size makes their inference slow and computationally expensive.
We show that instruction tuning with LITE enables intermediate layers to acquire 'good' generation ability without affecting the generation ability of the final layer.
arXiv Detail & Related papers (2023-10-28T04:07:58Z)
- Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models [125.91897197446379]
We find that MoE models benefit more from instruction tuning than dense models.
Our most powerful model, FLAN-MOE-32B, surpasses the performance of FLAN-PALM-62B on four benchmark tasks.
arXiv Detail & Related papers (2023-05-24T04:22:26Z)
- Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding [62.17020485045456]
It is commonly assumed in semi-supervised learning (SSL) that the unlabeled data are drawn from the same distribution as that of the labeled ones.
We propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data would be prioritized.
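The open-set SSL summary describes selectively utilizing unlabeled data through sample weighting so that only conducive samples are prioritized. A toy sketch of such a weighting rule, assuming a max-probability confidence heuristic with a threshold `tau` — both assumptions for illustration, not the paper's actual method:

```python
import numpy as np

def weight_unlabeled(probs, tau=0.7):
    """Weight each unlabeled sample by its max predicted class probability,
    zeroing out low-confidence samples (likely out-of-distribution) so they
    do not contribute to the semi-supervised loss."""
    conf = probs.max(axis=1)               # per-sample confidence
    return np.where(conf >= tau, conf, 0.0)
```

The returned weights would multiply each sample's unsupervised loss term, so out-of-distribution samples are effectively ignored.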
arXiv Detail & Related papers (2022-05-02T16:09:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.