DiversiGATE: A Comprehensive Framework for Reliable Large Language
Models
- URL: http://arxiv.org/abs/2306.13230v2
- Date: Mon, 26 Jun 2023 20:55:56 GMT
- Authors: Shima Imani, Ali Beyram, Harsh Shrivastava
- Abstract summary: We introduce DiversiGATE, a unified framework that consolidates diverse methodologies for LLM verification.
We propose a novel `SelfLearner' model that conforms to the DiversiGATE framework and refines its performance over time.
Our results demonstrate that our approach outperforms traditional LLMs, improving accuracy on the GSM8K benchmark from 54.8% to 61.8%.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we introduce DiversiGATE, a unified framework that
consolidates diverse methodologies for LLM verification. The proposed framework
comprises two main components, Diversification and Aggregation, which together
provide a holistic perspective on existing verification approaches such as
Self-Consistency, MathPrompter, and WebGPT. Furthermore, we propose a novel
`SelfLearner' model that conforms to the DiversiGATE framework and can learn
from its own outputs, refining its performance over time and leading to improved
accuracy. To evaluate the effectiveness of SelfLearner, we conducted a rigorous
series of experiments, including tests on synthetic data as well as on popular
arithmetic reasoning benchmarks such as GSM8K. Our results demonstrate that our
approach outperforms traditional LLMs, improving accuracy on the GSM8K benchmark
from 54.8% to 61.8%.
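The Diversification and Aggregation pipeline described in the abstract can be sketched as a Self-Consistency-style majority vote: sample several independent answers from the model, then aggregate by picking the most frequent one. This is a minimal illustration only, not the paper's implementation; the `sample` callable and the stub sampler below are hypothetical stand-ins for an LLM queried at nonzero temperature.

```python
import itertools
from collections import Counter
from typing import Callable, List


def diversigate_answer(sample: Callable[[], str], n_samples: int = 5) -> str:
    """Diversification: draw several independent answers.
    Aggregation: return the majority-vote winner."""
    answers: List[str] = [sample() for _ in range(n_samples)]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner


# Hypothetical stand-in for a stochastic LLM: cycles through canned answers.
_canned = itertools.cycle(["42", "42", "17", "42", "42"])


def fake_llm_sample() -> str:
    return next(_canned)


print(diversigate_answer(fake_llm_sample, n_samples=5))  # prints 42
```

In a real setting, `sample` would wrap an LLM API call, and the aggregation step could be replaced by any of the verification schemes the framework consolidates (e.g. answer verification as in WebGPT) rather than a plain majority vote.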
Related papers
- A Large-Scale Evaluation of Speech Foundation Models
We establish the Speech processing Universal PERformance Benchmark (SUPERB) to study the effectiveness of the foundation model paradigm for speech.
We propose a unified multi-tasking framework to address speech processing tasks in SUPERB using a frozen foundation model followed by task-specialized, lightweight prediction heads.
arXiv Detail & Related papers (2024-04-15T00:03:16Z)
- How Far Are We on the Decision-Making of LLMs? Evaluating LLMs' Gaming Ability in Multi-Agent Environments
This research investigates Large Language Models' decision-making capabilities through the lens of Game Theory.
We focus specifically on games that support the participation of more than two agents simultaneously.
We introduce our framework, GAMA-Bench, including eight classical multi-agent games.
arXiv Detail & Related papers (2024-03-18T14:04:47Z)
- Self-Supervised Representation Learning with Meta Comprehensive Regularization
We introduce a module called CompMod with Meta Comprehensive Regularization (MCR), embedded into existing self-supervised frameworks.
We update our proposed model through a bi-level optimization mechanism, enabling it to capture comprehensive features.
We provide theoretical support for our proposed method from information theory and causal counterfactual perspective.
arXiv Detail & Related papers (2024-03-03T15:53:48Z)
- MR-GSM8K: A Meta-Reasoning Benchmark for Large Language Model Evaluation
We introduce a novel evaluation paradigm for Large Language Models (LLMs).
This paradigm shifts the emphasis from result-oriented assessments, which often neglect the reasoning process, to a more comprehensive evaluation.
By applying this paradigm in the GSM8K dataset, we have developed the MR-GSM8K benchmark.
arXiv Detail & Related papers (2023-12-28T15:49:43Z)
- BAL: Balancing Diversity and Novelty for Active Learning
We introduce a novel framework, Balancing Active Learning (BAL), which constructs adaptive sub-pools to balance diverse and uncertain data.
Our approach outperforms all established active learning methods on widely recognized benchmarks by 1.20%.
arXiv Detail & Related papers (2023-12-26T08:14:46Z)
- BLESS: Benchmarking Large Language Models on Sentence Simplification
We present BLESS, a performance benchmark of the most recent state-of-the-art large language models (LLMs) on the task of text simplification (TS).
We assess a total of 44 models, differing in size, architecture, pre-training methods, and accessibility, on three test sets from different domains (Wikipedia, news, and medical) under a few-shot setting.
Our evaluation indicates that the best LLMs, despite not being trained on TS, perform comparably with state-of-the-art TS baselines.
arXiv Detail & Related papers (2023-10-24T12:18:17Z)
- Automatic Model Selection with Large Language Models for Reasoning
Chain-of-Thought (CoT) and Program-Aided Language Models (PAL) represent two distinct reasoning methods.
We introduce a model selection method to combine the best of both worlds by employing a large language model.
Our proposed method demonstrates significant performance improvements across eight reasoning datasets.
arXiv Detail & Related papers (2023-05-23T17:57:59Z)
- Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning
In recent years, pre-trained large language models (LLMs) have demonstrated remarkable efficiency in achieving an inference-time few-shot learning capability known as in-context learning.
This study aims to examine the in-context learning phenomenon through a Bayesian lens, viewing real-world LLMs as latent variable models.
arXiv Detail & Related papers (2023-01-27T18:59:01Z)
- GEDI: GEnerative and DIscriminative Training for Self-Supervised Learning
We study state-of-the-art self-supervised learning objectives and propose a unified formulation based on likelihood learning.
We refer to this combined framework as GEDI, which stands for GEnerative and DIscriminative training.
We show that GEDI outperforms existing self-supervised learning strategies in terms of clustering performance by a wide margin.
arXiv Detail & Related papers (2022-12-27T09:33:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.