Structured Thoughts Automaton: First Formalized Execution Model for
Auto-Regressive Language Models
- URL: http://arxiv.org/abs/2306.10196v1
- Date: Fri, 16 Jun 2023 22:04:50 GMT
- Authors: Tristan Vanderbruggen, Chunhua Liao, Peter Pirkelbauer, Pei-Hung Lin
- Abstract summary: We introduce a new algorithm for sampling the predictions of LMs, which we use to build a reliable and inspectable execution model.
We introduce a low-level language to write "cognitive programs" for this execution model.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent months, Language Models (LMs) have become a part of daily
discourse, with a focus on OpenAI and the potential of Artificial General
Intelligence (AGI). Furthermore, the leak of LLaMA's weights to the public
has led to an influx of innovations demonstrating the impressive capabilities
of generative LMs. While we believe that AGI is still a distant goal, we
recognize the potential of LMs for tasks such as searching complex
documents, compiling reports with basic analysis, and providing assistance in
problem-solving. In this paper, we propose formalizing the execution model of
language models. We survey current execution models, find that this
formalism has received little attention, and present our contribution: the
first formalized execution model for LMs. We introduce a new algorithm for
sampling the predictions of LMs, which we use to build a reliable and
inspectable execution model, and a low-level language for writing
"cognitive programs" for this execution model. We hope to shed light on the
need for execution models for LMs and to encourage further research in this area.
Related papers
- Scaling Diffusion Language Models via Adaptation from Autoregressive Models [105.70889434492143]
Diffusion Language Models (DLMs) have emerged as a promising new paradigm for text generative modeling.
We show that we can convert AR models ranging from 127M to 7B parameters into diffusion models DiffuGPT and DiffuLLaMA, using less than 200B tokens for training.
Our experimental results reveal that these models outperform earlier DLMs and are competitive with their AR counterparts.
arXiv Detail & Related papers (2024-10-23T14:04:22Z)
- Cognitive Modeling with Scaffolded LLMs: A Case Study of Referential Expression Generation [5.5711773076846365]
We explore a neuro-symbolic implementation of an algorithmic cognitive model of referential expression generation.
We find that our hybrid approach is cognitively plausible and performs well in complex contexts.
arXiv Detail & Related papers (2024-07-04T10:28:48Z)
- Collective Constitutional AI: Aligning a Language Model with Public Input [20.95333081841239]
There is growing consensus that language model (LM) developers should not be the sole deciders of LM behavior.
We present Collective Constitutional AI (CCAI): a multi-stage process for sourcing and integrating public input into LMs.
We demonstrate the real-world practicality of this approach by creating what is, to our knowledge, the first LM fine-tuned with collectively sourced public input.
arXiv Detail & Related papers (2024-06-12T02:20:46Z)
- CLOMO: Counterfactual Logical Modification with Large Language Models [109.60793869938534]
We introduce a novel task, Counterfactual Logical Modification (CLOMO), and a high-quality human-annotated benchmark.
In this task, LLMs must adeptly alter a given argumentative text to uphold a predetermined logical relationship.
We propose an innovative evaluation metric, the Self-Evaluation Score (SES), to directly evaluate the natural language output of LLMs.
arXiv Detail & Related papers (2023-11-29T08:29:54Z)
- L2CEval: Evaluating Language-to-Code Generation Capabilities of Large Language Models [102.00201523306986]
We present L2CEval, a systematic evaluation of the language-to-code generation capabilities of large language models (LLMs).
We analyze the factors that potentially affect their performance, such as model size, pretraining data, instruction tuning, and different prompting methods.
In addition to assessing model performance, we measure confidence calibration for the models and conduct human evaluations of the output programs.
arXiv Detail & Related papers (2023-09-29T17:57:00Z)
- Evaluating and Explaining Large Language Models for Code Using Syntactic Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
arXiv Detail & Related papers (2023-08-07T18:50:57Z)
- The False Promise of Imitating Proprietary LLMs [158.65692029352584]
An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model.
This approach looks to cheaply imitate the proprietary model's capabilities using a weaker open-source model.
We first finetune a series of LMs that imitate ChatGPT using varying base model sizes.
We then evaluate the models using crowd raters and canonical NLP benchmarks.
arXiv Detail & Related papers (2023-05-25T05:00:12Z)
- Augmented Language Models: a Survey [55.965967655575454]
This survey reviews works in which language models (LMs) are augmented with reasoning skills and the ability to use tools.
We refer to them as Augmented Language Models (ALMs).
The missing token objective allows ALMs to learn to reason, use tools, and even act, while still performing standard natural language tasks.
arXiv Detail & Related papers (2023-02-15T18:25:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.