Embers of Autoregression: Understanding Large Language Models Through
the Problem They are Trained to Solve
- URL: http://arxiv.org/abs/2309.13638v1
- Date: Sun, 24 Sep 2023 13:35:28 GMT
- Title: Embers of Autoregression: Understanding Large Language Models Through
the Problem They are Trained to Solve
- Authors: R. Thomas McCoy, Shunyu Yao, Dan Friedman, Matthew Hardy, Thomas L.
Griffiths
- Abstract summary: We make predictions about the strategies that large language models will adopt as a result of being trained on next-word prediction.
We evaluate two LLMs on eleven tasks and find robust evidence that LLMs are influenced by probability.
We conclude that we should not evaluate LLMs as if they are humans but should instead treat them as a distinct type of system.
- Score: 21.55766758950951
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The widespread adoption of large language models (LLMs) makes it important to
recognize their strengths and limitations. We argue that in order to develop a
holistic understanding of these systems we need to consider the problem that
they were trained to solve: next-word prediction over Internet text. By
recognizing the pressures that this task exerts we can make predictions about
the strategies that LLMs will adopt, allowing us to reason about when they will
succeed or fail. This approach - which we call the teleological approach -
leads us to identify three factors that we hypothesize will influence LLM
accuracy: the probability of the task to be performed, the probability of the
target output, and the probability of the provided input. We predict that LLMs
will achieve higher accuracy when these probabilities are high than when they
are low - even in deterministic settings where probability should not matter.
To test our predictions, we evaluate two LLMs (GPT-3.5 and GPT-4) on eleven
tasks, and we find robust evidence that LLMs are influenced by probability in
the ways that we have hypothesized. In many cases, the experiments reveal
surprising failure modes. For instance, GPT-4's accuracy at decoding a simple
cipher is 51% when the output is a high-probability word sequence but only 13%
when it is low-probability. These results show that AI practitioners should be
careful about using LLMs in low-probability situations. More broadly, we
conclude that we should not evaluate LLMs as if they are humans but should
instead treat them as a distinct type of system - one that has been shaped by
its own particular set of pressures.
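The cipher result in the abstract implies a concrete evaluation recipe: encode sentences with a simple cipher, ask the model to decode them, and compare accuracy on high- versus low-probability target text. Below is a minimal illustrative sketch of such a harness, not the paper's own code: rot13 stands in for the "simple cipher", and query_model is a hypothetical placeholder for whichever LLM API (e.g., GPT-3.5 or GPT-4) is being evaluated.

```python
# A minimal illustrative harness, NOT the paper's code: rot13 stands in for
# the "simple cipher" mentioned in the abstract, and query_model is a
# hypothetical placeholder for the LLM API under test.
import codecs


def query_model(prompt: str) -> str:
    """Placeholder: send `prompt` to the LLM under test and return its reply."""
    raise NotImplementedError("wire this up to an actual LLM API")


# High-probability targets are ordinary English sentences; low-probability
# targets scramble the same words so the correct output is improbable text.
HIGH_PROB_TARGETS = [
    "the weather was nice so we decided to go for a walk",
]
LOW_PROB_TARGETS = [
    "walk nice the so was weather go for we decided to a",
]


def decode_accuracy(targets: list[str]) -> float:
    correct = 0
    for target in targets:
        encoded = codecs.encode(target, "rot13")  # build the cipher text
        prompt = (
            "Decode the following rot13 message. "
            f"Reply with only the decoded text.\n{encoded}"
        )
        prediction = query_model(prompt).strip().lower()
        correct += int(prediction == target)
    return correct / len(targets)


# The teleological prediction: decode_accuracy(HIGH_PROB_TARGETS) should be
# much higher than decode_accuracy(LOW_PROB_TARGETS), even though rot13
# decoding is deterministic (the abstract reports 51% vs. 13% for GPT-4).
```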
Related papers
- Predicting Emergent Capabilities by Finetuning [98.9684114851891]
We find that finetuning language models can shift the point in scaling at which emergence occurs towards less capable models.
We validate this approach using four standard NLP benchmarks.
We find that, in some cases, we can accurately predict whether models trained with up to 4x more compute have emerged.
arXiv Detail & Related papers (2024-11-25T01:48:09Z)
- Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models [79.76293901420146]
Large Language Models (LLMs) are employed across various high-stakes domains, where the reliability of their outputs is crucial.
Our research investigates the fragility of uncertainty estimation and explores potential attacks.
We demonstrate that an attacker can embed a backdoor in LLMs, which, when activated by a specific trigger in the input, manipulates the model's uncertainty without affecting the final output.
arXiv Detail & Related papers (2024-07-15T23:41:11Z)
- Do Large Language Models Exhibit Cognitive Dissonance? Studying the Difference Between Revealed Beliefs and Stated Answers [13.644277507363036]
We investigate whether these abilities are measurable outside of tailored prompting and multiple-choice questions (MCQs).
Our findings suggest that the Revealed Belief of LLMs significantly differs from their Stated Answer.
As text completion is at the core of LLMs, these results suggest that common evaluation methods may only provide a partial picture.
arXiv Detail & Related papers (2024-06-21T08:56:35Z)
- Cycles of Thought: Measuring LLM Confidence through Stable Explanations [53.15438489398938]
Large language models (LLMs) can reach and even surpass human-level accuracy on a variety of benchmarks, but their overconfidence in incorrect responses is still a well-documented failure mode.
We propose a framework for measuring an LLM's uncertainty with respect to the distribution of generated explanations for an answer.
arXiv Detail & Related papers (2024-06-05T16:35:30Z)
- Evaluating Uncertainty-based Failure Detection for Closed-Loop LLM Planners [10.746821861109176]
Large Language Models (LLMs) have shown remarkable performance as zero-shot task planners for robotic tasks.
However, the open-loop nature of previous works makes LLM-based planning error-prone and fragile.
In this work, we introduce a framework for closed-loop LLM-based planning called KnowLoop, backed by an uncertainty-based MLLM failure detector.
arXiv Detail & Related papers (2024-06-01T12:52:06Z)
- "I'm Not Sure, But...": Examining the Impact of Large Language Models' Uncertainty Expression on User Reliance and Trust [51.542856739181474]
We show how different natural language expressions of uncertainty impact participants' reliance, trust, and overall task performance.
We find that first-person expressions decrease participants' confidence in the system and tendency to agree with the system's answers, while increasing participants' accuracy.
Our findings suggest that using natural language expressions of uncertainty may be an effective approach for reducing overreliance on LLMs, but that the precise language used matters.
arXiv Detail & Related papers (2024-05-01T16:43:55Z)
- Evaluation and Improvement of Fault Detection for Large Language Models [30.760472387136954]
This paper investigates the effectiveness of existing fault detection methods for large language models (LLMs).
We propose MuCS, a prompt Mutation-based prediction Confidence Smoothing framework, to boost the fault detection capability of existing methods.
arXiv Detail & Related papers (2024-04-14T07:06:12Z)
- Making Pre-trained Language Models both Task-solvers and Self-calibrators [52.98858650625623]
Pre-trained language models (PLMs) serve as backbones for various real-world systems.
Previous work shows that introducing an extra calibration task can mitigate the miscalibration of their confidence estimates.
We propose a training algorithm, LM-TOAST, to tackle these challenges.
arXiv Detail & Related papers (2023-07-21T02:51:41Z)
- Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs [60.61002524947733]
Previous confidence elicitation methods rely on white-box access to internal model information or model fine-tuning.
This leads to a growing need to explore the untapped area of black-box approaches for uncertainty estimation.
We define a systematic framework with three components: prompting strategies for eliciting verbalized confidence, sampling methods for generating multiple responses, and aggregation techniques for computing consistency (a minimal illustrative sketch of this pipeline follows below).
arXiv Detail & Related papers (2023-06-22T17:31:44Z)
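The last entry above describes a three-part black-box recipe: prompt the model for a verbalized confidence, sample several responses, and aggregate their consistency. The sketch below is a minimal, hypothetical rendering of that pipeline, not the paper's implementation; sample_model, the prompt wording, and the aggregation rule are illustrative assumptions.

```python
# A minimal hypothetical rendering of a black-box confidence-elicitation
# pipeline (verbalized-confidence prompting + multi-sample generation +
# consistency aggregation). sample_model, the prompt wording, and the
# aggregation rule are illustrative assumptions, not the paper's own code.
import re
from collections import Counter


def sample_model(prompt: str) -> str:
    """Placeholder: one sampled completion from a black-box LLM."""
    raise NotImplementedError("wire this up to an actual LLM API")


PROMPT_TEMPLATE = (
    "{question}\n"
    "Give your final answer on the first line, then a second line of the form "
    "'Confidence: N' where N is a number from 0 to 100."
)


def elicit_confidence(question: str, n_samples: int = 5) -> tuple[str, float]:
    answers, verbalized = [], []
    for _ in range(n_samples):
        reply = sample_model(PROMPT_TEMPLATE.format(question=question))
        lines = reply.strip().splitlines()
        answers.append(lines[0].strip() if lines else "")
        match = re.search(r"Confidence:\s*(\d+)", reply)
        verbalized.append(int(match.group(1)) / 100 if match else 0.0)

    # Consistency = fraction of samples agreeing with the majority answer;
    # here it is simply averaged with the mean verbalized confidence.
    majority, count = Counter(answers).most_common(1)[0]
    consistency = count / n_samples
    confidence = (consistency + sum(verbalized) / n_samples) / 2
    return majority, confidence
```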
This list is automatically generated from the titles and abstracts of the papers on this site.