Is Bigger and Deeper Always Better? Probing LLaMA Across Scales and Layers
- URL: http://arxiv.org/abs/2312.04333v4
- Date: Tue, 9 Jan 2024 07:33:07 GMT
- Title: Is Bigger and Deeper Always Better? Probing LLaMA Across Scales and Layers
- Authors: Nuo Chen, Ning Wu, Shining Liang, Ming Gong, Linjun Shou, Dongmei Zhang, Jia Li
- Abstract summary: This paper focuses on LLaMA, a prominent open-source foundational model in natural language processing.
Instead of assessing LLaMA through its generative output, we design multiple-choice tasks to probe its intrinsic understanding.
We unveil several key and uncommon findings based on the designed probing tasks.
- Score: 73.28459749681879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents an in-depth analysis of Large Language Models (LLMs),
focusing on LLaMA, a prominent open-source foundational model in natural
language processing. Instead of assessing LLaMA through its generative output,
we design multiple-choice tasks to probe its intrinsic understanding in
high-order tasks such as reasoning and computation. We examine the model
horizontally, comparing different sizes, and vertically, assessing different
layers. We unveil several key and uncommon findings based on the designed
probing tasks: (1) Horizontally, enlarging the model size alone does not
automatically impart additional knowledge or computational prowess; it can,
however, enhance reasoning ability, especially in math problem solving, and
help reduce hallucinations, but only beyond certain size thresholds. (2)
Vertically, the lower layers of LLaMA lack substantial arithmetic and factual
knowledge but showcase logical thinking, multilingual, and recognition
abilities, while the top layers house most of the computational power and
real-world knowledge.
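
As a rough illustration of the layer-wise probing setup, the sketch below extracts per-layer hidden states from a LLaMA-style checkpoint and fits a linear probe at each layer. The checkpoint name and the toy true/false items are placeholders, not the paper's actual tasks or classifier.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical stand-in checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

# Toy true/false items standing in for the paper's multiple-choice probes.
items = [("2 + 2 = 4. True or False?", 1),
         ("2 + 2 = 5. True or False?", 0)]
labels = [y for _, y in items]

features = {}  # layer index -> list of last-token hidden states
for text, _ in items:
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # out.hidden_states holds the embedding output plus every decoder layer.
    for layer, h in enumerate(out.hidden_states):
        features.setdefault(layer, []).append(h[0, -1].float().numpy())

# One linear probe per layer; the layer at which accuracy rises is where the
# probed ability emerges (a real study would use a held-out split).
for layer, feats in sorted(features.items()):
    probe = LogisticRegression(max_iter=1000).fit(feats, labels)
    print(f"layer {layer}: train acc {probe.score(feats, labels):.2f}")
```

Comparing probe accuracy across layers is what lets the vertical analysis localize where abilities such as arithmetic or factual recall appear.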
Related papers
- Hopping Too Late: Exploring the Limitations of Large Language Models on Multi-Hop Queries [39.438904598467154]
We study how large language models (LLMs) solve complex multi-step problems.
Understanding how the latent step is computed internally is key to understanding the overall computation.
We propose a novel "back-patching" analysis method whereby a hidden representation from a later layer is patched back to an earlier layer.
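A minimal sketch of the back-patching idea, using PyTorch forward hooks on a HuggingFace-style decoder whose blocks live in `model.model.layers` and return tuples whose first element is the hidden states; the checkpoint, layer indices, and prompt are illustrative assumptions, not the paper's configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()
ids = tok("The capital of the country north of the USA is", return_tensors="pt")

late, early = 24, 8  # patch layer 24's output back into layer 8 (illustrative)
captured = {}

def capture(_module, _inp, out):
    captured["h"] = out[0].detach()

def patch(_module, _inp, out):
    # Returning a value from a forward hook replaces the module's output.
    return (captured["h"],) + out[1:]

# Pass 1: record the later layer's hidden representation.
h1 = model.model.layers[late].register_forward_hook(capture)
with torch.no_grad():
    model(**ids)
h1.remove()

# Pass 2: overwrite the earlier layer's output with the captured state.
h2 = model.model.layers[early].register_forward_hook(patch)
with torch.no_grad():
    logits = model(**ids).logits
h2.remove()
print(tok.decode(logits[0, -1].argmax(-1).item()))
```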
arXiv Detail & Related papers (2024-06-18T16:44:13Z)
- Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers? [57.04803703952721]
Large language models (LLMs) have shown remarkable performances across a wide range of tasks.
However, the mechanisms by which these models encode tasks of varying complexities remain poorly understood.
We introduce the idea of "Concept Depth" to suggest that more complex concepts are typically acquired in deeper layers.
arXiv Detail & Related papers (2024-04-10T14:56:40Z)
- FAC$^2$E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition [56.76951887823882]
Large language models (LLMs) are primarily evaluated by overall performance on various text understanding and generation tasks.
We present FAC$^2$E, a framework for Fine-grAined and Cognition-grounded LLMs' Capability Evaluation.
arXiv Detail & Related papers (2024-02-29T21:05:37Z)
- How Large Language Models Encode Context Knowledge? A Layer-Wise Probing Study [27.23388511249688]
This paper investigates the layer-wise capability of large language models to encode knowledge.
We leverage the powerful generative capability of ChatGPT to construct probing datasets.
Experiments on conflicting and newly acquired knowledge show that LLMs prefer to encode more context knowledge in the upper layers.
arXiv Detail & Related papers (2024-02-25T11:15:42Z)
- TinyLLM: Learning a Small Student from Multiple Large Language Models [23.736611338497244]
TinyLLM is a new knowledge distillation paradigm to learn a small student LLM from multiple large teacher LLMs.
We introduce an in-context example generator and a teacher-forcing Chain-of-Thought strategy to ensure that the rationales are accurate and grounded in contextually appropriate scenarios.
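A hedged sketch of the multi-teacher idea: each teacher contributes a rationale, and the student is fine-tuned on the rationale-augmented targets with ordinary next-token cross-entropy. The stand-in models and hard-coded rationales below are illustrative only, not the paper's setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

student = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in student
tok = AutoTokenizer.from_pretrained("gpt2")
opt = torch.optim.AdamW(student.parameters(), lr=1e-5)

question = "Q: If a pen costs 2 dollars, what do 3 pens cost?"
# In the paper these rationales come from multiple large teacher LLMs
# prompted with in-context examples; here they are hard-coded stand-ins.
teacher_rationales = [
    "Each pen costs 2 dollars, so 3 pens cost 3 * 2 = 6 dollars. A: 6",
    "3 pens at 2 dollars each is 6 dollars. A: 6",
]

student.train()
for rationale in teacher_rationales:
    ids = tok(question + "\n" + rationale, return_tensors="pt").input_ids
    # labels == input_ids yields the shifted next-token cross-entropy loss
    loss = student(ids, labels=ids).loss
    loss.backward()
opt.step()
opt.zero_grad()
```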
arXiv Detail & Related papers (2024-02-07T06:48:24Z)
- LLaMA Pro: Progressive LLaMA with Block Expansion [66.39213657252279]
We propose a new post-pretraining method for Large Language Models (LLMs) with an expansion of Transformer blocks.
We tune the expanded blocks using only the new corpus, efficiently and effectively improving the model's knowledge without catastrophic forgetting.
In this paper, we experiment on the corpus of code and math, yielding LLaMA Pro-8.3B, a versatile foundation model from LLaMA2-7B.
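A minimal sketch of block expansion under stated assumptions: new decoder blocks are interleaved into a HuggingFace-style LLaMA stack, zero-initialized so they start as identities, and only the new blocks are left trainable. The interleaving period and the choice of zeroed projections follow the general idea, not necessarily LLaMA Pro's exact recipe.

```python
import copy
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
layers = model.model.layers

expanded = []
for i, block in enumerate(layers):
    expanded.append(block)
    if (i + 1) % 8 == 0:  # insert one new block after every 8 originals
        new_block = copy.deepcopy(block)
        # Zero the output projections so the new block starts as a no-op:
        # both of its residual branches contribute nothing at initialization.
        torch.nn.init.zeros_(new_block.self_attn.o_proj.weight)
        torch.nn.init.zeros_(new_block.mlp.down_proj.weight)
        expanded.append(new_block)

# KV-cache bookkeeping (per-layer indices) is omitted in this sketch.
model.model.layers = torch.nn.ModuleList(expanded)
model.config.num_hidden_layers = len(expanded)

# Freeze everything except the newly added blocks, then continue
# pretraining on the new corpus (training loop omitted).
for p in model.parameters():
    p.requires_grad = False
for i, block in enumerate(model.model.layers):
    if (i + 1) % 9 == 0:  # after interleaving, every 9th block is new
        for p in block.parameters():
            p.requires_grad = True
```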
arXiv Detail & Related papers (2024-01-04T18:59:12Z)
- GeomVerse: A Systematic Evaluation of Large Models for Geometric Reasoning [17.61621287003562]
We evaluate vision language models (VLMs) along various axes through the lens of geometry problems.
We procedurally create a synthetic dataset of geometry questions with controllable difficulty levels along multiple axes.
The empirical results obtained using our benchmark for state-of-the-art VLMs indicate that these models are less capable in subjects like geometry than other benchmarks suggest.
arXiv Detail & Related papers (2023-12-19T15:25:39Z)
- CLadder: Assessing Causal Reasoning in Language Models [82.8719238178569]
We investigate whether large language models (LLMs) can coherently reason about causality.
We propose a new NLP task, causal inference in natural language, inspired by the "causal inference engine" postulated by Judea Pearl et al.
arXiv Detail & Related papers (2023-12-07T15:12:12Z)
- DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models [79.01926242857613]
Large language models (LLMs) are prone to hallucinations, generating content that deviates from facts seen during pretraining.
We propose a simple decoding strategy for reducing hallucinations with pretrained LLMs.
We find that this Decoding by Contrasting Layers (DoLa) approach is able to better surface factual knowledge and reduce the generation of incorrect facts.
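A minimal sketch of the layer-contrast idea: project an early "premature" layer and the final "mature" layer through the shared lm_head, and score next tokens by the difference of their log-probabilities. The checkpoint and early-exit layer are illustrative; DoLa's dynamic layer selection and candidate pruning are omitted.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

ids = tok("The capital of Washington state is", return_tensors="pt")
with torch.no_grad():
    out = model(**ids, output_hidden_states=True)

mature = out.hidden_states[-1][:, -1]     # final-layer state (already normed)
premature = out.hidden_states[16][:, -1]  # early-exit layer (illustrative)
# Applying the model's final norm to the premature state mimics an early exit.
premature = model.model.norm(premature)

log_p_mature = torch.log_softmax(model.lm_head(mature), dim=-1)
log_p_premature = torch.log_softmax(model.lm_head(premature), dim=-1)
contrast = log_p_mature - log_p_premature  # DoLa-style contrastive scores
print(tok.decode(contrast.argmax(-1).item()))
```

Intuitively, tokens whose probability grows between the early and final layers are those the model "learned" in its upper, more factual layers, so the contrast suppresses guesses already favored early on.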
arXiv Detail & Related papers (2023-09-07T17:45:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.