Arithmetic Without Algorithms: Language Models Solve Math With a Bag of Heuristics
- URL: http://arxiv.org/abs/2410.21272v1
- Date: Mon, 28 Oct 2024 17:59:06 GMT
- Title: Arithmetic Without Algorithms: Language Models Solve Math With a Bag of Heuristics
- Authors: Yaniv Nikankin, Anja Reusch, Aaron Mueller, Yonatan Belinkov
- Abstract summary: We show that large language models (LLMs) perform arithmetic using neither robust algorithms nor memorization; instead, they rely on a "bag of heuristics".
- Abstract: Do large language models (LLMs) solve reasoning tasks by learning robust generalizable algorithms, or do they memorize training data? To investigate this question, we use arithmetic reasoning as a representative task. Using causal analysis, we identify a subset of the model (a circuit) that explains most of the model's behavior for basic arithmetic logic and examine its functionality. By zooming in on the level of individual circuit neurons, we discover a sparse set of important neurons that implement simple heuristics. Each heuristic identifies a numerical input pattern and outputs corresponding answers. We hypothesize that the combination of these heuristic neurons is the mechanism used to produce correct arithmetic answers. To test this, we categorize each neuron into several heuristic types, such as neurons that activate when an operand falls within a certain range, and find that the unordered combination of these heuristic types is the mechanism that explains most of the model's accuracy on arithmetic prompts. Finally, we demonstrate that this mechanism appears as the main source of arithmetic accuracy early in training. Overall, our experimental results across several LLMs show that LLMs perform arithmetic using neither robust algorithms nor memorization; rather, they rely on a "bag of heuristics".
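To make the mechanism concrete, here is a minimal, hypothetical sketch of a "bag of heuristics" for two-operand addition. The rectangle and unit-digit patterns below are invented stand-ins for the operand-range and modular heuristics the paper identifies in MLP neurons; this illustrates the combination mechanism, not the actual circuit.

```python
from collections import defaultdict

def pair_range_heuristic(a_lo, a_hi, b_lo, b_hi, boost=1.0):
    """Fires when (a, b) falls in a rectangle of operand space and votes
    for every sum such a pair could produce."""
    def h(a, b, scores):
        if a_lo <= a < a_hi and b_lo <= b < b_hi:
            for cand in range(a_lo + b_lo, a_hi + b_hi - 1):
                scores[cand] += boost
    return h

def unit_digit_heuristic(boost=1.0):
    """Fires on the operands' last digits and votes for every answer whose
    unit digit equals (a + b) % 10."""
    def h(a, b, scores):
        for cand in range(200):
            if cand % 10 == (a + b) % 10:
                scores[cand] += boost
    return h

# A bag of overlapping, individually unreliable patterns: coarse and fine
# operand rectangles plus one modular heuristic. No member computes a sum.
BAG = (
    [pair_range_heuristic(a0, a0 + 10, b0, b0 + 10)
     for a0 in range(0, 100, 10) for b0 in range(0, 100, 10)]
    + [pair_range_heuristic(a0, a0 + 5, b0, b0 + 5, boost=2.0)
       for a0 in range(0, 100, 5) for b0 in range(0, 100, 5)]
    + [unit_digit_heuristic(boost=3.0)]
)

def add(a, b):
    scores = defaultdict(float)
    for h in BAG:               # unordered combination: a bag, not a pipeline
        h(a, b, scores)
    return max(scores, key=scores.get)

for a, b in [(23, 45), (7, 9), (81, 16)]:
    print(a, b, "->", add(a, b))   # 68, 16, 97: correct on all three
```

Each pattern alone is wrong on most candidates; only their overlap pins down the correct sum, mirroring the paper's finding that accuracy is explained by the unordered combination of heuristic types rather than any single algorithm.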
Related papers
- No One-Size-Fits-All Neurons: Task-based Neurons for Artificial Neural Networks [25.30801109401654]
Since the human brain relies on task-specific neurons, can artificial network design move from task-based architecture design to task-based neuron design?
We propose a two-step framework for prototyping task-based neurons.
Experiments show that the proposed task-based neuron design is not only feasible but also delivers performance competitive with other state-of-the-art models (a toy illustration follows this entry).
arXiv Detail & Related papers (2024-05-03T09:12:46Z)
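As a toy illustration of the task-based neuron idea above: a standard inner-product neuron is compared with a neuron whose aggregation formula includes a task-matched multiplicative term. The data, features, and least-squares fit are invented for illustration and are not the paper's two-step prototyping framework.

```python
import numpy as np

# Compare a standard linear neuron with a "task-based" neuron whose
# aggregation formula adds an x1*x2 term suited to a multiplicative task.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(1000, 2))
t = X[:, 0] * X[:, 1]                     # task with multiplicative structure

ones = np.ones((len(X), 1))
linear = np.hstack([X, ones])                               # w1*x1 + w2*x2 + b
task = np.hstack([X, (X[:, 0] * X[:, 1])[:, None], ones])   # ... + w3*x1*x2

for name, F in [("standard neuron", linear), ("task-based neuron", task)]:
    w, *_ = np.linalg.lstsq(F, t, rcond=None)
    mse = float(((F @ w - t) ** 2).mean())
    print(f"{name}: mse={mse:.4f}")       # task-based neuron fits near-perfectly
```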
- Inferring Inference [7.11780383076327]
We develop a framework for inferring canonical distributed computations from large-scale neural activity patterns.
We simulate recordings for a model brain that implicitly implements an approximate inference algorithm on a probabilistic graphical model.
Overall, this framework provides a new tool for discovering interpretable structure in neural recordings.
arXiv Detail & Related papers (2023-10-04T22:12:11Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex than expected: even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- Supervised Parameter Estimation of Neuron Populations from Multiple Firing Events [3.2826301276626273]
We study an automatic approach to learning the parameters of neuron populations from a training set of spiking-series and parameter-label pairs via supervised learning.
We simulate many neuronal populations at different parameter settings using a neuron model.
We then compare performance against classical approaches, including genetic search, Bayesian sequential estimation, and a random-walk approximate model (a toy version of the setup is sketched below).
arXiv Detail & Related papers (2022-10-02T03:17:05Z)
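A toy version of the supervised setup above, with an invented neuron model (Poisson spike counts) and invented summary features; the paper's model, features, and estimators differ.

```python
import numpy as np

# Simulate populations at known parameters, then fit a supervised estimator
# mapping spiking features back to the parameter (here, a firing rate).
rng = np.random.default_rng(0)
n = 2000
rate = rng.uniform(1.0, 20.0, n)                   # hidden parameter per population
spikes = rng.poisson(rate[:, None], size=(n, 50))  # 50 spike-count trials each

feats = np.column_stack([spikes.mean(1), spikes.var(1), np.ones(n)])
w, *_ = np.linalg.lstsq(feats, rate, rcond=None)   # linear supervised estimator

pred = feats @ w
print(f"mean abs error: {np.abs(pred - rate).mean():.3f}")  # small: mean count tracks rate
```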
- Recognizing and Verifying Mathematical Equations using Multiplicative Differential Neural Units [86.9207811656179]
We show that memory-augmented neural networks (NNs) can achieve higher-order extrapolation, stable performance, and faster convergence.
Our models achieve a 1.53% average improvement over current state-of-the-art methods in equation verification, and 2.22% Top-1 and 2.96% Top-5 average accuracy improvements for equation completion.
arXiv Detail & Related papers (2021-04-07T03:50:11Z)
- Estimating Multiplicative Relations in Neural Networks [0.0]
We use properties of logarithmic functions to propose a pair of activation functions that translate products into linear expressions, learnable via backpropagation.
We generalize this approach to some complex arithmetic functions and test accuracy on a distribution disjoint from the training set (a minimal log-space sketch follows this entry).
arXiv Detail & Related papers (2020-10-28T14:28:24Z)
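The identity behind the entry above is log(xy) = log(x) + log(y): a logarithmic activation turns a product into a linear expression an ordinary layer can learn. A minimal sketch, assuming positive inputs and a least-squares fit standing in for backpropagation:

```python
import numpy as np

# Products become linear in log-space: log(x*y) = log(x) + log(y).
# Fit a linear map on log-features; exp() maps predictions back.
rng = np.random.default_rng(0)
X = rng.uniform(0.5, 10.0, size=(1000, 2))   # positive operands (x, y)
t = np.log(X[:, 0] * X[:, 1])                # target in log-space

Z = np.log(X)                                # logarithmic activation
w, *_ = np.linalg.lstsq(Z, t, rcond=None)
print(np.round(w, 3))                        # ~[1. 1.], i.e. log(x) + log(y)

pred = np.exp(np.log([3.0, 7.0]) @ w)        # map back to product space
print(round(float(pred), 3))                 # ~21.0
```

Zero and negative operands break the logarithm; the sketch ignores those cases.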
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing (a toy version of the matching step follows this entry).
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
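A toy version of the matching step for the compositional-explanations entry above, under assumptions common to this line of work (binary concept masks, IoU scoring, search over small logical formulas); the concept names and data are invented.

```python
import numpy as np
from itertools import permutations

# Find the logical formula over binary concept masks that best matches
# (by intersection-over-union) the inputs on which a neuron fires.
rng = np.random.default_rng(0)
n = 1000
concepts = {name: rng.random(n) < 0.3 for name in ("blue", "water", "sky")}
neuron = concepts["blue"] & ~concepts["water"]   # ground-truth behavior ...
neuron ^= rng.random(n) < 0.05                   # ... plus some noisy firing

def iou(a, b):
    return (a & b).sum() / max((a | b).sum(), 1)

candidates = {name: mask for name, mask in concepts.items()}      # atomic
candidates.update({f"{x} AND NOT {y}": concepts[x] & ~concepts[y]
                   for x, y in permutations(concepts, 2)})        # compositional

best = max(candidates, key=lambda k: iou(neuron, candidates[k]))
print(best, round(float(iou(neuron, candidates[best])), 2))  # blue AND NOT water
```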
- Machine Number Sense: A Dataset of Visual Arithmetic Problems for Abstract and Relational Reasoning [95.18337034090648]
We propose a dataset, Machine Number Sense (MNS), consisting of visual arithmetic problems automatically generated using a grammar model, the And-Or Graph (AOG).
These visual arithmetic problems are in the form of geometric figures.
We benchmark the MNS dataset using four predominant neural network models as baselines in this visual reasoning task.
arXiv Detail & Related papers (2020-04-25T17:14:58Z)
- iNALU: Improved Neural Arithmetic Logic Unit [2.331160520377439]
The recently proposed Neural Arithmetic Logic Unit (NALU) is a neural architecture whose units explicitly represent mathematical relationships, enabling the network to learn operations such as summation, subtraction, or multiplication.
We show that our model resolves stability issues and outperforms the original NALU model in terms of arithmetic precision and convergence.
arXiv Detail & Related papers (2020-03-17T10:37:22Z)
- Neural Arithmetic Units [84.65228064780744]
Neural networks can approximate complex functions, but they struggle to perform exact arithmetic operations over real numbers.
We present two new neural network components: the Neural Addition Unit (NAU), which can learn exact addition and subtraction, and the Neural Multiplication Unit (NMU), which can multiply subsets of a vector.
Compared with previous neural units, the proposed NAU and NMU converge more consistently, have fewer parameters, learn faster, can converge for larger hidden sizes, obtain sparse and meaningful weights, and can extrapolate to negative and small values (an NAU-style sketch follows this entry).
arXiv Detail & Related papers (2020-01-14T19:35:04Z)
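To convey the flavor of these units, here is a minimal NAU-style sketch: a linear unit whose weights are clipped to [-1, 1] and regularized toward {-1, 0, 1}, so training recovers exact add/subtract structure. The loss and regularizer are simplified stand-ins for the paper's training recipe.

```python
import numpy as np

# NAU-style unit: learn t = x0 + x1 - x2 with weights pushed toward
# {-1, 0, 1} so the learned function is exact addition/subtraction.
rng = np.random.default_rng(1)
X = rng.uniform(-5, 5, size=(2000, 3))
t = X[:, 0] + X[:, 1] - X[:, 2]

w = rng.uniform(-0.5, 0.5, size=3)
for _ in range(2000):
    grad = X.T @ (X @ w - t) / len(X)          # mean-squared-error gradient
    # regularizer pulling each weight toward the nearest of {-1, 0, 1}
    reg = np.where(np.abs(w) < 0.5, w, w - np.sign(w))
    w -= 0.05 * (grad + 0.01 * reg)
    w = np.clip(w, -1.0, 1.0)                  # NAU-style weight clipping

print(np.round(w, 2))                          # -> [ 1.  1. -1.]
```

The clipping and discreteness pressure are what enable extrapolation: once the weights settle on values in {-1, 0, 1}, the unit computes the same exact sum far outside the training range.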