Emergent Linear Representations in World Models of Self-Supervised
Sequence Models
- URL: http://arxiv.org/abs/2309.00941v2
- Date: Thu, 7 Sep 2023 20:36:48 GMT
- Title: Emergent Linear Representations in World Models of Self-Supervised
Sequence Models
- Authors: Neel Nanda, Andrew Lee, Martin Wattenberg
- Abstract summary: Prior work suggests that an Othello-playing neural network learned nonlinear models of the board state.
We show that probing for "my colour" vs. "opponent's colour" may be a simple yet powerful way to interpret the model's internal state.
- Score: 5.712566125397807
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How do sequence models represent their decision-making process? Prior work
suggests that an Othello-playing neural network learned nonlinear models of the
board state (Li et al., 2023). In this work, we provide evidence of a closely
related linear representation of the board. In particular, we show that probing
for "my colour" vs. "opponent's colour" may be a simple yet powerful way to
interpret the model's internal state. This precise understanding of the
internal representations allows us to control the model's behaviour with simple
vector arithmetic. Linear representations enable significant interpretability
progress, which we demonstrate with further exploration of how the world model
is computed.
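To make the two claims above concrete, here is a minimal, self-contained sketch (in PyTorch, not the authors' released code): a purely linear probe trained to decode each board square as empty / mine / opponent's from a cached residual-stream activation, followed by a steering step that edits that activation with simple vector arithmetic. The activation tensor, labels, layer choice, and dimensions are illustrative placeholders.

```python
# Illustrative sketch of (1) a linear probe for "mine vs. opponent's vs. empty" per
# board square and (2) steering via vector arithmetic on the probe directions.
# All tensors below are placeholders, not real cached model activations.
import torch
import torch.nn as nn

D_MODEL, N_SQUARES, N_CLASSES = 512, 64, 3   # hidden size, 8x8 board, {empty, mine, theirs}

# Hypothetical cached activations and labels:
#   acts[i]   = residual-stream vector at some layer/position for game state i
#   labels[i] = per-square class from the *current player's* perspective
acts = torch.randn(10_000, D_MODEL)
labels = torch.randint(0, N_CLASSES, (10_000, N_SQUARES))

# One linear map per square: a purely linear probe, no hidden layer.
probe = nn.Linear(D_MODEL, N_SQUARES * N_CLASSES)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(1_000):
    idx = torch.randint(0, acts.shape[0], (256,))
    logits = probe(acts[idx]).view(-1, N_SQUARES, N_CLASSES)
    loss = loss_fn(logits.flatten(0, 1), labels[idx].flatten())
    opt.zero_grad()
    loss.backward()
    opt.step()

# Steering: push one square's representation from "mine" toward "opponent's"
# by adding the difference of the two probe directions for that square.
W = probe.weight.view(N_SQUARES, N_CLASSES, D_MODEL)
square, MINE, THEIRS = 27, 1, 2
direction = W[square, THEIRS] - W[square, MINE]
direction = direction / direction.norm()

def steer(resid, alpha=4.0):
    """Add the 'flip this square' direction to a residual-stream activation."""
    return resid + alpha * direction

steered = steer(acts[0])  # in practice this would be patched into the forward pass
```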
Related papers
- Are Linear Regression Models White Box and Interpretable? [0.0]
Explainable artificial intelligence (XAI) is a set of tools and algorithms that are applied to or embedded in machine learning models to understand and interpret them.
Simple models, including linear regression, are easy to implement, have low computational complexity, and make their outputs easy to visualize.
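Since this entry's claim rests on linear regression being directly readable, here is a tiny illustrative sketch (scikit-learn, synthetic data, not from the paper) of what that readability amounts to: each fitted coefficient is the estimated change in the prediction per unit change of a feature.

```python
# Minimal illustration of why a linear model is often called "white box": each
# coefficient is directly readable as the effect of one feature, others held fixed.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                       # three synthetic features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=200)

model = LinearRegression().fit(X, y)
for name, coef in zip(["f0", "f1", "f2"], model.coef_):
    print(f"{name}: {coef:+.2f} change in y per unit of {name}")
print("intercept:", round(model.intercept_, 2))
```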
arXiv Detail & Related papers (2024-07-16T21:05:51Z)
- Towards Compositional Interpretability for XAI [3.3768167170511587]
We present an approach to defining AI models and their interpretability based on category theory.
We compare a wide range of AI models as compositional models.
We find that what makes the standard 'intrinsically interpretable' models so transparent is brought out most clearly diagrammatically.
arXiv Detail & Related papers (2024-06-25T14:27:03Z)
- On the Origins of Linear Representations in Large Language Models [51.88404605700344]
We introduce a simple latent variable model to formalize the concept dynamics of next-token prediction.
Experiments show that linear representations emerge when learning from data matching the latent variable model.
We additionally confirm some predictions of the theory using the LLaMA-2 large language model.
arXiv Detail & Related papers (2024-03-06T17:17:36Z)
- Meaning Representations from Trajectories in Autoregressive Models [106.63181745054571]
We propose to extract meaning representations from autoregressive language models by considering the distribution of all possible trajectories extending an input text.
This strategy is prompt-free, does not require fine-tuning, and is applicable to any pre-trained autoregressive model.
We empirically show that the representations obtained from large models align well with human annotations, outperform other zero-shot and prompt-free methods on semantic similarity tasks, and can be used to solve more complex entailment and containment tasks that standard embeddings cannot handle.
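As a rough illustration of the trajectory idea (not the paper's exact estimator), the sketch below represents each text by sampled continuations from an off-the-shelf GPT-2 and scores one text's continuations under the other; the model choice, sample counts, and scoring rule are assumptions made here for brevity.

```python
# Rough sketch: represent a text by sampled continuations ("trajectories") and
# compare two texts by how well each one's trajectories are scored under the other.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sample_trajectories(text, n=8, new_tokens=20):
    ids = tok(text, return_tensors="pt").input_ids
    out = model.generate(ids, do_sample=True, max_new_tokens=new_tokens,
                         num_return_sequences=n, pad_token_id=tok.eos_token_id)
    return [seq[ids.shape[1]:] for seq in out]        # continuation tokens only

def mean_logprob(prefix_text, continuation_ids):
    """Average log-probability of a continuation under a given prefix."""
    prefix = tok(prefix_text, return_tensors="pt").input_ids[0]
    full = torch.cat([prefix, continuation_ids]).unsqueeze(0)
    labels = full.clone()
    labels[0, :prefix.shape[0]] = -100                # score only the continuation
    with torch.no_grad():
        return -model(full, labels=labels).loss.item()

def similarity(text_a, text_b):
    traj_a, traj_b = sample_trajectories(text_a), sample_trajectories(text_b)
    cross = [mean_logprob(text_b, t) for t in traj_a] + \
            [mean_logprob(text_a, t) for t in traj_b]
    return sum(cross) / len(cross)                    # higher = more interchangeable

print(similarity("A cat sat on the mat.", "A kitten rested on the rug."))
```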
arXiv Detail & Related papers (2023-10-23T04:35:58Z)
- Globally Interpretable Graph Learning via Distribution Matching [12.885580925389352]
We aim to answer an important question that is not yet well studied: how to provide a global interpretation for the graph learning procedure?
We formulate this problem as globally interpretable graph learning, which aims to distill high-level, human-intelligible patterns that dominate the learning procedure.
We propose a novel model fidelity metric, tailored for evaluating the fidelity of the resulting model trained on interpretations.
arXiv Detail & Related papers (2023-06-18T00:50:36Z)
- Learning with Explanation Constraints [91.23736536228485]
We provide a learning theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach across a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z)
- Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task [75.35278593566068]
Language models show a surprising range of capabilities, but the source of their apparent competence is unclear.
Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see?
We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello.
arXiv Detail & Related papers (2022-10-24T16:29:55Z)
- Fair Interpretable Representation Learning with Correction Vectors [60.0806628713968]
We propose a new framework for fair representation learning that is centered around the learning of "correction vectors".
We show experimentally that several fair representation learning models constrained in such a way do not exhibit losses in ranking or classification performance.
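The correction-vector idea lends itself to a toy illustration. The sketch below is not the paper's training procedure: it simply computes, in closed form, a per-group vector that, when added to each sample's representation, aligns the group means, as a stand-in for the learned correction vectors.

```python
# Toy illustration of a "correction vector": a per-group vector added to a learned
# representation so that, after correction, the two groups' means coincide.
# The real framework learns these vectors jointly with the task.
import torch

torch.manual_seed(0)
z0 = torch.randn(1000, 16) + torch.tensor([2.0] + [0.0] * 15)   # group-0 reps, shifted
z1 = torch.randn(1000, 16)                                       # group-1 reps
reps = torch.cat([z0, z1])
group = torch.cat([torch.zeros(1000, dtype=torch.long), torch.ones(1000, dtype=torch.long)])

global_mean = reps.mean(dim=0)
corrections = torch.stack([global_mean - reps[group == g].mean(dim=0) for g in (0, 1)])

corrected = reps + corrections[group]          # apply each sample's group correction
print((corrected[group == 0].mean(0) - corrected[group == 1].mean(0)).abs().max())
```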
arXiv Detail & Related papers (2022-02-07T11:19:23Z)
- Recurrence-Aware Long-Term Cognitive Network for Explainable Pattern Classification [0.0]
We propose an LTCN-based model for interpretable pattern classification of structured data.
Our method provides its own mechanism for explanation, quantifying the relevance of each feature in the decision process.
Our interpretable model obtains competitive performance when compared to state-of-the-art white-box and black-box models.
arXiv Detail & Related papers (2021-07-07T18:14:50Z)
- Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix applicability is confirmed via different examples, showing how it can be used in practice to promote RF model interpretability.
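In the same spirit (though not the ExMatrix implementation itself), the sketch below extracts every root-to-leaf path of a small scikit-learn random forest as a rule and builds a rules-by-features matrix, the kind of structure such a visualization is built on; the dataset and forest settings are arbitrary.

```python
# Sketch of a rules-by-features matrix: each root-to-leaf path in a random forest
# is treated as a rule, and each cell counts the predicates a rule places on a feature.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=5, max_depth=3, random_state=0).fit(X, y)

def extract_rules(tree):
    """Yield one rule per leaf as a list of (feature_index, threshold, '<=' or '>')."""
    t = tree.tree_
    def walk(node, path):
        if t.children_left[node] == -1:              # leaf node
            yield path
            return
        f, thr = t.feature[node], t.threshold[node]
        yield from walk(t.children_left[node], path + [(f, thr, "<=")])
        yield from walk(t.children_right[node], path + [(f, thr, ">")])
    yield from walk(0, [])

rules = [r for est in forest.estimators_ for r in extract_rules(est)]
matrix = np.zeros((len(rules), X.shape[1]), dtype=int)
for i, rule in enumerate(rules):
    for feat, _, _ in rule:
        matrix[i, feat] += 1                          # rows: rules, columns: features

print(f"{matrix.shape[0]} rules x {matrix.shape[1]} features")
print(matrix[:5])
```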
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.