Llama Scope: Extracting Millions of Features from Llama-3.1-8B with Sparse Autoencoders
- URL: http://arxiv.org/abs/2410.20526v1
- Date: Sun, 27 Oct 2024 17:33:49 GMT
- Title: Llama Scope: Extracting Millions of Features from Llama-3.1-8B with Sparse Autoencoders
- Authors: Zhengfu He, Wentao Shu, Xuyang Ge, Lingjie Chen, Junxuan Wang, Yunhua Zhou, Frances Liu, Qipeng Guo, Xuanjing Huang, Zuxuan Wu, Yu-Gang Jiang, Xipeng Qiu
- Abstract summary: Sparse Autoencoders (SAEs) have emerged as a powerful unsupervised method for extracting sparse representations from language models.
We introduce a suite of 256 SAEs, trained on each layer and sublayer of the Llama-3.1-8B-Base model, with 32K and 128K features.
We assess the generalizability of SAEs trained on base models to longer contexts and fine-tuned models.
- Score: 115.34050914216665
- Abstract: Sparse Autoencoders (SAEs) have emerged as a powerful unsupervised method for extracting sparse representations from language models, yet scalable training remains a significant challenge. We introduce a suite of 256 SAEs, trained on each layer and sublayer of the Llama-3.1-8B-Base model, with 32K and 128K features. Modifications to a state-of-the-art SAE variant, Top-K SAEs, are evaluated across multiple dimensions. In particular, we assess the generalizability of SAEs trained on base models to longer contexts and fine-tuned models. Additionally, we analyze the geometry of learned SAE latents, confirming that \emph{feature splitting} enables the discovery of new features. The Llama Scope SAE checkpoints are publicly available at~\url{https://huggingface.co/fnlp/Llama-Scope}, alongside our scalable training, interpretation, and visualization tools at \url{https://github.com/OpenMOSS/Language-Model-SAEs}. These contributions aim to advance the open-source Sparse Autoencoder ecosystem and support mechanistic interpretability research by reducing the need for redundant SAE training.
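As a rough illustration of the Top-K SAE variant the abstract centers on, here is a minimal sketch of a forward pass, assuming a PyTorch-style interface; the class name, dimensions, and hyperparameters (d_model, n_features, k) are illustrative and not taken from the Llama Scope codebase.

```python
# Minimal sketch of a Top-K sparse autoencoder, assuming PyTorch.
# All names and hyperparameters here are illustrative, not Llama Scope's.
import torch
import torch.nn as nn


class TopKSAE(nn.Module):
    def __init__(self, d_model: int, n_features: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model, bias=False)
        self.b_dec = nn.Parameter(torch.zeros(d_model))  # shared pre/post bias

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Encode the (centered) activation into a wide feature space.
        pre_acts = self.encoder(x - self.b_dec)
        # Keep only the k largest activations per token; zero out the rest.
        topk = torch.topk(pre_acts, self.k, dim=-1)
        acts = torch.zeros_like(pre_acts)
        acts.scatter_(-1, topk.indices, torch.relu(topk.values))
        # Reconstruct the original activation from the sparse feature code.
        return self.decoder(acts) + self.b_dec


# Example: a 32K-feature SAE over a 4096-dimensional residual stream.
sae = TopKSAE(d_model=4096, n_features=32_768, k=50)
reconstruction = sae(torch.randn(8, 4096))
```

Training then minimizes the reconstruction error on activations collected from the target layer or sublayer; sparsity is enforced structurally by the Top-K selection rather than by an L1 penalty.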
Related papers
- Sparse Autoencoders Do Not Find Canonical Units of Analysis [6.0188420022822955]
A common goal of mechanistic interpretability is to decompose the activations of neural networks into features.
Sparse autoencoders (SAEs) are a popular method for finding these features.
We use two techniques: SAE stitching to show they are incomplete, and meta-SAEs to show they are not atomic.
arXiv Detail & Related papers (2025-02-07T12:33:08Z)
- Low-Rank Adapting Models for Sparse Autoencoders [6.932760557251821]
We use low-rank adaptation (LoRA) to finetune the language model itself around a previously trained SAE.
We analyze our method across SAE sparsity, SAE width, language model size, LoRA rank, and model layer on the Gemma Scope family of SAEs.
arXiv Detail & Related papers (2025-01-31T18:59:16Z)
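The entry above finetunes the language model itself with LoRA around a previously trained SAE. Purely as a hedged illustration of the LoRA mechanism, not that paper's implementation, a hypothetical PyTorch wrapper around a frozen linear layer could look like this:

```python
# Hypothetical LoRA wrapper, assuming PyTorch; rank and alpha are illustrative.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Adds a trainable low-rank update (alpha / rank) * B @ A to a frozen linear layer."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the original weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)
```

In the setup that entry describes, the previously trained SAE would stay fixed while only the low-rank adapters of the language model receive gradients.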
- Sparse Autoencoders Trained on the Same Data Learn Different Features [0.7234862895932991]
Sparse autoencoders (SAEs) are a useful tool for uncovering human-interpretable features in large language models.
Our research shows that SAEs trained on the same model and data, differing only in the random seed used to initialize their weights, identify different sets of features.
arXiv Detail & Related papers (2025-01-28T01:24:16Z)
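The entry above reports that SAEs trained from different random seeds identify different features. One illustrative way to quantify overlap between two such SAEs (an assumed metric, not necessarily the one used in that paper) is the maximum cosine similarity between their decoder directions:

```python
# Illustrative feature-overlap metric between two SAEs, assuming PyTorch.
# The metric choice is an assumption, not necessarily the cited paper's procedure.
import torch
import torch.nn.functional as F


def max_cosine_overlap(dec_a: torch.Tensor, dec_b: torch.Tensor) -> torch.Tensor:
    """dec_a, dec_b: (d_model, n_features) decoder weight matrices of two SAEs."""
    a = F.normalize(dec_a, dim=0)       # unit-norm feature directions, SAE A
    b = F.normalize(dec_b, dim=0)       # unit-norm feature directions, SAE B
    sims = a.T @ b                      # pairwise cosine similarities
    return sims.max(dim=1).values       # best match in B for each feature of A
```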
- SPaR: Self-Play with Tree-Search Refinement to Improve Instruction-Following in Large Language Models [88.29990536278167]
We introduce SPaR, a self-play framework integrating tree-search self-refinement to yield valid and comparable preference pairs free from distractions.
Our experiments show that a LLaMA3-8B model, trained over three iterations guided by SPaR, surpasses GPT-4-Turbo on the IFEval benchmark without losing general capabilities.
arXiv Detail & Related papers (2024-12-16T09:47:43Z)
- Automatically Interpreting Millions of Features in Large Language Models [1.8035046415192353]
Sparse autoencoders (SAEs) can be used to transform activations into a higher-dimensional latent space.
We build an open-source pipeline to generate and evaluate natural language explanations for SAE features.
Our large-scale analysis confirms that SAE latents are indeed much more interpretable than neurons.
arXiv Detail & Related papers (2024-10-17T17:56:01Z)
- Measuring Progress in Dictionary Learning for Language Model Interpretability with Board Game Models [18.77400885091398]
We propose to measure progress in interpretable dictionary learning by working in the setting of LMs trained on chess and Othello transcripts.
We introduce a new SAE training technique, $\textit{p-annealing}$, which improves performance on prior unsupervised metrics as well as our new metrics.
arXiv Detail & Related papers (2024-07-31T18:45:13Z)
- Segment and Caption Anything [126.20201216616137]
We propose a method to efficiently equip the Segment Anything Model with the ability to generate regional captions.
By introducing a lightweight query-based feature mixer, we align the region-specific features with the embedding space of language models for later caption generation.
We conduct extensive experiments to demonstrate the superiority of our method and validate each design choice.
arXiv Detail & Related papers (2023-12-01T19:00:17Z)
- Interpretability at Scale: Identifying Causal Mechanisms in Alpaca [62.65877150123775]
We use Boundless DAS to efficiently search for interpretable causal structure in large language models while they follow instructions.
Our findings mark a first step toward faithfully understanding the inner workings of our ever-growing and most widely deployed language models.
arXiv Detail & Related papers (2023-05-15T17:15:40Z)
- SdAE: Self-distillated Masked Autoencoder [95.3684955370897]
This paper proposes SdAE, a self-distillated masked autoencoder network.
With only 300 epochs pre-training, a vanilla ViT-Base model achieves an 84.1% fine-tuning accuracy on ImageNet-1k classification.
arXiv Detail & Related papers (2022-07-31T15:07:25Z)
- Multi-Modal Zero-Shot Sign Language Recognition [51.07720650677784]
We propose a multi-modal Zero-Shot Sign Language Recognition model.
A Transformer-based model along with a C3D model is used for hand detection and deep feature extraction.
A semantic space is used to map the visual features to the lingual embedding of the class labels.
arXiv Detail & Related papers (2021-09-02T09:10:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.