SAGE: An Agentic Explainer Framework for Interpreting SAE Features in Language Models
- URL: http://arxiv.org/abs/2511.20820v1
- Date: Tue, 25 Nov 2025 20:14:29 GMT
- Title: SAGE: An Agentic Explainer Framework for Interpreting SAE Features in Language Models
- Authors: Jiaojiao Han, Wujiang Xu, Mingyu Jin, Mengnan Du
- Abstract summary: Large language models (LLMs) have achieved remarkable progress, yet their internal mechanisms remain largely opaque. Sparse autoencoders (SAEs) have emerged as a promising tool for decomposing LLM representations into more interpretable features. We propose SAGE (SAE AGentic Explainer), an agent-based framework that recasts feature interpretation from a passive, single-pass generation task into an active, explanation-driven process.
- Score: 37.102387880457535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have achieved remarkable progress, yet their internal mechanisms remain largely opaque, posing a significant challenge to their safe and reliable deployment. Sparse autoencoders (SAEs) have emerged as a promising tool for decomposing LLM representations into more interpretable features, but explaining the features captured by SAEs remains a challenging task. In this work, we propose SAGE (SAE AGentic Explainer), an agent-based framework that recasts feature interpretation from a passive, single-pass generation task into an active, explanation-driven process. SAGE implements a rigorous methodology by systematically formulating multiple explanations for each feature, designing targeted experiments to test them, and iteratively refining explanations based on empirical activation feedback. Experiments on features from SAEs of diverse language models demonstrate that SAGE produces explanations with significantly higher generative and predictive accuracy compared to state-of-the-art baselines.
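The abstract describes SAGE as an iterative propose-test-refine loop over SAE features. The sketch below is a rough illustration of how such a loop could be organized; it is not the authors' released code, and every function name (propose_explanations, generate_test_inputs, measure_activation, predict_activation, revise) is a hypothetical placeholder for an LLM-agent or SAE hook.

```python
# Minimal sketch of an explanation-driven interpretation loop in the spirit of SAGE.
# NOTE: illustrative only, not the authors' implementation. All callables passed in
# are hypothetical and would be backed by an LLM agent plus the target model's SAE.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Explanation:
    text: str           # natural-language hypothesis about what the feature encodes
    score: float = 0.0  # agreement between predicted and measured activations


def interpret_feature(
    feature_id: int,
    propose_explanations: Callable[[int], List[str]],
    generate_test_inputs: Callable[[str], List[str]],
    measure_activation: Callable[[int, str], float],
    predict_activation: Callable[[str, str], float],
    revise: Callable[[str, List[Tuple[str, float, float]]], str],
    n_rounds: int = 3,
) -> Explanation:
    """Iteratively formulate, test, and refine explanations for one SAE feature."""
    # Formulate multiple candidate explanations for the feature.
    candidates = [Explanation(t) for t in propose_explanations(feature_id)]

    for _ in range(n_rounds):
        for cand in candidates:
            # Design targeted experiments: inputs the explanation implies should
            # (or should not) activate the feature.
            tests = generate_test_inputs(cand.text)
            evidence, errors = [], []
            for text in tests:
                predicted = predict_activation(cand.text, text)  # implied by the explanation
                observed = measure_activation(feature_id, text)  # actual SAE activation
                evidence.append((text, predicted, observed))
                errors.append(abs(predicted - observed))
            # Score = 1 minus mean absolute prediction error (higher is better).
            cand.score = 1.0 - sum(errors) / max(len(errors), 1)
            # Refine the explanation using the empirical activation feedback.
            cand.text = revise(cand.text, evidence)

    return max(candidates, key=lambda c: c.score)
```

In a real setup, propose_explanations and revise would prompt an LLM agent, while measure_activation would run the target model and read out the SAE feature's activation on each test input; comparing predicted against observed activations mirrors the "empirical activation feedback" the abstract refers to.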
Related papers
- Addressing Explainability of Generative AI using SMILE (Statistical Model-agnostic Interpretability with Local Explanations) [0.0]
This thesis introduces gSMILE, a unified framework for the explainability of generative models. gSMILE employs controlled perturbations of textual input, Wasserstein distance metrics, and weighted surrogate modelling. It produces robust, human-aligned attributions and generalises effectively across state-of-the-art generative models.
arXiv Detail & Related papers (2026-02-01T12:54:51Z) - ProtSAE: Disentangling and Interpreting Protein Language Models via Semantically-Guided Sparse Autoencoders [30.219733023958188]
Sparse autoencoders (SAEs) have emerged as a powerful tool for mechanistic interpretability of large language models. We propose a semantically-guided SAE, called ProtSAE. We show that ProtSAE learns more biologically relevant and interpretable hidden features compared to previous methods.
arXiv Detail & Related papers (2025-08-26T11:20:31Z) - Can LLM-Generated Textual Explanations Enhance Model Classification Performance? An Empirical Study [11.117380681219295]
We present an automated framework to generate high-quality textual explanations. We rigorously assess the quality of these explanations using a comprehensive suite of Natural Language Generation (NLG) metrics. Our experiments demonstrate that automated explanations exhibit highly competitive effectiveness compared to human-annotated explanations.
arXiv Detail & Related papers (2025-08-13T12:59:08Z) - MetaExplainer: A Framework to Generate Multi-Type User-Centered Explanations for AI Systems [1.9811010456089264]
We introduce MetaExplainer, a neuro-symbolic framework designed to generate user-centered explanations. Our approach employs a three-stage process: first, we decompose user questions into machine-readable formats using state-of-the-art large language models (LLMs); second, we delegate the task of generating system recommendations to model explainer methods; and finally, we synthesize natural language explanations that summarize the explainer outputs.
arXiv Detail & Related papers (2025-08-01T04:01:40Z) - IDA-Bench: Evaluating LLMs on Interactive Guided Data Analysis [60.32962597618861]
IDA-Bench is a novel benchmark evaluating large language models in multi-round interactive scenarios. Agent performance is judged by comparing its final numerical output to the human-derived baseline. Even state-of-the-art coding agents (like Claude-3.7-thinking) succeed on only 50% of the tasks, highlighting limitations not evident in single-turn tests.
arXiv Detail & Related papers (2025-05-23T09:37:52Z) - From Feature Importance to Natural Language Explanations Using LLMs with RAG [4.204990010424084]
We introduce traceable question-answering, leveraging an external knowledge repository to inform the responses of Large Language Models (LLMs).
This knowledge repository comprises contextual details regarding the model's output, containing high-level features, feature importance, and alternative probabilities.
We integrate four key characteristics - social, causal, selective, and contrastive - drawn from social science research on human explanations into a single-shot prompt, guiding the response generation process.
arXiv Detail & Related papers (2024-07-30T17:27:20Z) - LangSuitE: Planning, Controlling and Interacting with Large Language Models in Embodied Text Environments [70.91258869156353]
We introduce LangSuitE, a versatile and simulation-free testbed featuring 6 representative embodied tasks in textual embodied worlds.
Compared with previous LLM-based testbeds, LangSuitE offers adaptability to diverse environments without multiple simulation engines.
We devise a novel chain-of-thought (CoT) schema, EmMem, which summarizes embodied states w.r.t. history information.
arXiv Detail & Related papers (2024-06-24T03:36:29Z) - Diffexplainer: Towards Cross-modal Global Explanations with Diffusion Models [51.21351775178525]
DiffExplainer is a novel framework that, leveraging language-vision models, enables multimodal global explainability.
It employs diffusion models conditioned on optimized text prompts, synthesizing images that maximize class outputs.
The analysis of generated visual descriptions allows for automatic identification of biases and spurious features.
arXiv Detail & Related papers (2024-04-03T10:11:22Z) - Explanation-aware Soft Ensemble Empowers Large Language Model In-context Learning [50.00090601424348]
Large language models (LLMs) have shown remarkable capabilities in various natural language understanding tasks.
We propose EASE, an Explanation-Aware Soft Ensemble framework to empower in-context learning with LLMs.
arXiv Detail & Related papers (2023-11-13T06:13:38Z) - Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
We define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction.
This paper contributes a set of guidelines named X-MOP which allows selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)