Sparsification and Reconstruction from the Perspective of Representation Geometry
- URL: http://arxiv.org/abs/2505.22506v1
- Date: Wed, 28 May 2025 15:54:33 GMT
- Title: Sparsification and Reconstruction from the Perspective of Representation Geometry
- Authors: Wenjie Sun, Bingzhe Wu, Zhile Yang, Chengke Wu
- Abstract summary: Sparse Autoencoders (SAEs) have emerged as a predominant tool in mechanistic interpretability. This study explains the principles of sparsity from the perspective of representational geometry. It specifically emphasizes the necessity of understanding representations and incorporating representational constraints.
- Score: 10.834177456685538
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sparse Autoencoders (SAEs) have emerged as a predominant tool in mechanistic interpretability, aiming to identify interpretable monosemantic features. However, how does sparse encoding organize the representations of activation vectors from language models? What is the relationship between this organizational paradigm and feature disentanglement, as well as reconstruction performance? To address these questions, we propose SAEMA, which validates the stratified structure of the representations by observing how the rank of the symmetric positive semi-definite (SPSD) matrix, corresponding to the modal tensor unfolded along the latent tensor, varies with the level of noise added to the residual stream. To systematically investigate how sparse encoding alters representational structures, we define local and global representations, demonstrating that they amplify inter-feature distinctions by merging similar semantic features and introducing additional dimensionality. Furthermore, we intervene on the global representations from an optimization perspective, proving a significant causal relationship between their separability and reconstruction performance. This study explains the principles of sparsity from the perspective of representational geometry and demonstrates the impact of changes in representational structure on reconstruction performance. In particular, it emphasizes the necessity of understanding representations and incorporating representational constraints, providing empirical references for developing new interpretability tools and improving SAEs. The code is available at https://github.com/wenjie1835/SAERepGeo.
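To make the rank-variability probe concrete, here is a toy sketch that tracks the numerical rank of the symmetric positive semi-definite Gram matrix of latent activations as the injected noise level grows. This is only an illustration of the probe's logic, not the paper's SAEMA procedure (there, noise enters the residual stream before encoding, and the matrix comes from a modal unfolding of the latent tensor); all names are hypothetical.

```python
import numpy as np

def spsd_rank(X):
    """Numerical rank of the SPSD Gram matrix G = X X^T."""
    return int(np.linalg.matrix_rank(X @ X.T, hermitian=True))

def rank_vs_noise(latents, noise_levels, seed=0):
    """Track how the Gram-matrix rank of noisy latent activations varies
    with the noise scale: on a stratified space, the rank shifts as
    noise pushes points across strata."""
    rng = np.random.default_rng(seed)
    return [
        spsd_rank(latents + s * rng.standard_normal(latents.shape))
        for s in noise_levels
    ]

# latents: (n_samples, d_latent) array, e.g. SAE latent activations
# print(rank_vs_noise(latents, noise_levels=[0.0, 0.01, 0.1, 1.0]))
```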
Related papers
- From Flat to Hierarchical: Extracting Sparse Representations with Matching Pursuit [16.996218963146788]
We show that MP-SAE unrolls its encoder into a sequence of residual-guided steps, allowing it to capture hierarchical and nonlinearly accessible features. We also show that the sequential encoder principle of MP-SAE affords an additional benefit of adaptive sparsity at inference time.
arXiv Detail & Related papers (2025-06-03T17:24:55Z)
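As a rough illustration of the residual-guided sequential encoding idea, here is a minimal matching-pursuit sketch in NumPy. This is generic matching pursuit over a fixed unit-norm dictionary, not the MP-SAE architecture itself; the names (`matching_pursuit_encode`, `D`, `k`) are assumptions for illustration.

```python
import numpy as np

def matching_pursuit_encode(x, D, k=8):
    """Greedy residual-guided sparse encoding (generic matching pursuit).

    x : (d,) activation vector to encode
    D : (m, d) dictionary with unit-norm rows (decoder directions)
    k : number of sequential steps (sparsity budget)
    """
    residual = x.astype(float).copy()
    codes = np.zeros(D.shape[0])
    for _ in range(k):
        scores = D @ residual            # match every atom to the residual
        j = int(np.argmax(np.abs(scores)))
        codes[j] += scores[j]            # update the chosen coefficient
        residual -= scores[j] * D[j]     # strip its contribution
    return codes                         # sparse code; x ≈ codes @ D
```

Adaptive sparsity at inference time then amounts to replacing the fixed step count `k` with a stopping rule on the residual norm.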
- "Principal Components" Enable A New Language of Images [79.45806370905775]
We introduce a novel visual tokenization framework that embeds a provable PCA-like structure into the latent token space. Our approach achieves state-of-the-art reconstruction performance and enables better interpretability to align with the human vision system.
arXiv Detail & Related papers (2025-03-11T17:59:41Z)
- Structured Convergence in Large Language Model Representations via Hierarchical Latent Space Folding [0.0]
Token representations in high-dimensional latent spaces often exhibit redundancy, limiting computational efficiency and reducing structural coherence across model layers. This paper introduces a structured transformation mechanism that enforces a multi-scale organization within learned embeddings. Empirical evaluation demonstrates a reduction in representational variance across layers, contributing to more stable perplexity distributions and enhancing predictive confidence in text generation.
arXiv Detail & Related papers (2025-02-13T04:01:54Z)
- Hierarchical Contextual Manifold Alignment for Structuring Latent Representations in Large Language Models [7.798982346197703]
The organization of latent token representations plays a crucial role in determining the stability, generalization, and contextual consistency of language models. A hierarchical alignment method was introduced to restructure token embeddings without altering core model weights. Experimental evaluations demonstrated improvements in rare token retrieval, adversarial robustness, and long-range dependency tracking.
arXiv Detail & Related papers (2025-02-06T04:01:27Z)
- Fully Distributed, Flexible Compositional Visual Representations via Soft Tensor Products [13.306125510884563]
We introduce Soft TPR, a representational form that encodes compositional structure in an inherently distributed, flexible manner. We show that Soft TPR consistently outperforms conventional disentanglement alternatives. These findings highlight the promise of a distributed and flexible approach to representing compositional structure.
arXiv Detail & Related papers (2024-12-05T23:47:58Z)
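For orientation, Soft TPR builds on tensor product representations; a minimal sketch of the classical (hard) TPR binding that Soft TPR relaxes is shown below. The paper's contribution is a softer, learned version of this binding, and all names here are illustrative assumptions.

```python
import numpy as np

def tpr_bind(fillers, roles):
    """Classical tensor-product binding: a sum of filler-role outer products.

    fillers : (n, d_f) one filler (content) vector per component
    roles   : (n, d_r) one role (slot) vector per component
    Returns a single distributed representation of shape (d_f, d_r).
    """
    return np.einsum('nf,nr->fr', fillers, roles)

def tpr_unbind(T, role):
    """Recover the filler bound to `role`; exact when roles are orthonormal."""
    return T @ role
```

The point, per the summary, is to keep this fully distributed compositional form while relaxing the rigid binding.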
- Progressive Semantic-Visual Mutual Adaption for Generalized Zero-Shot Learning [74.48337375174297]
Generalized Zero-Shot Learning (GZSL) identifies unseen categories by knowledge transferred from the seen domain.
We deploy the dual semantic-visual transformer module (DSVTM) to progressively model the correspondences between prototypes and visual features.
DSVTM devises an instance-motivated semantic encoder that learns instance-centric prototypes to adapt to different images, enabling unmatched semantic-visual pairs to be recast as matched ones.
arXiv Detail & Related papers (2023-03-27T15:21:43Z)
- Self-Supervised Learning Disentangled Group Representation as Feature [82.07737719232972]
We show that existing Self-Supervised Learning (SSL) only disentangles simple augmentation features such as rotation and colorization.
We propose an iterative SSL algorithm: Iterative Partition-based Invariant Risk Minimization (IP-IRM).
We prove that IP-IRM converges to a fully disentangled representation and show its effectiveness on various benchmarks.
arXiv Detail & Related papers (2021-10-28T16:12:33Z)
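For orientation, IP-IRM builds on Invariant Risk Minimization; a minimal PyTorch sketch of the standard IRMv1 penalty is shown below. This is the generic penalty only, not the paper's full iterative partition search, and all names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits, labels):
    """IRMv1 penalty: squared gradient of the risk w.r.t. a dummy scale w=1.
    A small penalty means the current classifier is already (locally)
    optimal for this partition subset, i.e. the features are invariant."""
    w = torch.ones(1, requires_grad=True)
    loss = F.cross_entropy(logits * w, labels)
    (grad,) = torch.autograd.grad(loss, [w], create_graph=True)
    return (grad ** 2).sum()

# IP-IRM-style loop (sketch): alternate between (1) searching for the
# data partition that maximizes this penalty and (2) updating the
# representation to minimize ssl_loss + penalty summed over the subsets.
```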
- R$^3$Net: Relation-embedded Representation Reconstruction Network for Change Captioning [30.962341503501964]
Change captioning uses a natural language sentence to describe the fine-grained differences between two similar images.
Viewpoint change is the most typical distractor in this task, because it changes the scale and location of the objects and overwhelms the representation of real change.
We propose a Relation-embedded Representation Reconstruction Network (R$^3$Net) to explicitly distinguish the real change from the large amount of clutter and irrelevant changes.
arXiv Detail & Related papers (2021-10-20T00:57:39Z)
- Image Synthesis via Semantic Composition [74.68191130898805]
We present a novel approach to synthesize realistic images based on their semantic layouts.
It hypothesizes that objects with similar appearance share similar representations.
Our method establishes dependencies between regions according to their appearance correlation, yielding both spatially variant and associated representations.
arXiv Detail & Related papers (2021-09-15T02:26:07Z)
- Anti-aliasing Semantic Reconstruction for Few-Shot Semantic Segmentation [66.85202434812942]
We reformulate few-shot segmentation as a semantic reconstruction problem.
We convert base class features into a series of basis vectors which span a class-level semantic space for novel class reconstruction.
Our proposed approach, referred to as anti-aliasing semantic reconstruction (ASR), provides a systematic yet interpretable solution for few-shot learning problems.
arXiv Detail & Related papers (2021-06-01T02:17:36Z)
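The basis-vector reconstruction described above can be pictured as a least-squares projection of novel-class features onto the span of base-class basis vectors. The sketch below shows that generic projection only, not ASR's anti-aliasing pipeline; shapes and names are assumptions for illustration.

```python
import numpy as np

def reconstruct_in_semantic_space(features, basis):
    """Least-squares reconstruction of features from class-level basis vectors.

    features : (n, d) novel-class feature vectors
    basis    : (k, d) basis vectors distilled from base-class features
    Returns the projection of each feature onto span(basis).
    """
    # Solve min_W ||features - W @ basis||^2 for coefficients W (k, n)
    W, *_ = np.linalg.lstsq(basis.T, features.T, rcond=None)
    return (basis.T @ W).T
```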
- Unsupervised Distillation of Syntactic Information from Contextualized Word Representations [62.230491683411536]
We tackle the task of unsupervised disentanglement between semantics and structure in neural language representations.
To this end, we automatically generate groups of sentences which are structurally similar but semantically different.
We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.
arXiv Detail & Related papers (2020-10-11T15:13:18Z)