Latent Space Topology Evolution in Multilayer Perceptrons
- URL: http://arxiv.org/abs/2506.01569v1
- Date: Mon, 02 Jun 2025 11:51:53 GMT
- Title: Latent Space Topology Evolution in Multilayer Perceptrons
- Authors: Eduardo Paluzo-Hidalgo,
- Abstract summary: This paper introduces a framework for interpreting the internal representations of Multilayer Perceptrons (MLPs). We construct a simplicial tower, a sequence of simplicial complexes connected by simplicial maps, that captures how data evolves across network layers.
- Score: 0.26107298043931204
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a topological framework for interpreting the internal representations of Multilayer Perceptrons (MLPs). We construct a simplicial tower, a sequence of simplicial complexes connected by simplicial maps, that captures how data topology evolves across network layers. Our approach enables bi-persistence analysis: layer persistence tracks topological features within each layer across scales, while MLP persistence reveals how these features transform through the network. We prove stability theorems for our topological descriptors and establish that linear separability in latent spaces is related to disconnected components in the nerve complexes. To make our framework practical, we develop a combinatorial algorithm for computing MLP persistence and introduce trajectory-based visualisations that track data flow through the network. Experiments on synthetic and real-world medical data demonstrate our method's ability to identify redundant layers, reveal critical topological transitions, and provide interpretable insights into how MLPs progressively organise data for classification.
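The layer-persistence half of this analysis is easy to prototype. Below is a minimal sketch, assuming a toy ReLU MLP and using the third-party `ripser` library for Vietoris-Rips persistence; the helper names (`layer_activations`, `layer_persistence`) are ours, and this is not the paper's combinatorial nerve-based algorithm.
```python
# Sketch: per-layer persistence diagrams of MLP latent representations.
# Uses numpy and ripser (pip install ripser). This approximates "layer
# persistence" with a Vietoris-Rips filtration; it is NOT the paper's
# combinatorial simplicial-tower algorithm.
import numpy as np
from ripser import ripser

def layer_activations(weights, biases, X):
    """Forward pass of a ReLU MLP, collecting every layer's latent point cloud."""
    acts = [X]
    for W, b in zip(weights, biases):
        X = np.maximum(X @ W + b, 0.0)  # ReLU; swap in the real activation
        acts.append(X)
    return acts

def layer_persistence(acts, maxdim=1):
    """H0/H1 persistence diagrams for each layer's point cloud."""
    return [ripser(A, maxdim=maxdim)["dgms"] for A in acts]

# Toy example: a random 2-layer MLP on a noisy circle (one H1 loop expected).
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))
Ws = [0.5 * rng.standard_normal((2, 16)), 0.5 * rng.standard_normal((16, 8))]
bs = [np.zeros(16), np.zeros(8)]
for i, d in enumerate(layer_persistence(layer_activations(Ws, bs, X))):
    print(f"layer {i}: {len(d[0])} H0 bars, {len(d[1])} H1 bars")
```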
Related papers
- Holes in Latent Space: Topological Signatures Under Adversarial Influence [1.193044160835091]
We propose persistent homology (PH), a tool from topological data analysis, to characterize multiscale latent space dynamics in language models.
We show that adversarial conditions consistently compress latent topologies, reducing structural diversity at smaller scales while amplifying dominant features at coarser ones.
We introduce a neuron-level PH framework that quantifies how information flows and transforms within and across layers.
arXiv Detail & Related papers (2025-05-26T18:31:49Z)
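A hedged sketch of the kind of comparison this line of work performs: compute persistence diagrams for latent activations under clean and adversarial inputs and measure how far apart they are. The `ripser`/`persim` pipeline and the scaled stand-in for "compression" are our illustrative assumptions, not the paper's method.
```python
# Sketch: compare latent topology under clean vs. adversarial inputs with a
# bottleneck distance between H1 diagrams. `clean_latents`/`adv_latents` are
# hypothetical activation arrays; uses ripser + persim (pip install ripser persim).
import numpy as np
from ripser import ripser
from persim import bottleneck

def h1_diagram(points):
    """H1 persistence diagram of a point cloud (Vietoris-Rips filtration)."""
    return ripser(points, maxdim=1)["dgms"][1]

rng = np.random.default_rng(1)
clean_latents = rng.standard_normal((150, 8))
adv_latents = 0.4 * clean_latents  # toy stand-in for topological "compression"

d = bottleneck(h1_diagram(clean_latents), h1_diagram(adv_latents))
print(f"bottleneck distance between clean and adversarial H1: {d:.4f}")
```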
- Global Convergence and Rich Feature Learning in $L$-Layer Infinite-Width Neural Networks under $\mu$P Parametrization [66.03821840425539]
In this paper, we investigate the training dynamics of $L$-layer neural networks using the tensor program (TP) framework.
We show that stochastic gradient descent (SGD) enables these networks to learn linearly independent features that substantially deviate from their initial values.
This rich feature space captures relevant data information and ensures that any convergent point of the training process is a global minimum.
arXiv Detail & Related papers (2025-03-12T17:33:13Z)
- Persistent Topological Features in Large Language Models [0.6597195879147556]
We introduce persistence similarity, a new metric that quantifies the persistence and transformation of topological features.
Unlike traditional similarity measures, our approach captures the entire evolutionary trajectory of these features.
As a practical application, we leverage persistence similarity to identify and prune redundant layers.
arXiv Detail & Related papers (2024-10-14T19:46:23Z)
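One way to make the layer-pruning idea concrete: flag layers whose persistence diagrams barely differ from the previous layer's. The bottleneck-distance proxy and threshold below are our stand-ins for the paper's persistence-similarity metric, not its definition.
```python
# Sketch: flag potentially redundant layers by comparing consecutive layers'
# H1 persistence diagrams. The bottleneck-distance proxy and threshold are
# our stand-ins for the paper's persistence-similarity metric.
import numpy as np
from ripser import ripser
from persim import bottleneck

def redundant_layers(layer_acts, threshold=0.05):
    dgms = [ripser(A, maxdim=1)["dgms"][1] for A in layer_acts]
    return [i for i in range(1, len(dgms))
            if bottleneck(dgms[i - 1], dgms[i]) < threshold]  # topology barely moves

# `layer_acts` would come from a forward pass that records each layer's output.
rng = np.random.default_rng(2)
A = rng.standard_normal((120, 6))
layer_acts = [A, A + 0.01 * rng.standard_normal(A.shape), A @ rng.standard_normal((6, 6))]
print("candidate redundant layers:", redundant_layers(layer_acts))
```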
- Going Beyond Linear Mode Connectivity: The Layerwise Linear Feature Connectivity [62.11981948274508]
The study of LLFC transcends and advances our understanding of LMC by adopting a feature-learning perspective.
We provide comprehensive empirical evidence for LLFC across a wide range of settings, demonstrating that whenever two trained networks satisfy LMC, they also satisfy LLFC in nearly all the layers.
arXiv Detail & Related papers (2023-07-17T07:16:28Z)
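LLFC can be probed numerically: interpolate the weights of two trained networks and check whether the interpolated network's features align with the interpolation of the endpoint features. The sketch below uses a random toy MLP and a cosine-alignment gap as an illustrative criterion; the paper's formal definition differs in detail.
```python
# Sketch: numerically probe Layerwise Linear Feature Connectivity (LLFC).
# Features of the weight-interpolated network are compared against the linear
# interpolation of the endpoint networks' features; the cosine-alignment gap
# is an illustrative criterion, not the paper's exact test.
import numpy as np

def features(Ws, X):
    for W in Ws:
        X = np.maximum(X @ W, 0.0)  # bias-free ReLU MLP for brevity
    return X

def llfc_gap(Ws_a, Ws_b, X, alpha=0.5):
    W_mid = [alpha * Wa + (1 - alpha) * Wb for Wa, Wb in zip(Ws_a, Ws_b)]
    f_mid = features(W_mid, X)
    f_mix = alpha * features(Ws_a, X) + (1 - alpha) * features(Ws_b, X)
    cos = np.sum(f_mid * f_mix) / (np.linalg.norm(f_mid) * np.linalg.norm(f_mix))
    return 1.0 - cos  # 0 means the two feature maps are perfectly aligned

rng = np.random.default_rng(3)
X = rng.standard_normal((64, 10))
Ws_a = [rng.standard_normal((10, 32)), rng.standard_normal((32, 16))]
Ws_b = [W + 0.01 * rng.standard_normal(W.shape) for W in Ws_a]  # toy "LMC pair"
print(f"LLFC gap at alpha=0.5: {llfc_gap(Ws_a, Ws_b, X):.4f}")
```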
- Beyond Multilayer Perceptrons: Investigating Complex Topologies in Neural Networks [0.12289361708127873]
We explore the impact of network topology on the approximation capabilities of artificial neural networks (ANNs).
We propose a novel methodology for constructing complex ANNs based on various topologies, including Barabási-Albert, Erdős-Rényi, Watts-Strogatz, and multilayer perceptrons (MLPs).
The constructed networks are evaluated on synthetic datasets generated from manifold learning generators, with varying levels of task difficulty and noise, and on real-world datasets from UCI.
arXiv Detail & Related papers (2023-03-31T09:48:16Z)
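The named random-graph topologies are all available in `networkx`. A hedged sketch of turning each into a sparse connectivity mask follows; how the paper actually maps graphs onto trainable ANNs is more involved than this masking illustration.
```python
# Sketch: generate the graph topologies named above with networkx and use
# each as a sparse connectivity mask over a weight matrix. This masking
# idea is only an illustration of graph-defined connectivity.
import networkx as nx
import numpy as np

n = 64  # number of neurons
rng = np.random.default_rng(0)
topologies = {
    "barabasi_albert": nx.barabasi_albert_graph(n, m=3, seed=0),
    "erdos_renyi": nx.erdos_renyi_graph(n, p=0.1, seed=0),
    "watts_strogatz": nx.watts_strogatz_graph(n, k=6, p=0.3, seed=0),
}

for name, g in topologies.items():
    mask = nx.to_numpy_array(g)  # 1 where an edge (trainable weight) is allowed
    W = rng.standard_normal((n, n)) * mask
    print(f"{name}: {g.number_of_edges()} edges, density {mask.mean():.3f}, "
          f"nonzero weights {np.count_nonzero(W)}")
```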
- Persistence-based operators in machine learning [62.997667081978825]
We introduce a class of persistence-based neural network layers.
Persistence-based layers allow the users to easily inject knowledge about symmetries respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
arXiv Detail & Related papers (2022-12-28T18:03:41Z)
- Simplicial Attention Networks [0.0]
We introduce a proper self-attention mechanism able to process data components at different layers.
We learn how to weight both upper and lower neighborhoods of the given topological domain in a totally task-oriented fashion.
The proposed approach compares favorably with other methods when applied to different (inductive and transductive) tasks.
arXiv Detail & Related papers (2022-03-14T20:47:31Z)
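The upper/lower neighbourhoods that such a mechanism weights come from boundary matrices of the simplicial complex. The sketch below builds both Laplacians for a toy complex and applies a plain convolution-style update, with fixed coefficients standing in for learned attention weights; it is a schematic of the neighbourhood structure, not the paper's attention mechanism.
```python
# Sketch: the upper/lower neighbourhood structure that simplicial attention
# weights. Toy complex: a filled triangle {0,1,2} plus a dangling edge {2,3}.
# Fixed coefficients a_dn/a_up stand in for learned attention weights; this
# is a schematic convolution-style update, NOT the paper's mechanism.
import numpy as np

# B1: vertex-by-edge boundary map; edges ordered (0,1), (0,2), (1,2), (2,3).
B1 = np.array([[-1, -1,  0,  0],
               [ 1,  0, -1,  0],
               [ 0,  1,  1, -1],
               [ 0,  0,  0,  1]], dtype=float)
# B2: edge-by-triangle boundary map for the single triangle (0,1,2).
B2 = np.array([[1.0], [-1.0], [1.0], [0.0]])
assert np.allclose(B1 @ B2, 0)  # boundary of a boundary vanishes

L_down = B1.T @ B1  # lower Laplacian: edges interacting through shared vertices
L_up = B2 @ B2.T    # upper Laplacian: edges interacting through shared triangles

def simplicial_layer(H, W_dn, W_up, a_dn=0.5, a_up=0.5):
    """One update on edge signals H, weighting lower/upper passes separately."""
    return np.tanh(a_dn * L_down @ H @ W_dn + a_up * L_up @ H @ W_up)

rng = np.random.default_rng(4)
H = rng.standard_normal((4, 3))  # one 3-d feature vector per edge
out = simplicial_layer(H, rng.standard_normal((3, 3)), rng.standard_normal((3, 3)))
print(out.shape)  # (4, 3)
```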
- Activation Landscapes as a Topological Summary of Neural Network Performance [0.0]
We study how data transforms as it passes through successive layers of a deep neural network (DNN).
We compute the persistent homology of the activation data for each layer of the network and summarize this information using persistence landscapes.
The resulting feature map provides both an informative visualization of the network and a kernel for statistical analysis and machine learning.
arXiv Detail & Related papers (2021-10-19T17:45:36Z)
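This layer-to-landscape pipeline is straightforward with `gudhi`. The sketch below uses an illustrative Rips filtration and landscape parameters, not the paper's exact settings, and a synthetic annulus as a stand-in for one layer's activations.
```python
# Sketch: persistence landscape of one layer's activation point cloud.
# Uses gudhi (pip install gudhi); the Rips filtration and landscape
# parameters are illustrative choices, not the paper's exact settings.
import numpy as np
import gudhi
from gudhi.representations import Landscape

# Stand-in for one layer's 2-d latent activations: a noisy annulus,
# which guarantees a prominent H1 feature.
rng = np.random.default_rng(5)
theta = rng.uniform(0, 2 * np.pi, 100)
r = rng.uniform(0.8, 1.2, 100)
acts = np.c_[r * np.cos(theta), r * np.sin(theta)]

rips = gudhi.RipsComplex(points=acts, max_edge_length=3.0)
st = rips.create_simplex_tree(max_dimension=2)
st.persistence()  # compute persistence pairs (return value unused here)
dgm_h1 = np.asarray(st.persistence_intervals_in_dimension(1))

# Vectorize into 3 piecewise-linear landscape functions on a 100-point grid.
landscape = Landscape(num_landscapes=3, resolution=100)
vec = landscape.fit_transform([dgm_h1])
print(vec.shape)  # (1, 300): one topological feature vector for this layer
```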
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Inter-layer Information Similarity Assessment of Deep Neural Networks Via Topological Similarity and Persistence Analysis of Data Neighbour Dynamics [93.4221402881609]
The quantitative analysis of information structure through a deep neural network (DNN) can unveil new insights into the theoretical performance of DNN architectures.
Inspired by both LS and ID strategies for quantitative information structure analysis, we introduce two novel complementary methods for inter-layer information similarity assessment.
We demonstrate their efficacy in this study by performing analysis on a deep convolutional neural network architecture on image data.
arXiv Detail & Related papers (2020-12-07T15:34:58Z)
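One simple reading of "data neighbour dynamics" is to measure how much each sample's k-nearest-neighbour set changes between layers. The Jaccard-overlap sketch below is our stand-in, not the paper's LS/ID-based estimators.
```python
# Sketch: track "data neighbour dynamics" across layers with a k-NN
# Jaccard overlap. This is a simple stand-in for the paper's LS/ID-style
# similarity measures, not their exact method.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_sets(X, k=10):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    idx = nn.kneighbors(X, return_distance=False)[:, 1:]  # drop each point itself
    return [set(row) for row in idx]

def neighbour_overlap(X_a, X_b, k=10):
    """Mean Jaccard overlap of each sample's k-NN set in two representations."""
    sets_a, sets_b = knn_sets(X_a, k), knn_sets(X_b, k)
    return float(np.mean([len(a & b) / len(a | b) for a, b in zip(sets_a, sets_b)]))

rng = np.random.default_rng(6)
layer1 = rng.standard_normal((200, 16))
layer2 = layer1 + 0.1 * (layer1 @ rng.standard_normal((16, 16)))  # mild mixing
print(f"mean k-NN overlap between layers: {neighbour_overlap(layer1, layer2):.3f}")
```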
- Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched prior based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)
- PLLay: Efficient Topological Layer based on Persistence Landscapes [24.222495922671442]
PLLay is a novel topological layer for general deep learning models based on persistence landscapes.
We show differentiability with respect to layer inputs, for a general persistent homology with arbitrary filtration.
arXiv Detail & Related papers (2020-02-07T13:34:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.