k* Distribution: Evaluating the Latent Space of Deep Neural Networks using Local Neighborhood Analysis
- URL: http://arxiv.org/abs/2312.04024v2
- Date: Sat, 17 Aug 2024 00:43:08 GMT
- Title: k* Distribution: Evaluating the Latent Space of Deep Neural Networks using Local Neighborhood Analysis
- Authors: Shashank Kotyan, Tatsuya Ueda, Danilo Vasconcellos Vargas
- Abstract summary: We introduce the k* distribution and its corresponding visualization technique.
This method uses local neighborhood analysis to guarantee the preservation of the structure of sample distributions.
Experiments show that the distribution of samples within the network's learned latent space significantly varies depending on the class.
- Score: 7.742297876120561
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most examinations of neural networks' learned latent spaces employ dimensionality reduction techniques such as t-SNE or UMAP. These methods distort the local neighborhood in the visualization, making it hard to distinguish the structure of a subset of samples in the latent space. In response to this challenge, we introduce the k* distribution and its corresponding visualization technique. This method uses local neighborhood analysis to guarantee the preservation of the structure of sample distributions for individual classes within the subset of the learned latent space. This facilitates easy comparison of different k* distributions, enabling analysis of how various classes are processed by the same neural network. Our study reveals three distinct distributions of samples within the learned latent space subset: a) Fractured, b) Overlapped, and c) Clustered, providing a more profound understanding of existing contemporary visualizations. Experiments show that the distribution of samples within the network's learned latent space varies significantly depending on the class. Furthermore, we illustrate that our analysis can be applied to explore the latent space of diverse neural network architectures, various layers within neural networks, transformations applied to input samples, and the distribution of training and testing data for neural networks. Thus, the k* distribution should aid in visualizing the structure inside neural networks and further foster their understanding. The project website is available at https://shashankkotyan.github.io/k-Distribution/.
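To make the statistic concrete, below is a minimal sketch of how such a local-neighborhood measure could be computed for a set of latent vectors and class labels. The function name, the reading of k* as the neighbor rank at which the first sample from a different class appears, and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def k_star_values(latents, labels):
    """For each sample, return the neighbor rank at which the first sample
    from a *different* class appears (an illustrative reading of the k*
    statistic, not the authors' implementation)."""
    n = len(latents)
    nn = NearestNeighbors(n_neighbors=n).fit(latents)
    _, idx = nn.kneighbors(latents)            # idx[:, 0] is the sample itself
    k_star = np.empty(n, dtype=int)
    for i in range(n):
        neighbor_labels = labels[idx[i, 1:]]   # neighbors sorted by distance, self excluded
        mismatches = np.nonzero(neighbor_labels != labels[i])[0]
        k_star[i] = mismatches[0] + 1 if mismatches.size else n - 1
    return k_star

# Toy usage: per-class k* distributions over a random "latent space"
latents = np.random.randn(200, 16)
labels = np.random.randint(0, 5, size=200)
k_star = k_star_values(latents, labels)
for c in np.unique(labels):
    print(f"class {c}: median k* = {np.median(k_star[labels == c]):.1f}")
```

Loosely, a class whose k* values are mostly large would look clustered, while a class with many small k* values would suggest the fractured or overlapped cases named in the abstract.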
Related papers
- Exploring the Manifold of Neural Networks Using Diffusion Geometry [7.038126249994092]
We learn a manifold in which the datapoints are neural networks by introducing a distance between the hidden-layer representations of the networks.
These distances are then fed to the non-linear dimensionality reduction algorithm PHATE to create a manifold of neural networks.
Our analysis reveals that high-performing networks cluster together in the manifold, displaying consistent embedding patterns.
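A rough sketch of the pipeline this summary describes, under the assumption that all networks' hidden-layer activations are collected on a shared probe set and compared with a plain Frobenius distance; MDS stands in for the PHATE embedding used in the paper, and all names are illustrative.

```python
import numpy as np
from sklearn.manifold import MDS

def representation_distance(h_a, h_b):
    """Distance between two networks' hidden representations of the same
    probe inputs (plain Frobenius norm; a placeholder choice)."""
    return np.linalg.norm(h_a - h_b)

def embed_networks(hidden_reps):
    """hidden_reps: list of (num_probes, dim) arrays, one per network.
    Builds a pairwise distance matrix and embeds each network as a 2-D
    point; MDS is used here in place of PHATE."""
    n = len(hidden_reps)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = representation_distance(hidden_reps[i], hidden_reps[j])
    return MDS(n_components=2, dissimilarity="precomputed").fit_transform(dist)

# Toy usage: five "networks", each summarized by a 100x32 activation matrix
coords = embed_networks([np.random.randn(100, 32) for _ in range(5)])
print(coords.shape)  # (5, 2)
```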
arXiv Detail & Related papers (2024-11-19T16:34:45Z)
- Linking in Style: Understanding learned features in deep learning models [0.0]
Convolutional neural networks (CNNs) learn abstract features to perform object classification.
We propose an automatic method to visualize and systematically analyze learned features in CNNs.
arXiv Detail & Related papers (2024-09-25T12:28:48Z)
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
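As a rough illustration of replacing message passing with a comparison of local feature distributions, the sketch below builds a histogram over each node's neighborhood features and scores it against a set of reference histograms with the classic histogram-intersection kernel. The scalar features, the bin choice, and the randomly initialized reference histograms are simplifications for illustration, not the GNN-LoFI architecture itself.

```python
import numpy as np

def node_feature_histogram(features, neighborhood, bins):
    """Normalized histogram of the feature values in a node's neighborhood
    (1-D features for simplicity)."""
    hist, _ = np.histogram(features[neighborhood], bins=bins)
    return hist / (hist.sum() + 1e-12)

def histogram_intersection(h, g):
    """Classic histogram-intersection similarity: sum of bin-wise minima."""
    return np.minimum(h, g).sum()

# Toy graph: adjacency list, one scalar feature per node, and a few
# "reference" histograms standing in for learned parameters.
adjacency = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
features = np.array([0.1, 0.4, 0.35, 0.9])
bins = np.linspace(0.0, 1.0, 6)                       # 5 bins over [0, 1]
reference = np.random.dirichlet(np.ones(5), size=3)   # 3 output channels

# Each node's new representation: its neighborhood histogram compared
# against every reference histogram (this comparison replaces message passing).
new_repr = np.array([
    [histogram_intersection(node_feature_histogram(features, adjacency[v] + [v], bins), ref)
     for ref in reference]
    for v in adjacency
])
print(new_repr.shape)  # (4, 3)
```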
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- Variation Spaces for Multi-Output Neural Networks: Insights on Multi-Task Learning and Network Compression [28.851519959657466]
This paper introduces a novel theoretical framework for the analysis of vector-valued neural networks.
A key contribution of this work is the development of a representer theorem for the vector-valued variation spaces.
This observation reveals that the norm associated with these vector-valued variation spaces encourages the learning of features that are useful for multiple tasks.
arXiv Detail & Related papers (2023-05-25T23:32:10Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- Discretization Invariant Networks for Learning Maps between Neural Fields [3.09125960098955]
We present a new framework for understanding and designing discretization invariant neural networks (DI-Nets).
Our analysis establishes upper bounds on the deviation in model outputs under different finite discretizations.
We prove by construction that DI-Nets universally approximate a large class of maps between integrable function spaces.
arXiv Detail & Related papers (2022-06-02T17:44:03Z)
- Decomposing neural networks as mappings of correlation functions [57.52754806616669]
We study the mapping between probability distributions implemented by a deep feed-forward network.
We identify essential statistics in the data, as well as different information representations that can be used by neural networks.
arXiv Detail & Related papers (2022-02-10T09:30:31Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Deep Archimedean Copulas [98.96141706464425]
ACNet is a novel differentiable neural network architecture that enforces the structural properties of Archimedean copulas.
We show that ACNet is able to both approximate common Archimedean Copulas and generate new copulas which may provide better fits to data.
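For reference, the Archimedean structure that ACNet is built around: a d-dimensional copula defined by a single generator, shown below with the standard closed-form Clayton generator. ACNet itself replaces the generator with a constrained neural network, which is not reproduced here; this is only the non-neural baseline form.

```python
import numpy as np

def clayton_generator(t, theta):
    """Archimedean generator psi(t) = (1 + t)^(-1/theta) for the Clayton family."""
    return (1.0 + t) ** (-1.0 / theta)

def clayton_generator_inverse(u, theta):
    """psi^{-1}(u) = u^(-theta) - 1."""
    return u ** (-theta) - 1.0

def archimedean_copula(u, theta=2.0):
    """C(u_1, ..., u_d) = psi(psi^{-1}(u_1) + ... + psi^{-1}(u_d))."""
    u = np.asarray(u, dtype=float)
    return clayton_generator(np.sum(clayton_generator_inverse(u, theta), axis=-1), theta)

# Joint CDF value of the Clayton copula at (0.3, 0.7) with theta = 2
print(archimedean_copula([0.3, 0.7]))
```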
arXiv Detail & Related papers (2020-12-05T22:58:37Z)
- Multi-Subspace Neural Network for Image Recognition [33.61205842747625]
In image classification tasks, feature extraction is always a key issue. Intra-class variability increases the difficulty of designing feature extractors.
Recently, deep learning has drawn much attention for automatically learning features from data.
In this study, we propose a multi-subspace neural network (MSNN) that integrates a key component of convolutional neural networks (CNNs), the receptive field, with the subspace concept.
arXiv Detail & Related papers (2020-06-17T02:55:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.