Capacity Studies for a Differential Growing Neural Gas
- URL: http://arxiv.org/abs/2212.12319v1
- Date: Fri, 23 Dec 2022 13:19:48 GMT
- Title: Capacity Studies for a Differential Growing Neural Gas
- Authors: P. Levi, P. Gelhausen, G. Peters
- Abstract summary: This study evaluates the capacity of a two-layered DGNG grid cell model on the Fashion-MNIST dataset.
It is concluded that the DGNG model is able to obtain a meaningful and plausible representation of the input space.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In 2019 Kerdels and Peters proposed a grid cell model (GCM) based on a
Differential Growing Neural Gas (DGNG) network architecture as a
computationally efficient way to model an Autoassociative Memory Cell (AMC)
\cite{Kerdels_Peters_2019}. An important feature of the DGNG architecture with
respect to possible applications in the field of computational neuroscience is
its \textit{capacity}, referring to its capability to process and uniquely
distinguish input signals and therefore obtain a valid representation of the
input space. This study evaluates the capacity of a two-layered DGNG grid cell
model on the Fashion-MNIST dataset. The focus of the study lies on the
variation of layer sizes to improve the understanding of capacity properties in
relation to network parameters as well as its scaling properties. Additionally,
parameter discussions and a plausibility check with a pixel/segment variation
method are provided. It is concluded that the DGNG model is able to obtain a
meaningful and plausible representation of the input space and to cope with the
complexity of the Fashion-MNIST dataset even at moderate layer sizes.
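The notion of capacity used here, the ability to assign distinguishable activation codes to distinct inputs, can be made concrete with a small sketch. The following Python snippet is a minimal illustration and not the DGNG implementation of Kerdels and Peters: it adapts a single fixed-size GNG-style prototype layer on stand-in data and estimates capacity as the fraction of inputs that receive a unique (winner, runner-up) code. The class name, learning rates, layer size, and the synthetic stand-in for flattened Fashion-MNIST images are all assumptions made for illustration only.

```python
# Illustrative sketch only: a plain GNG-style prototype layer and a crude
# capacity estimate (fraction of inputs mapped to a unique best-matching-unit
# pair). This is NOT the DGNG model of Kerdels & Peters; names and parameters
# are assumptions for illustration.
import numpy as np


class SimpleGNGLayer:
    """Fixed-size prototype layer adapted with GNG-style winner updates."""

    def __init__(self, n_units, dim, lr_winner=0.05, lr_neighbor=0.005, seed=0):
        rng = np.random.default_rng(seed)
        self.prototypes = rng.uniform(0.0, 1.0, size=(n_units, dim))
        self.lr_winner = lr_winner
        self.lr_neighbor = lr_neighbor

    def best_two(self, x):
        # Indices of the closest and second-closest prototypes to input x.
        d = np.linalg.norm(self.prototypes - x, axis=1)
        order = np.argsort(d)
        return int(order[0]), int(order[1])

    def adapt(self, x):
        # Move the winner strongly and the runner-up weakly toward the input.
        b1, b2 = self.best_two(x)
        self.prototypes[b1] += self.lr_winner * (x - self.prototypes[b1])
        self.prototypes[b2] += self.lr_neighbor * (x - self.prototypes[b2])
        return b1, b2


def capacity_estimate(layer, data):
    """Fraction of inputs that receive a unique (winner, runner-up) code."""
    codes = [layer.best_two(x) for x in data]
    return len(set(codes)) / len(data)


if __name__ == "__main__":
    # Stand-in for flattened, normalised Fashion-MNIST images (784-dimensional).
    rng = np.random.default_rng(1)
    data = rng.uniform(0.0, 1.0, size=(2000, 784))

    layer = SimpleGNGLayer(n_units=64, dim=784)
    for _ in range(3):
        for x in data:
            layer.adapt(x)

    print(f"estimated capacity: {capacity_estimate(layer, data):.3f}")
```

In this toy setup, varying `n_units` plays the role of the layer-size variation discussed in the abstract: larger layers admit more distinct codes, at the cost of more prototypes to adapt.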
Related papers
- Instruction-Guided Autoregressive Neural Network Parameter Generation [49.800239140036496]
We propose IGPG, an autoregressive framework that unifies parameter synthesis across diverse tasks and architectures.
By autoregressively generating neural network weights' tokens, IGPG ensures inter-layer coherence and enables efficient adaptation across models and datasets.
Experiments on multiple datasets demonstrate that IGPG consolidates diverse pretrained models into a single, flexible generative framework.
arXiv Detail & Related papers (2025-04-02T05:50:19Z) - Rethinking Graph Transformer Architecture Design for Node Classification [4.497245600377944]
Graph Transformer (GT) is a special type of Graph Neural Networks (GNNs) that utilize multi-head attention to facilitate high-order message passing.
In this work, we conduct observational experiments to explore the adaptability of the GT architecture in node classification tasks.
Our proposed GT architecture can effectively adapt to node classification tasks without being affected by global noise and computational efficiency limitations.
arXiv Detail & Related papers (2024-10-15T02:08:16Z) - Solution space and storage capacity of fully connected two-layer neural networks with generic activation functions [0.552480439325792]
The storage capacity of a binary classification model is the maximum number of random input-output pairs per parameter that the model can learn.
We analyze the structure of the solution space and the storage capacity of fully connected two-layer neural networks with general activation functions.
arXiv Detail & Related papers (2024-04-20T15:12:47Z) - Interpretable A-posteriori Error Indication for Graph Neural Network Surrogate Models [0.0]
This work introduces an interpretability enhancement procedure for graph neural networks (GNNs).
The end result is an interpretable GNN model that isolates regions in physical space, corresponding to sub-graphs, that are intrinsically linked to the forecasting task.
The interpretable GNNs can also be used to identify, during inference, graph nodes that correspond to a majority of the anticipated forecasting error.
arXiv Detail & Related papers (2023-11-13T18:37:07Z) - MuseGNN: Interpretable and Convergent Graph Neural Network Layers at Scale [15.93424606182961]
We propose a sampling-based energy function and scalable GNN layers that iteratively reduce it, guided by convergence guarantees in certain settings.
We also instantiate a full GNN architecture based on these designs, and the model achieves competitive accuracy and scalability when applied to the largest publicly-available node classification benchmark exceeding 1TB in size.
arXiv Detail & Related papers (2023-10-19T04:30:14Z) - T1: Scaling Diffusion Probabilistic Fields to High-Resolution on Unified Visual Modalities [69.16656086708291]
Diffusion Probabilistic Field (DPF) models the distribution of continuous functions defined over metric spaces.
We propose a new model comprising a view-wise sampling algorithm to focus on local structure learning.
The model can be scaled to generate high-resolution data while unifying multiple modalities.
arXiv Detail & Related papers (2023-05-24T03:32:03Z) - A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
DCGMMs can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z) - Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text
Generation [56.73834525802723]
Lightweight Dynamic Graph Convolutional Networks (LDGCNs) are proposed.
LDGCNs capture richer non-local interactions by synthesizing higher order information from the input graphs.
We develop two novel parameter saving strategies based on the group graph convolutions and weight tied convolutions to reduce memory usage and model complexity.
arXiv Detail & Related papers (2020-10-09T06:03:46Z) - A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z) - Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z) - Generalising Recursive Neural Models by Tensor Decomposition [12.069862650316262]
We introduce a general approach to model aggregation of structural context leveraging a tensor-based formulation.
We show how the exponential growth in the size of the parameter space can be controlled through an approximation based on the Tucker decomposition.
By this means, we can effectively regulate the trade-off between expressivity of the encoding, controlled by the hidden size, computational complexity and model generalisation.
arXiv Detail & Related papers (2020-06-17T17:28:19Z) - Deep Autoencoding Topic Model with Scalable Hybrid Bayesian Inference [55.35176938713946]
We develop deep autoencoding topic model (DATM) that uses a hierarchy of gamma distributions to construct its multi-stochastic-layer generative network.
We propose a Weibull upward-downward variational encoder that deterministically propagates information upward via a deep neural network, followed by a downward generative model.
The efficacy and scalability of our models are demonstrated on both unsupervised and supervised learning tasks on big corpora.
arXiv Detail & Related papers (2020-06-15T22:22:56Z)