Beyond Multilayer Perceptrons: Investigating Complex Topologies in
Neural Networks
- URL: http://arxiv.org/abs/2303.17925v2
- Date: Mon, 23 Oct 2023 09:27:40 GMT
- Title: Beyond Multilayer Perceptrons: Investigating Complex Topologies in
Neural Networks
- Authors: Tommaso Boccato, Matteo Ferrante, Andrea Duggento, Nicola Toschi
- Abstract summary: We explore the impact of network topology on the approximation capabilities of artificial neural networks (ANNs).
We propose a novel methodology for constructing complex ANNs based on various topologies, including Barabási-Albert, Erdős-Rényi, Watts-Strogatz, and multilayer perceptrons (MLPs).
The constructed networks are evaluated on synthetic datasets generated from manifold learning generators, with varying levels of task difficulty and noise, and on real-world datasets from UCI.
- Score: 0.12289361708127873
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this study, we explore the impact of network topology on the approximation
capabilities of artificial neural networks (ANNs), with a particular focus on
complex topologies. We propose a novel methodology for constructing complex
ANNs based on various topologies, including Barabási-Albert, Erdős-Rényi,
Watts-Strogatz, and multilayer perceptrons (MLPs). The
constructed networks are evaluated on synthetic datasets generated from
manifold learning generators, with varying levels of task difficulty and noise,
and on real-world datasets from the UCI suite. Our findings reveal that complex
topologies lead to superior performance in high-difficulty regimes compared to
traditional MLPs. This performance advantage is attributed to the ability of
complex networks to exploit the compositionality of the underlying target
function. However, this benefit comes at the cost of increased forward-pass
computation time and reduced robustness to graph damage. Additionally, we
investigate the relationship between various topological attributes and model
performance. Our analysis shows that no single attribute can account for the
observed performance differences, suggesting that the influence of network
topology on approximation capabilities may be more intricate than a simple
correlation with individual topological attributes. Our study sheds light on
the potential of complex topologies for enhancing the performance of ANNs and
provides a foundation for future research exploring the interplay between
multiple topological attributes and their impact on model performance.
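The construction methodology described in the abstract starts from standard random-graph models. As a minimal stdlib-only sketch (our own illustrative code, not the authors' implementation), an Erdős-Rényi graph of the kind the paper wires into an ANN can be generated as an adjacency list:

```python
# Illustrative sketch (not the authors' code): build an undirected
# Erdos-Renyi G(n, p) random graph as an adjacency list, one of the
# complex topologies the paper evaluates as an ANN wiring pattern.
import random

def erdos_renyi(n, p, seed=0):
    """Undirected G(n, p): each possible edge is kept with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

g = erdos_renyi(8, 0.5)
# each undirected edge appears in two adjacency sets, so divide by two
num_edges = sum(len(nbrs) for nbrs in g.values()) // 2
```

The same pattern extends to Barabási-Albert (preferential attachment) and Watts-Strogatz (ring rewiring) generators; in practice a library such as NetworkX provides all three.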
Related papers
- Generalized Factor Neural Network Model for High-dimensional Regression [50.554377879576066]
We tackle the challenges of modeling high-dimensional data sets with latent low-dimensional structures hidden within complex, non-linear, and noisy relationships.
Our approach enables a seamless integration of concepts from non-parametric regression, factor models, and neural networks for high-dimensional regression.
arXiv Detail & Related papers (2025-02-16T23:13:55Z)
- Enhancing Non-Intrusive Load Monitoring with Features Extracted by Independent Component Analysis [0.0]
A novel neural network architecture is proposed to address the challenges in energy disaggregation algorithms.
Our results demonstrate that the model is less prone to overfitting, exhibits low complexity, and effectively decomposes signals with many individual components.
arXiv Detail & Related papers (2025-01-28T09:45:06Z)
- Hyperbolic Benchmarking Unveils Network Topology-Feature Relationship in GNN Performance [0.5416466085090772]
We introduce a comprehensive benchmarking framework for graph machine learning.
We generate synthetic networks with realistic topological properties and node feature vectors.
Results highlight the dependency of model performance on the interplay between network structure and node features.
arXiv Detail & Related papers (2024-06-04T20:40:06Z)
- Defining Neural Network Architecture through Polytope Structures of Dataset [53.512432492636236]
This paper defines upper and lower bounds for neural network widths, which are informed by the polytope structure of the dataset in question.
We develop an algorithm to investigate a converse situation where the polytope structure of a dataset can be inferred from its corresponding trained neural networks.
It is established that popular datasets such as MNIST, Fashion-MNIST, and CIFAR10 can be efficiently encapsulated using no more than two polytopes with a small number of faces.
arXiv Detail & Related papers (2024-02-04T08:57:42Z)
- Homological Neural Networks: A Sparse Architecture for Multivariate Complexity [0.0]
We develop a novel deep neural network unit characterized by a sparse higher-order graphical architecture built over the homological structure of underlying data.
Results demonstrate the advantages of this novel design which can tie or overcome the results of state-of-the-art machine learning and deep learning models using only a fraction of parameters.
arXiv Detail & Related papers (2023-06-27T09:46:16Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Activation Landscapes as a Topological Summary of Neural Network Performance [0.0]
We study how data transforms as it passes through successive layers of a deep neural network (DNN).
We compute the persistent homology of the activation data for each layer of the network and summarize this information using persistence landscapes.
The resulting feature map provides both an informative visualization of the network and a kernel for statistical analysis and machine learning.
arXiv Detail & Related papers (2021-10-19T17:45:36Z)
- Soft Hierarchical Graph Recurrent Networks for Many-Agent Partially Observable Environments [9.067091068256747]
We propose a novel network structure called the hierarchical graph recurrent network (HGRN) for multi-agent cooperation under partial observability.
Building on these techniques, we propose a value-based MADRL algorithm called Soft-HGRN and its actor-critic variant, SAC-HGRN.
arXiv Detail & Related papers (2021-09-05T09:51:25Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Inter-layer Information Similarity Assessment of Deep Neural Networks Via Topological Similarity and Persistence Analysis of Data Neighbour Dynamics [93.4221402881609]
The quantitative analysis of information structure through a deep neural network (DNN) can unveil new insights into the theoretical performance of DNN architectures.
Inspired by both LS and ID strategies for quantitative information structure analysis, we introduce two novel complementary methods for inter-layer information similarity assessment.
We demonstrate their efficacy in this study by performing analysis on a deep convolutional neural network architecture on image data.
arXiv Detail & Related papers (2020-12-07T15:34:58Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
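The last summary describes representing an architecture as a complete graph with a learnable weight per edge. A hypothetical sketch of that idea (our own illustrative code, not the paper's; all names are ours) in plain Python:

```python
# Hypothetical sketch: an architecture viewed as a complete graph with
# one weight per directed edge. A forward pass aggregates node features
# scaled by those edge weights, so the weights themselves can in
# principle be learned by gradient descent.
def forward(features, edge_weight):
    """features: one float per node; edge_weight[(i, j)]: strength of i -> j."""
    n = len(features)
    # every node receives a weighted sum from all other nodes (complete graph)
    return [
        sum(edge_weight[(i, j)] * features[i] for i in range(n) if i != j)
        for j in range(n)
    ]

feats = [1.0, 2.0, 3.0]
# uniform illustrative weights; a real model would train these
weights = {(i, j): 0.5 for i in range(3) for j in range(3) if i != j}
out = forward(feats, weights)  # each output is half the sum of the other two features
```

Because the output is a smooth function of `edge_weight`, the connection magnitudes can be optimized end-to-end, which is the differentiability property the summary refers to.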
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.