A Local Optima Network Analysis of the Feedforward Neural Architecture
Space
- URL: http://arxiv.org/abs/2206.06903v1
- Date: Thu, 2 Jun 2022 08:09:17 GMT
- Title: A Local Optima Network Analysis of the Feedforward Neural Architecture
Space
- Authors: Isak Potgieter, Christopher W. Cleghorn and Anna S. Bosman
- Abstract summary: Local optima network (LON) analysis is a derivative of the fitness landscape of candidate solutions.
LONs may provide a viable paradigm by which to analyse and optimise neural architectures.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This study investigates the use of local optima network (LON) analysis, a
derivative of the fitness landscape of candidate solutions, to characterise and
visualise the neural architecture space. The search space of feedforward neural
network architectures with up to three layers, each with up to 10 neurons, is
fully enumerated by evaluating trained model performance on a selection of data
sets. Extracted LONs, while heterogeneous across data sets, all exhibit simple
global structures, with single global funnels in all cases but one. These
results yield an early indication that LONs may provide a viable paradigm by
to analyse and optimise neural architectures.
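To make the method concrete, here is a minimal sketch of enumerating the search space described above and extracting a toy LON from it. This is a sketch under stated assumptions, not the authors' pipeline: training on real data sets is replaced by a synthetic `fitness` function, the neighbourhood is restricted to single-neuron moves at fixed depth, and simplified basin-adjacency edges stand in for the escape edges commonly used in LON studies.

```python
# Toy LON extraction over the feedforward architecture space of the paper:
# up to three hidden layers, each with up to 10 neurons (1110 architectures).
from itertools import product
import math

MAX_LAYERS, MAX_NEURONS = 3, 10

def architectures():
    """Enumerate all 10 + 10^2 + 10^3 = 1110 hidden-layer configurations."""
    for depth in range(1, MAX_LAYERS + 1):
        yield from product(range(1, MAX_NEURONS + 1), repeat=depth)

def fitness(arch):
    """Hypothetical, deterministic proxy for trained-model validation
    accuracy (NOT the paper's metric)."""
    return sum(math.sin(0.7 * n + i) for i, n in enumerate(arch)) / len(arch)

def neighbours(arch):
    """Architectures reachable by adding or removing one neuron in one layer."""
    for i, n in enumerate(arch):
        if n < MAX_NEURONS:
            yield arch[:i] + (n + 1,) + arch[i + 1:]
        if n > 1:
            yield arch[:i] + (n - 1,) + arch[i + 1:]

def hill_climb(arch):
    """Best-improvement local search; returns the local optimum of arch's basin."""
    while True:
        best = max(neighbours(arch), key=fitness)
        if fitness(best) <= fitness(arch):
            return arch
        arch = best

# LON nodes are local optima; an edge joins two optima whose basins touch,
# i.e. one neighbourhood move crosses from one basin into the other.
basin = {a: hill_climb(a) for a in architectures()}
nodes = set(basin.values())
edges = {(basin[a], basin[b]) for a in basin for b in neighbours(a)
         if basin[a] != basin[b]}
print(f"{len(basin)} architectures, {len(nodes)} local optima, {len(edges)} edges")
```

Swapping `fitness` for the validation accuracy of an actually trained model, and enlarging the neighbourhood (e.g. with layer insertion and removal), would move this toy toward the full analysis the abstract describes.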
Related papers
- A General-Purpose Transferable Predictor for Neural Architecture Search [22.883809911265445]
We propose a general-purpose neural predictor for Neural Architecture Search (NAS) that can transfer across search spaces.
Experimental results on NAS-Bench-101, 201 and 301 demonstrate the efficacy of our scheme.
arXiv Detail & Related papers (2023-02-21T17:28:05Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Differentiable Neural Architecture Learning for Efficient Neural Network Design [31.23038136038325]
We introduce a novel architecture parameterisation based on the scaled sigmoid function.
We then propose a general Differentiable Neural Architecture Learning (DNAL) method to optimize the neural architecture without the need to evaluate candidate neural networks (a toy sketch of the scaled-sigmoid gate appears after this list).
arXiv Detail & Related papers (2021-03-03T02:03:08Z)
- NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis [53.106414896248246]
We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applying this technique in an iterative manner allows analysts to converge to the best performing neural network architecture for a given application.
arXiv Detail & Related papers (2020-09-28T01:48:45Z)
- Exploiting Heterogeneity in Operational Neural Networks by Synaptic Plasticity [87.32169414230822]
The recently proposed network model, Operational Neural Networks (ONNs), can generalize conventional Convolutional Neural Networks (CNNs).
This study focuses on searching for the best-possible operator set(s) for the hidden neurons of the network, based on the Synaptic Plasticity paradigm that constitutes the essential learning theory in biological neurons.
Experimental results over highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, can achieve learning performance superior to that of GIS-based ONNs.
arXiv Detail & Related papers (2020-08-21T19:03:23Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- DC-NAS: Divide-and-Conquer Neural Architecture Search [108.57785531758076]
We present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.
We achieve a 75.1% top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space.
arXiv Detail & Related papers (2020-05-29T09:02:16Z)
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)
- A Neural Architecture Search based Framework for Liquid State Machine Design [7.729541832738546]
The Liquid State Machine (LSM) is a recurrent version of Spiking Neural Networks (SNNs).
Recent works have demonstrated great potential for improving the accuracy of the LSM model with low complexity.
Considering the diversity and uniqueness of brain structure, the design of the LSM model should be explored in the largest search space possible.
arXiv Detail & Related papers (2020-04-07T10:55:05Z)
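As noted in the Differentiable Neural Architecture Learning (DNAL) entry above, here is a toy illustration of a scaled-sigmoid gate. This is an assumption for illustration, not DNAL's actual parameterisation: it only shows why scaling a sigmoid lets a continuous, differentiable architecture parameter harden into a near-binary keep/prune decision.

```python
import math

def scaled_sigmoid(x, beta):
    """sigma(beta * x): a soft gate that saturates towards 0/1 as beta grows."""
    return 1.0 / (1.0 + math.exp(-beta * x))

# Small beta keeps the gate smooth (and the search differentiable); annealing
# beta upward pushes each architecture parameter x toward a hard decision.
for beta in (1.0, 10.0, 100.0):
    print(f"beta={beta:6.1f}:",
          [round(scaled_sigmoid(x, beta), 3) for x in (-0.5, 0.0, 0.5)])
```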
This list is automatically generated from the titles and abstracts of the papers on this site.