New methods for metastimuli: architecture, embeddings, and neural
network optimization
- URL: http://arxiv.org/abs/2102.07090v1
- Date: Sun, 14 Feb 2021 07:28:40 GMT
- Title: New methods for metastimuli: architecture, embeddings, and neural
network optimization
- Authors: Rico A.R. Picone, Dane Webb, Finbarr Obierefu, Jotham Lentz
- Abstract summary: Six significant new methodological developments of the previously-presented "metastimuli architecture" for human learning are presented.
These include architectural innovation, recurrent (RNN) artificial neural network (ANN) application, a variety of atom embedding techniques, and hyper- and meta-parameter optimization.
A technique for using the system for automatic atom categorization in a user's PIMS is outlined.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Six significant new methodological developments of the
previously-presented "metastimuli architecture" for human learning are
presented. The architecture applies machine learning of spatially correlated
structural position within a user's personal information management system
(PIMS), providing the basis for haptic metastimuli. The developments include
architectural innovation, recurrent (RNN) artificial neural network (ANN)
application, a variety of atom embedding techniques (including a novel
technique we call "nabla" embedding, inspired by linguistics), ANN
hyper-parameter optimization (a hyper-parameter being one that affects the
network but is not trained, e.g. the learning rate), and meta-parameter
optimization (a meta-parameter being one that determines system performance
but is neither trained nor a hyper-parameter, e.g. the atom embedding
technique) for exploring the large design space. A technique for using the
system for automatic atom categorization in a user's PIMS is outlined. ANN
training and hyper- and meta-parameter optimization results are presented and
discussed in service of methodological recommendations.
Related papers
- Direct Training High-Performance Deep Spiking Neural Networks: A Review of Theories and Methods [33.377770671553336]
Spiking neural networks (SNNs) offer a promising energy-efficient alternative to artificial neural networks (ANNs).
In this paper, we provide a new perspective to summarize the theories and methods for training deep SNNs with high performance.
arXiv Detail & Related papers (2024-05-06T09:58:54Z)
- Principled Architecture-aware Scaling of Hyperparameters [69.98414153320894]
Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process.
In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture.
We demonstrate that network rankings in benchmarks can easily change when the networks are better trained.
arXiv Detail & Related papers (2024-02-27T11:52:49Z)
- SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z)
- Reparameterization through Spatial Gradient Scaling [69.27487006953852]
Reparameterization aims to improve the generalization of deep neural networks by transforming convolutional layers into equivalent multi-branched structures during training.
We present a novel spatial gradient scaling method to redistribute learning focus among weights in convolutional networks (see the sketch after this list).
arXiv Detail & Related papers (2023-03-05T17:57:33Z)
- Learning Large-scale Neural Fields via Context Pruned Meta-Learning [60.93679437452872]
We introduce an efficient optimization-based meta-learning technique for large-scale neural field training.
We show how gradient re-scaling at meta-test time allows the learning of extremely high-quality neural fields.
Our framework is model-agnostic, intuitive, straightforward to implement, and shows significant reconstruction improvements for a wide range of signals.
arXiv Detail & Related papers (2023-02-01T17:32:16Z)
- Re-visiting Reservoir Computing architectures optimized by Evolutionary Algorithms [0.0]
Evolutionary Algorithms (EAs) have been applied to improve Neural Network (NN) architectures.
We provide a brief systematic survey of applications of EAs to the specific class of recurrent NNs known as Reservoir Computing (RC).
arXiv Detail & Related papers (2022-11-11T14:50:54Z)
- Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under norm constraint.
Generalized from the sample-wise analysis to the real batch setting, the resulting Neural Initialization Optimization (NIO) algorithm automatically searches for a better initialization with negligible cost (see the sketch after this list).
arXiv Detail & Related papers (2022-10-12T06:49:16Z)
- Neural Architecture Search for Speech Emotion Recognition [72.1966266171951]
We propose to apply neural architecture search (NAS) techniques to automatically configure the SER models.
We show that NAS can improve SER performance (54.89% to 56.28%) while maintaining model parameter sizes.
arXiv Detail & Related papers (2022-03-31T10:16:10Z)
- GradMax: Growing Neural Networks using Gradient Information [22.986063120002353]
We present a method that adds new neurons during training without impacting what is already learned, while improving the training dynamics.
We call this technique Gradient Maximizing Growth (GradMax) and demonstrate its effectiveness in a variety of vision tasks and architectures (see the sketch after this list).
arXiv Detail & Related papers (2022-01-13T18:30:18Z)
- Differentiable Neural Architecture Learning for Efficient Neural Network Design [31.23038136038325]
We introduce a novel architecture parameterisation based on a scaled sigmoid function.
We then propose a general Differentiable Neural Architecture Learning (DNAL) method to optimize the neural architecture without the need to evaluate candidate neural networks (see the sketch after this list).
arXiv Detail & Related papers (2021-03-03T02:03:08Z)
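For the "Reparameterization through Spatial Gradient Scaling" entry above, the summary describes redistributing learning focus among convolutional weights. A minimal sketch of that general idea, assuming PyTorch and a hand-picked per-position scaling map (the paper's actual scaling is not reproduced):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)

# Illustrative per-spatial-position scale for the 3x3 kernel: emphasize the
# centre tap, de-emphasize the corners (the paper derives its own scaling).
scale = torch.tensor([[0.5, 1.0, 0.5],
                      [1.0, 2.0, 1.0],
                      [0.5, 1.0, 0.5]])

# Rescale the weight gradient spatially during the backward pass.
conv.weight.register_hook(lambda grad: grad * scale)

x = torch.randn(2, 3, 8, 8)
loss = conv(x).pow(2).mean()
loss.backward()   # conv.weight.grad is now spatially rescaled
```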
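For "Towards Theoretically Inspired Neural Initialization Optimization", the summary names GradCosine as a differentiable quantity for evaluating an initialization; consistent with the "sample-wise analysis" mentioned there, it can be read as a cosine-similarity measure over per-sample gradients. The sketch below computes such a quantity for a toy model, assuming PyTorch; the paper's exact definition, norm constraint, and batch generalization are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
xs, ys = torch.randn(8, 10), torch.randint(0, 2, (8,))


def flat_grad(x, y):
    """Gradient of the loss on a single sample, flattened into one vector."""
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, tuple(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])


# Average pairwise cosine similarity of per-sample gradients at initialization;
# a rough stand-in for a GradCosine-style criterion (illustrative only).
gs = torch.stack([flat_grad(x, y) for x, y in zip(xs, ys)])
gs = F.normalize(gs, dim=1)
pairwise = gs @ gs.T
n = gs.shape[0]
grad_cosine = (pairwise.sum() - n) / (n * (n - 1))
print("average pairwise gradient cosine:", grad_cosine.item())
```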
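For "GradMax: Growing Neural Networks using Gradient Information", the summary states that new neurons are added without impacting what is already learned. A minimal sketch of the function-preserving part of such growth, assuming PyTorch and random incoming weights (GradMax itself chooses the new weights to maximize gradient norm, which is not reproduced here):

```python
import torch
import torch.nn as nn


def grow_hidden_layer(fc1: nn.Linear, fc2: nn.Linear, n_new: int):
    """Add `n_new` hidden units between fc1 and fc2 without changing the
    network's function: the new units' outgoing weights are zero, so they
    contribute nothing until training updates them. (GradMax additionally
    picks the incoming weights to maximize gradient norm; here they are
    random for illustration.)"""
    in_f, hidden, out_f = fc1.in_features, fc1.out_features, fc2.out_features

    new_fc1 = nn.Linear(in_f, hidden + n_new)
    new_fc2 = nn.Linear(hidden + n_new, out_f)
    with torch.no_grad():
        # Copy existing weights and biases.
        new_fc1.weight[:hidden] = fc1.weight
        new_fc1.bias[:hidden] = fc1.bias
        new_fc2.weight[:, :hidden] = fc2.weight
        new_fc2.bias.copy_(fc2.bias)
        # New units: random incoming weights, zero outgoing weights.
        new_fc1.weight[hidden:].normal_(0.0, 0.01)
        new_fc1.bias[hidden:].zero_()
        new_fc2.weight[:, hidden:].zero_()
    return new_fc1, new_fc2


fc1, fc2 = nn.Linear(10, 16), nn.Linear(16, 4)
x = torch.randn(3, 10)
before = fc2(torch.relu(fc1(x)))
fc1, fc2 = grow_hidden_layer(fc1, fc2, n_new=8)
after = fc2(torch.relu(fc1(x)))
assert torch.allclose(before, after, atol=1e-6)   # function unchanged
```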
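For "Differentiable Neural Architecture Learning" (DNAL), the summary mentions an architecture parameterisation based on a scaled sigmoid function. One reading of that idea is a channel-level gate whose sigmoid scale is gradually increased so the gates approach binary keep/prune decisions while staying differentiable; the sketch below, assuming PyTorch, illustrates that interpretation rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn


class ScaledSigmoidGate(nn.Module):
    """Gate each channel with sigmoid(scale * alpha); as `scale` grows during
    training the gates approach 0/1, approximating a discrete architecture
    choice while remaining differentiable (illustrative interpretation)."""

    def __init__(self, n_channels: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(n_channels))  # architecture params
        self.scale = 1.0                                     # annealed upward

    def forward(self, x):
        gates = torch.sigmoid(self.scale * self.alpha)       # (n_channels,)
        return x * gates.view(1, -1, 1, 1)                   # gate feature maps


conv = nn.Conv2d(3, 16, 3, padding=1)
gate = ScaledSigmoidGate(16)
x = torch.randn(2, 3, 8, 8)
y = gate(conv(x))

# During training one might anneal the scale (e.g. gate.scale *= 1.05 per epoch),
# then keep only channels whose gate exceeds 0.5 when deriving the final network.
```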