Position-wise optimizer: A nature-inspired optimization algorithm
- URL: http://arxiv.org/abs/2204.05312v1
- Date: Mon, 11 Apr 2022 15:30:52 GMT
- Title: Position-wise optimizer: A nature-inspired optimization algorithm
- Authors: Amir Valizadeh
- Abstract summary: A novel nature-inspired optimization algorithm is introduced that imitates biological neural plasticity.
The model is tested on three datasets and the results are compared with gradient descent optimization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The human nervous system utilizes synaptic plasticity to solve optimization
problems. Previous studies have tried to add the plasticity factor to the
training process of artificial neural networks, but most of those models
require complex external control over the network or complex novel rules. In
this manuscript, a novel nature-inspired optimization algorithm is introduced
that imitates biological neural plasticity. Furthermore, the model is tested on
three datasets and the results are compared with gradient descent optimization.
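The abstract does not spell out the position-wise update rule, so the following is only a hypothetical illustration of the general idea it points at: a plasticity-style weight update contrasted with plain gradient descent on a toy problem. The Hebbian-like rule, the data, and all names below are placeholder assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical illustration only: contrasts plain gradient descent with a
# generic Hebbian-style plasticity update on a toy linear classifier.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = np.sign(X @ true_w)                 # synthetic labels in {-1, +1}

w_gd = np.zeros(5)                      # weights trained by gradient descent
w_plastic = np.zeros(5)                 # weights trained by a plasticity rule
lr = 0.01

for x, t in zip(X, y):
    # Gradient descent on squared error: w <- w - lr * (w.x - t) * x
    w_gd -= lr * (w_gd @ x - t) * x
    # Hebbian-style plasticity: strengthen weights in proportion to the
    # product of presynaptic activity (input) and a target "post" signal
    w_plastic += lr * t * x

print("gradient descent accuracy :", np.mean(np.sign(X @ w_gd) == y))
print("plasticity rule accuracy  :", np.mean(np.sign(X @ w_plastic) == y))
```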
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Gradual Optimization Learning for Conformational Energy Minimization [69.36925478047682]
The Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks significantly reduces the amount of additional data required.
Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules.
arXiv Detail & Related papers (2023-11-05T11:48:08Z)
- On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z)
- Optimizing Neural Networks through Activation Function Discovery and Automatic Weight Initialization [0.5076419064097734]
This dissertation introduces techniques for discovering more powerful activation functions.
It provides new perspectives on neural network optimization.
The dissertation thus makes concrete progress towards fully automatic machine learning.
arXiv Detail & Related papers (2023-04-06T21:01:00Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Physics Informed Piecewise Linear Neural Networks for Process Optimization [0.0]
The authors propose upgrading piecewise-linear neural network models with physics-informed knowledge for optimization problems in which neural network models are embedded.
In all cases, the optima found with the physics-informed trained neural networks are closer to global optimality.
arXiv Detail & Related papers (2023-02-02T10:14:54Z)
- Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under norm constraint.
Generalized from the sample-wise analysis to the real batch setting, the resulting Neural Initialization Optimization (NIO) algorithm automatically searches for a better initialization with negligible cost.
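The entry only names GradCosine; assuming it is the cosine similarity between sample-wise loss gradients at initialization (consistent with the sample-wise analysis mentioned), a minimal PyTorch sketch of such a quantity could look like the following. The function name and setup are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def grad_cosine(model, xs, ys):
    """Average pairwise cosine similarity of sample-wise loss gradients
    at the current (e.g. initial) parameters. Illustrative sketch only."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = []
    for x, y in zip(xs, ys):
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        g = torch.autograd.grad(loss, params)
        grads.append(torch.cat([gi.reshape(-1) for gi in g]))
    G = F.normalize(torch.stack(grads), dim=1)   # [n_samples, n_params]
    sim = G @ G.t()                              # pairwise cosine similarities
    n = sim.shape[0]
    return (sim.sum() - n) / (n * (n - 1))       # mean over off-diagonal pairs

# A better initialization would then be searched for (e.g. by rescaling
# layer weights) so as to maximize this quantity under a norm constraint.
```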
arXiv Detail & Related papers (2022-10-12T06:49:16Z)
- Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis on the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost, and generalisation to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z)
- Half-Inverse Gradients for Physical Deep Learning [25.013244956897832]
Integrating differentiable physics simulators into the training process can greatly improve the quality of results.
The integrated physics solvers, however, have a profound effect on the gradient flow, as manipulating scales in magnitude and direction is an inherent property of many physical processes.
In this work, we analyze the characteristics of both physical and neural network optimizations to derive a new method that does not suffer from this phenomenon.
arXiv Detail & Related papers (2022-03-18T19:11:04Z)
- Acceleration techniques for optimization over trained neural network ensembles [1.0323063834827415]
We study optimization problems where the objective function is modeled through feedforward neural networks with rectified linear unit activation.
We present a mixed-integer linear program based on existing popular big-$M$ formulations for optimizing over a single neural network.
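For context, the standard big-$M$ encoding of a single ReLU unit referenced above can be written as a small mixed-integer program. The sketch below uses PuLP with invented weights and bounds purely for illustration; it is not the paper's formulation for full network ensembles.

```python
import pulp

# Big-M MILP encoding of one ReLU unit y = max(0, w.x + b), assuming the
# pre-activation is bounded so that M = 100 is a valid big-M constant.
w, b, M = [1.0, -2.0], 0.5, 100.0

prob = pulp.LpProblem("relu_big_M", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", lowBound=-1, upBound=1) for i in range(2)]
y = pulp.LpVariable("y", lowBound=0)            # post-activation output
z = pulp.LpVariable("z", cat=pulp.LpBinary)     # 1 if the unit is active

prob += y                           # objective: maximize the unit's output
pre = pulp.lpSum(w[i] * x[i] for i in range(2)) + b
prob += y >= pre                    # y >= w.x + b
prob += y <= pre + M * (1 - z)      # tight when the unit is active (z = 1)
prob += y <= M * z                  # forces y = 0 when inactive (z = 0)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("max output:", pulp.value(y), "at x =", [pulp.value(v) for v in x])
```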
arXiv Detail & Related papers (2021-12-13T20:50:54Z)
- Iterative Surrogate Model Optimization (ISMO): An active learning algorithm for PDE constrained optimization with deep neural networks [14.380314061763508]
We present a novel active learning algorithm, termed iterative surrogate model optimization (ISMO).
This algorithm is based on deep neural networks and its key feature is the iterative selection of training data through a feedback loop between deep neural networks and any underlying standard optimization algorithm.
arXiv Detail & Related papers (2020-08-13T07:31:07Z)
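The feedback loop described in this last entry can be sketched as follows; the surrogate, the inner optimizer, and the toy objective are stand-ins chosen here for illustration, not the paper's exact components.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

# Illustrative ISMO-style loop: fit a neural surrogate to expensive objective
# evaluations, optimize the surrogate with a standard optimizer, then evaluate
# the proposed minimizers with the true objective and feed them back as data.

def expensive_objective(x):          # placeholder for a PDE-constrained cost
    return float(np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(16, 2))                  # initial design
y = np.array([expensive_objective(x) for x in X])

for _ in range(5):                                       # outer feedback loop
    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    surrogate.fit(X, y)
    starts = rng.uniform(0.0, 1.0, size=(8, 2))
    proposals = [minimize(lambda v: float(surrogate.predict(v[None, :])[0]),
                          x0, bounds=[(0.0, 1.0), (0.0, 1.0)]).x
                 for x0 in starts]
    X = np.vstack([X, proposals])                        # augment training set
    y = np.concatenate([y, [expensive_objective(p) for p in proposals]])

print("best objective value found:", y.min())
```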
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and accepts no responsibility for any consequences of its use.