Optimization of Discrete Parameters Using the Adaptive Gradient Method
and Directed Evolution
- URL: http://arxiv.org/abs/2401.06834v1
- Date: Fri, 12 Jan 2024 15:45:56 GMT
- Authors: Andrei Beinarovich, Sergey Stepanov, Alexander Zaslavsky
- Abstract summary: The search for an optimal solution is carried out by a population of individuals.
Unadapted individuals die, and optimal ones interbreed; the result is directed evolutionary dynamics.
- Score: 49.1574468325115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of optimizing discrete parameters in the presence
of constraints. We use a stochastic sigmoid with temperature and put forward
a new adaptive gradient method, CONGA. The search for an optimal solution is
carried out by a population of individuals. Each individual varies according to
gradients of the 'environment' and is characterized by two temperature
parameters with different annealing schedules. Unadapted individuals die, and
optimal ones interbreed; the result is directed evolutionary dynamics. The
proposed method is illustrated on the well-known combinatorial problem of
optimally packing a knapsack (0-1 KP).
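To make the mechanics concrete, here is a minimal Python sketch of the general idea, not the authors' exact CONGA algorithm: binary knapsack choices are relaxed through a stochastic sigmoid with temperature, each individual takes gradient steps on a penalized objective, and the population evolves by selection and crossover. The penalty weight, the score-function gradient estimator, the averaging crossover, and the single global temperature (the paper uses two per-individual temperatures with different schedules) are all assumptions.

```python
# Hedged sketch: stochastic-sigmoid relaxation of the 0-1 knapsack problem
# with gradient steps and a small evolving population. Not the paper's CONGA.
import numpy as np

rng = np.random.default_rng(0)

n = 20                                   # number of items
values = rng.uniform(1.0, 10.0, n)       # item values
weights = rng.uniform(1.0, 10.0, n)      # item weights
capacity = 0.4 * weights.sum()           # knapsack capacity
penalty = 10.0                           # assumed constraint-violation weight

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_and_grad(theta, T):
    """Sample binary choices via a stochastic sigmoid at temperature T;
    return the fitness and a score-function gradient estimate w.r.t. theta."""
    p = sigmoid(theta / T)
    x = (rng.random(n) < p).astype(float)        # stochastic rounding
    overweight = max(0.0, weights @ x - capacity)
    fitness = values @ x - penalty * overweight
    grad = fitness * (x - p) / T                 # REINFORCE-style estimate
    return fitness, grad

pop_size, steps, lr = 8, 300, 0.05
population = [rng.normal(0.0, 1.0, n) for _ in range(pop_size)]
T = 1.0
for _ in range(steps):
    scored = []
    for theta in population:
        fit, grad = sample_and_grad(theta, T)
        scored.append((fit, theta + lr * grad))  # gradient ascent step
    scored.sort(key=lambda pair: pair[0], reverse=True)
    survivors = [th for _, th in scored[: pop_size // 2]]   # the rest "die"
    children = [0.5 * (survivors[i] + survivors[(i + 1) % len(survivors)])
                for i in range(pop_size - len(survivors))]  # interbreeding
    population = survivors + children
    T = max(0.05, T * 0.99)                      # assumed annealing schedule

best = max(population, key=lambda th: values @ (sigmoid(th / T) > 0.5))
print("selected items:", np.nonzero(sigmoid(best / T) > 0.5)[0])
```

Annealing the temperature sharpens the sigmoid, so the relaxed choices gradually commit to binary values while early, high-temperature steps keep the search stochastic.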
Related papers
- Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z) - Stochastic Hessian Fittings with Lie Groups [6.626539885456148]
Hessian fitting, viewed as an optimization problem, is strongly convex under mild conditions within a specific yet sufficiently general Lie group.
This discovery turns Hessian fitting into a well behaved optimization problem, and facilitates the designs of highly efficient and elegant Lie group sparse preconditioner fitting methods for large scale optimizations.
arXiv Detail & Related papers (2024-02-19T06:00:35Z) - Distributed Evolution Strategies for Black-box Stochastic Optimization [42.90600124972943]
This work concerns evolutionary approaches to distributed black-box stochastic optimization.
Each worker can individually solve an approximation of the problem with stochastic algorithms.
We propose two alternative simulation schemes which significantly improve the robustness of the approach.
arXiv Detail & Related papers (2022-04-09T11:18:41Z) - Last-Iterate Convergence of Saddle-Point Optimizers via High-Resolution
Differential Equations [83.3201889218775]
Several widely-used first-order saddle-point optimization methods yield an identical continuous-time ordinary differential equation (ODE) when derived naively.
However, the convergence properties of these methods are qualitatively different, even on simple bilinear games.
We adopt a framework studied in fluid dynamics to design differential equation models for several saddle-point optimization methods.
arXiv Detail & Related papers (2021-12-27T18:31:34Z) - Learning adaptive differential evolution algorithm from optimization
experiences by policy gradient [24.2122434523704]
This paper proposes a novel adaptive parameter control approach based on learning from the optimization experiences over a set of problems.
A reinforcement learning algorithm, named policy gradient, is applied to learn an agent that can adaptively provide the control parameters of the proposed differential evolution.
The proposed algorithm performs competitively against nine well-known evolutionary algorithms on the CEC'13 and CEC'17 test suites.
arXiv Detail & Related papers (2021-02-06T12:01:20Z) - Stochastic Learning Approach to Binary Optimization for Optimal Design
of Experiments [0.0]
We present a novel approach to binary optimization for optimal experimental design (OED) in Bayesian inverse problems governed by mathematical models such as partial differential equations.
The OED utility function, namely the regularized optimality criterion, is cast into an objective function in the form of an expectation over a Bernoulli distribution.
The objective is then solved using a probabilistic optimization routine to find an optimal observational policy; a toy sketch of this Bernoulli-expectation reformulation appears after this list.
arXiv Detail & Related papers (2021-01-15T03:54:12Z) - Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z) - Particle Swarm Optimization with Velocity Restriction and Evolutionary
Parameters Selection for Scheduling Problem [0.0]
The article presents a study of the Particle Swarm Optimization method for a scheduling problem.
To improve the method's performance, a restriction on particle velocity and an evolutionary meta-optimization were implemented.
arXiv Detail & Related papers (2020-06-19T02:28:57Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove an $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope (the smoothed objective $f_\lambda(x) = \min_y \{ f(y) + \tfrac{1}{2\lambda}\|y - x\|^2 \}$).
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization [80.03647903934723]
We prove that adaptive gradient methods converge in expectation to a first-order stationary point for smooth nonconvex optimization.
Our analyses shed light on better understanding the mechanism of adaptive gradient methods in nonconvex optimization.
arXiv Detail & Related papers (2018-08-16T20:25:28Z)
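As mentioned in the optimal-design entry above, that paper's core trick is to replace a binary objective f(x), x in {0,1}^n, by its expectation under a Bernoulli distribution and to ascend a gradient estimate of that expectation. The toy sketch below illustrates the reformulation with a score-function (REINFORCE-style) estimator; the stand-in objective, budget penalty, and all constants are assumptions, not the paper's regularized OED criterion.

```python
# Hedged sketch of a Bernoulli-expectation reformulation of binary optimization.
import numpy as np

rng = np.random.default_rng(1)
n = 10
gains = rng.uniform(0.0, 1.0, n)    # toy per-design utility (assumption)
budget = 3                          # soft limit on the number of active designs

def f(x):
    # Toy stand-in objective: reward utility, penalize exceeding the budget.
    return gains @ x - 0.5 * max(0.0, x.sum() - budget) ** 2

theta = np.zeros(n)                 # logits of the Bernoulli probabilities
lr, n_samples = 0.1, 32
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-theta))
    grad = np.zeros(n)
    for _ in range(n_samples):
        x = (rng.random(n) < p).astype(float)
        grad += f(x) * (x - p)      # score-function gradient estimator
    theta += lr * grad / n_samples  # ascend the expected objective

print("activation probabilities:", np.round(1.0 / (1.0 + np.exp(-theta)), 2))
```

The gradient of the expectation exists even though f itself is defined only on binary points, which is what lets continuous stochastic optimizers drive a discrete design choice.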