From particle swarm optimization to consensus based optimization:
stochastic modeling and mean-field limit
- URL: http://arxiv.org/abs/2012.05613v1
- Date: Thu, 10 Dec 2020 11:58:19 GMT
- Title: From particle swarm optimization to consensus based optimization:
stochastic modeling and mean-field limit
- Authors: Sara Grassi, Lorenzo Pareschi
- Abstract summary: We consider a continuous description, based on stochastic differential equations, of the PSO process for solving global optimization problems.
We derive in the large particle limit the corresponding mean-field approximation based on Vlasov-Fokker-Planck-type equations.
We compute the related macroscopic hydrodynamic equations that clarify the link with the recently introduced consensus based optimization methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we consider a continuous description based on stochastic
differential equations of the popular particle swarm optimization (PSO) process
for solving global optimization problems and derive in the large particle limit
the corresponding mean-field approximation based on Vlasov-Fokker-Planck-type
equations. The disadvantage of memory effects induced by the need to store the
local best position is overcome by the introduction of an additional
differential equation describing the evolution of the local best. A
regularization process for the global best permits the formal derivation of the
respective mean-field description. Subsequently, in the small inertia limit, we
compute the related macroscopic hydrodynamic equations that clarify the link
with the recently introduced consensus based optimization (CBO) methods.
Several numerical examples illustrate the mean field process, the small inertia
limit and the potential of this general class of global optimization methods.
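The abstract connects PSO, in a suitable limit, to consensus-based optimization (CBO). A minimal CBO iteration can be sketched as follows; this is an illustrative sketch, not the paper's exact scheme, and the parameters `lam`, `sigma`, and `alpha` are hypothetical choices. Particles drift toward a Gibbs-weighted consensus point and diffuse proportionally to their distance from it:

```python
import numpy as np

def cbo_minimize(f, dim=2, n_particles=200, n_steps=1000,
                 dt=0.01, lam=1.0, sigma=0.5, alpha=50.0, seed=0):
    """Minimal consensus-based optimization sketch (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))
    for _ in range(n_steps):
        # Gibbs-type weights concentrate mass on the current best particles.
        vals = f(X)
        w = np.exp(-alpha * (vals - vals.min()))
        v = (w[:, None] * X).sum(axis=0) / w.sum()  # consensus point
        # Drift toward consensus; diffusion scaled by distance to consensus.
        dist = np.linalg.norm(X - v, axis=1, keepdims=True)
        noise = rng.standard_normal(X.shape)
        X = X - lam * (X - v) * dt + sigma * dist * np.sqrt(dt) * noise
    return v

# Example: shifted sphere function with global minimum at (1, 1).
sphere = lambda X: ((X - 1.0) ** 2).sum(axis=1)
x_star = cbo_minimize(sphere)
```

Because the diffusion term scales with each particle's distance to the consensus point, the noise shrinks as the swarm concentrates, so the consensus point stabilizes near the minimizer.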
Related papers
- Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new utilities-dual convex approach is proposed with novel integration of three ingredients: regularized policy, dual regularizer, and Nesterov's gradient descent dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - Optimization on manifolds: A symplectic approach [127.54402681305629]
We propose a dissipative extension of Dirac's theory of constrained Hamiltonian systems as a general framework for solving optimization problems.
Our class of (accelerated) algorithms are not only simple and efficient but also applicable to a broad range of contexts.
arXiv Detail & Related papers (2021-07-23T13:43:34Z) - A Contraction Theory Approach to Optimization Algorithms from
Acceleration Flows [1.90365714903665]
We use contraction theory to provide a principled methodology to design and discretize appropriate ODEs.
We propose a novel system of ODEs, namely the Accelerated-Contracting-Nesterov flow.
Remarkably, a simple explicit Euler discretization of this flow corresponds to the Nesterov acceleration method.
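For context, the discrete Nesterov acceleration method that the summary says is recovered by an explicit Euler discretization can be sketched in its standard textbook form (this is the classical method, not the paper's specific Accelerated-Contracting-Nesterov flow):

```python
import numpy as np

def nesterov(grad, x0, lr=0.1, n_steps=300):
    """Standard Nesterov accelerated gradient descent (textbook sketch)."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    t = 1.0
    for _ in range(n_steps):
        x_next = y - lr * grad(y)                     # gradient step at lookahead point
        t_next = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        y = x_next + (t - 1) / t_next * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Minimize f(x) = 0.5 * ||x - 2||^2, whose gradient is x - 2.
x_min = nesterov(lambda x: x - 2.0, x0=[0.0, 0.0])
```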
arXiv Detail & Related papers (2021-05-18T21:11:37Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a careful design of coordinate importance sampling, the proposed ZO optimization method is efficient in terms of both iteration complexity and function query cost.
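A generic two-point, coordinate-wise ZO gradient estimator illustrates the kind of query-based scheme the summary describes; uniform coordinate sampling stands in here for the paper's importance sampling, and all names and constants are illustrative:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, n_coords=None, rng=None):
    """Coordinate-wise zeroth-order gradient estimate from function queries only."""
    rng = rng or np.random.default_rng(0)
    d = x.size
    n_coords = n_coords or d
    # Uniform coordinate sampling; an importance sampler would bias these
    # probabilities toward coordinates with larger expected gradient magnitude.
    idx = rng.choice(d, size=n_coords, replace=False)
    g = np.zeros(d)
    for i in idx:
        e = np.zeros(d)
        e[i] = mu
        # Two-point central finite difference along coordinate i,
        # rescaled so the estimator is unbiased under uniform sampling.
        g[i] = (f(x + e) - f(x - e)) / (2 * mu) * (d / n_coords)
    return g

# Check against the true gradient of f(x) = ||x||^2, i.e. 2x.
x = np.array([1.0, -2.0, 3.0])
g = zo_gradient(lambda z: (z ** 2).sum(), x)
```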
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Direct Optimal Control Approach to Laser-Driven Quantum Particle
Dynamics [77.34726150561087]
We propose direct optimal control as a robust and flexible alternative to indirect control theory.
The method is illustrated for the case of laser-driven wavepacket dynamics in a bistable potential.
arXiv Detail & Related papers (2020-10-08T07:59:29Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper sub-solvers for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z) - Consensus-Based Optimization on Hypersurfaces: Well-Posedness and
Mean-Field Limit [7.998311072988401]
We introduce a new differential model for global optimization of non-convex functions on compact hypersurfaces.
In particular, as soon as consensus is reached, the stochastic component vanishes.
arXiv Detail & Related papers (2020-01-31T18:33:08Z) - Consensus-Based Optimization on the Sphere: Convergence to Global
Minimizers and Machine Learning [7.998311072988401]
We investigate the implementation of a new Kuramoto-Vicsek-type model for global optimization of non-convex functions on the sphere.
We present several numerical experiments, which show that the algorithm proposed in the present paper scales well with the dimension and is extremely versatile.
arXiv Detail & Related papers (2020-01-31T18:23:26Z) - A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.