A Swarm Variant for the Schrödinger Solver
- URL: http://arxiv.org/abs/2104.04795v1
- Date: Sat, 10 Apr 2021 15:51:36 GMT
- Title: A Swarm Variant for the Schrödinger Solver
- Authors: Urvil Nileshbhai Jivani, Omatharv Bharat Vaidya, Anwesh Bhattacharya,
Snehanshu Saha
- Abstract summary: This paper introduces the application of Exponentially Averaged Momentum Particle Swarm Optimization (EM-PSO) as a derivative-free optimizer for Neural Networks.
It adopts PSO's major advantages such as search space exploration and higher robustness to local minima compared to gradient-descent optimizers such as Adam.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces the application of the Exponentially Averaged Momentum
Particle Swarm Optimization (EM-PSO) as a derivative-free optimizer for Neural
Networks. It adopts PSO's major advantages such as search space exploration and
higher robustness to local minima compared to gradient-descent optimizers such
as Adam. Neural network based solvers endowed with gradient optimization are
now being used to approximate solutions to Differential Equations. Here, we
demonstrate the novelty of EM-PSO in approximating gradients and leveraging the
property in solving the Schrödinger equation, for the Particle-in-a-Box
problem. We also provide the optimal set of hyper-parameters supported by
mathematical proofs, suited for our algorithm.
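As a concrete illustration of the optimizer described above, here is a minimal Python sketch of PSO with an exponentially averaged momentum term. The placement of the momentum average in the velocity update, the coefficients w, beta, c1, c2, and the Rastrigin test objective are illustrative assumptions based on the abstract, not the paper's exact algorithm or its proven hyper-parameters.

```python
import numpy as np

def em_pso(f, dim, n_particles=30, iters=200,
           w=0.7, beta=0.9, c1=1.5, c2=1.5, seed=0):
    """PSO with an exponentially averaged momentum term added to the
    velocity update.  A sketch of EM-PSO as described in the abstract;
    the paper's exact update and proven hyper-parameters may differ."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))  # particle positions
    v = np.zeros_like(x)                        # velocities
    m = np.zeros_like(x)                        # exponentially averaged momentum
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        m = beta * m + (1 - beta) * v           # the "EM" part: EMA of velocity
        v = w * v + m + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Toy usage on the multimodal Rastrigin function.
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
print(em_pso(rastrigin, dim=2))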
Related papers
- A Simulation-Free Deep Learning Approach to Stochastic Optimal Control [12.699529713351287]
We propose a simulation-free algorithm for the solution of generic problems in stochastic optimal control (SOC).
Unlike existing methods, our approach does not require the solution of an adjoint problem.
arXiv Detail & Related papers (2024-10-07T16:16:53Z)
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
arXiv Detail & Related papers (2024-06-05T17:59:22Z)
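For context on the entry above, the classical walk-on-spheres estimator that NWoS accelerates with a neural network can be written in a few lines. The unit-ball domain and the boundary data g below are illustrative assumptions, not the paper's benchmark setup.

```python
import numpy as np

def walk_on_spheres(x0, g, n_walks=5000, eps=1e-3, seed=0):
    """Classical walk-on-spheres estimate of u(x0) for Laplace's equation
    on the unit ball with boundary data u = g.  Each walk repeatedly jumps
    to a uniform point on the largest sphere inscribed at the current
    position until it lands within eps of the boundary."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        while True:
            r = 1.0 - np.linalg.norm(x)            # distance to the boundary
            if r < eps:
                total += g(x / np.linalg.norm(x))  # read off boundary data
                break
            u = rng.normal(size=x.size)
            x = x + r * u / np.linalg.norm(u)      # uniform point on the sphere
    return total / n_walks

# u(x, y) = x * y is harmonic, so WoS recovers it in the interior.
print(walk_on_spheres([0.3, 0.2], g=lambda b: b[0] * b[1]))  # about 0.06
```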
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm as region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z)
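A hedged sketch of the region-optimization idea: instead of evaluating the PINN residual at fixed collocation points, sample a random point inside a small region around each one at every step. The ODE u' = u, the region half-width, and the tiny network below are illustrative assumptions; RoPINN's actual region sampling and theory are more refined.

```python
import torch

# Tiny PINN for the ODE u'(t) = u(t) with u(0) = 1 on [0, 1].
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
t_col = torch.linspace(0, 1, 50).reshape(-1, 1)  # fixed collocation points
radius = 0.02                                    # assumed region half-width

for step in range(2000):
    # Region optimization: draw a random point inside each point's region
    # rather than always evaluating the residual at the point itself.
    t = (t_col + radius * (2 * torch.rand_like(t_col) - 1)).requires_grad_(True)
    u = net(t)
    du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    loss = ((du - u) ** 2).mean() + (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.ones(1, 1)).item())  # should approach e = 2.718...
```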
- Dynamic Anisotropic Smoothing for Noisy Derivative-Free Optimization [0.0]
We propose a novel algorithm that extends the methods of ball smoothing and Gaussian smoothing for noisy derivative-free optimization.
The algorithm dynamically adapts the shape of the smoothing kernel to approximate the Hessian of the objective function around a local optimum.
arXiv Detail & Related papers (2024-05-02T21:04:20Z)
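The baseline this method extends is the classical Gaussian-smoothing gradient estimator for derivative-free optimization, sketched below in its isotropic form. The paper's contribution is to dynamically adapt the kernel's shape (an anisotropic covariance) toward local curvature, which this sketch deliberately does not attempt.

```python
import numpy as np

def smoothed_grad(f, x, sigma, rng, n_samples=64):
    """Nesterov-Spokoiny Gaussian-smoothing gradient estimator:
    E[(f(x + sigma*u) - f(x)) * u] / sigma with u ~ N(0, I).
    Anisotropic variants replace the identity covariance with an
    adapted kernel shaped by the local Hessian."""
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.normal(size=x.shape)
        g += (f(x + sigma * u) - fx) * u
    return g / (n_samples * sigma)

# Derivative-free descent on a noisy quadratic.
rng = np.random.default_rng(0)
f = lambda z: np.sum(z**2) + 0.01 * rng.normal()  # noisy black-box objective
x = np.array([2.0, -3.0])
for _ in range(200):
    x = x - 0.05 * smoothed_grad(f, x, sigma=0.1, rng=rng)
print(x)  # near the optimum [0, 0]
```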
- SDEs for Minimax Optimization [11.290653315174382]
In this paper, we pioneer the use of stochastic differential equations (SDEs) to analyze and compare minimax optimization methods.
Our SDE models for Gradient Descent-Ascent, Extragradient, and Hamiltonian Gradient Descent are provable approximations of their algorithmic counterparts.
This perspective also allows for a unified and simplified analysis strategy based on the principles of Ito calculus.
arXiv Detail & Related papers (2024-02-19T20:18:29Z)
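To give a flavor of the SDE perspective, the sketch below simulates a stochastic Gradient Descent-Ascent dynamic on the bilinear game f(x, y) = xy with Euler-Maruyama and exhibits its characteristic outward spiral. The game, step size, and noise scale are illustrative assumptions, not the paper's exact SDE models.

```python
import numpy as np

def simulate_gda_sde(x0=1.0, y0=1.0, eta=0.01, noise=0.05, steps=2000, seed=0):
    """Euler-Maruyama simulation of stochastic Gradient Descent-Ascent on
    the bilinear minimax game f(x, y) = x * y (x descends, y ascends)."""
    rng = np.random.default_rng(seed)
    x, y = x0, y0
    traj = []
    for _ in range(steps):
        # Drift matches GDA; the diffusion term models gradient noise.
        x_new = x - eta * y + noise * np.sqrt(eta) * rng.normal()
        y_new = y + eta * x + noise * np.sqrt(eta) * rng.normal()
        x, y = x_new, y_new
        traj.append((x, y))
    return np.array(traj)

traj = simulate_gda_sde()
# GDA spirals outward on bilinear games: the radius grows over time.
print(np.hypot(*traj[0]), np.hypot(*traj[-1]))
```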
- Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic Gradient Langevin Dynamics is one of the most fundamental algorithms for solving non-convex optimization problems.
In this paper, we study two variants of this kind, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
arXiv Detail & Related papers (2022-03-30T11:39:00Z)
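The basic SGLD iteration underlying both variants adds step-size-scaled Gaussian noise to a gradient step; the variance-reduced versions swap in SVRG-style or recursive gradient estimators. The Gaussian target and step size below are illustrative assumptions.

```python
import numpy as np

def sgld(grad_f, x0, eta=1e-2, beta=1.0, steps=5000, seed=0):
    """Stochastic Gradient Langevin Dynamics:
        x <- x - eta * grad_f(x) + sqrt(2 * eta / beta) * N(0, I).
    The variance-reduced and recursive variants studied in the paper
    replace grad_f with lower-variance stochastic gradient estimators."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(steps):
        x = x - eta * grad_f(x) + np.sqrt(2 * eta / beta) * rng.normal(size=x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Sampling from N(0, 1): potential f(x) = x^2 / 2, so grad_f(x) = x.
samples = sgld(lambda x: x, x0=[3.0])
print(samples[1000:].mean(), samples[1000:].var())  # roughly 0 and 1
```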
- Machine Learning For Elliptic PDEs: Fast Rate Generalization Bound, Neural Scaling Law and Minimax Optimality [11.508011337440646]
We study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples.
To simplify the problem, we focus on a prototype elliptic PDE: the Schrödinger equation on a hypercube with zero Dirichlet boundary condition.
We establish upper and lower bounds for both methods, which improve upon concurrently developed upper bounds for this problem.
arXiv Detail & Related papers (2021-10-13T17:26:31Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work concerns zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of iteration complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
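A bare-bones illustration of coordinate importance sampling in zeroth-order optimization: coordinates whose recent finite-difference estimates are large get queried more often. The score-tracking heuristic, uniform-mixing floor, and step size are illustrative assumptions, not the paper's ZO scheme.

```python
import numpy as np

def zo_step(f, x, probs, rng, mu=1e-4, lr=0.005):
    """One zeroth-order step: sample a coordinate by importance sampling,
    estimate its partial derivative with two function queries, and take an
    importance-weighted (unbiased) descent step along that coordinate."""
    i = rng.choice(len(x), p=probs)
    e = np.zeros_like(x)
    e[i] = 1.0
    gi = (f(x + mu * e) - f(x - mu * e)) / (2 * mu)  # finite-difference estimate
    return x - lr * (gi / probs[i]) * e, i, abs(gi)

f = lambda z: np.sum(np.arange(1, len(z) + 1) * z**2)  # ill-scaled quadratic
rng = np.random.default_rng(0)
x, scores = np.ones(5), np.ones(5)
probs = np.full(5, 0.2)
for _ in range(4000):
    x, i, g = zo_step(f, x, probs, rng)
    scores[i] = 0.9 * scores[i] + 0.1 * g          # track coordinate importance
    probs = 0.8 * scores / scores.sum() + 0.2 / 5  # keep some uniform exploration
print(x)  # approaches the origin
```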
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate Scale Quantum devices.
We propose a strategy for the ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
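PECT's central move, optimizing small subsets of ansatz parameters sequentially rather than all at once, is a form of block-coordinate descent. The sketch below applies it to a classical stand-in objective, since simulating a quantum circuit is out of scope here; the block size, sweep schedule, COBYLA sub-solver, and objective are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def block_coordinate_train(f, theta, block_size=2, sweeps=5):
    """PECT-style sequential training: run a sequence of small variational
    sub-problems, each optimizing only `block_size` parameters while the
    rest stay frozen, instead of optimizing all parameters at once."""
    n = len(theta)
    for _ in range(sweeps):
        for start in range(0, n, block_size):
            idx = np.arange(start, min(start + block_size, n))
            def sub(block):              # objective restricted to one block
                full = theta.copy()
                full[idx] = block
                return f(full)
            theta[idx] = minimize(sub, theta[idx], method="COBYLA").x
    return theta

# Classical stand-in for a variational circuit's energy landscape.
energy = lambda t: np.sum(np.sin(t) ** 2) + 0.1 * np.sum(t**2)
theta0 = np.random.default_rng(0).uniform(-2.0, 2.0, 6)
print(energy(block_coordinate_train(energy, theta0)))
```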
- AdaSwarm: Augmenting Gradient-Based Optimizers in Deep Learning with Swarm Intelligence [19.573380763700715]
This paper introduces AdaSwarm, a gradient-free optimizer which has similar or even better performance than the Adam optimizer adopted in neural networks.
We show that the gradient of any function, differentiable or not, can be approximated by using the parameters of EMPSO.
We also show that AdaSwarm is able to handle a variety of loss functions during backpropagation, including the maximum absolute error (MAE).
arXiv Detail & Related papers (2020-05-19T19:17:38Z)
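One possible reading of the EMPSO-based gradient approximation, hedged heavily: treat the swarm's attraction toward the global best position as a surrogate for the negative gradient, which is what lets non-differentiable losses be backpropagated. The coefficient c2 and the scaling below are illustrative assumptions; the paper derives the exact approximation from the EMPSO update equations.

```python
import numpy as np

def empso_gradient(w, gbest, c2=1.5, rng=None):
    """Surrogate gradient in the spirit of AdaSwarm: the EMPSO attraction
    toward the global best gbest stands in for -grad L(w), so the loss
    need not be differentiable.  The paper fixes the coefficients exactly;
    c2 and the random factor here are illustrative assumptions."""
    rng = rng or np.random.default_rng(0)
    r2 = rng.random(w.shape)
    return -c2 * r2 * (gbest - w)   # approximate gradient of the loss at w

# Example: surrogate gradient of a non-differentiable MAE-style loss at w,
# with gbest assumed to come from a prior EMPSO run on that loss.
w = np.array([0.5, -1.0])
gbest = np.array([0.1, 0.2])
print(empso_gradient(w, gbest))
```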
- Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered one of the most encouraging approaches for taking advantage of near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape this landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
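The Cross-Entropy method referenced above is simple to state: sample candidate parameters from a Gaussian proposal, keep the top-scoring elite fraction, and refit the Gaussian to the elites. The stand-in energy function below replaces an actual QAOA energy evaluation, which is an illustrative assumption.

```python
import numpy as np

def cross_entropy_opt(f, dim, pop=64, elite_frac=0.2, iters=50, seed=0):
    """Cross-Entropy method: sample parameters from a Gaussian proposal,
    keep the elite fraction with the lowest objective values, and refit
    the proposal to the elites.  Here f stands in for a QAOA energy."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = int(pop * elite_frac)
    for _ in range(iters):
        thetas = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([f(t) for t in thetas])
        elites = thetas[np.argsort(scores)[:n_elite]]   # lowest energies win
        mu = elites.mean(axis=0)
        sigma = elites.std(axis=0) + 1e-6               # avoid total collapse
    return mu

# Illustrative stand-in for a two-angle QAOA energy landscape.
energy = lambda t: np.cos(t[0]) * np.sin(t[1]) + 0.1 * np.sum(t**2)
print(cross_entropy_opt(energy, dim=2))
```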