Functional Neural Wavefunction Optimization
- URL: http://arxiv.org/abs/2507.10835v1
- Date: Mon, 14 Jul 2025 22:07:38 GMT
- Title: Functional Neural Wavefunction Optimization
- Authors: Victor Armegioiu, Juan Carrasquilla, Siddhartha Mishra, Johannes Müller, Jannes Nys, Marius Zeinhofer, Hang Zhang
- Abstract summary: We propose a framework for the design and analysis of optimization algorithms in variational quantum Monte Carlo. The framework translates infinite-dimensional optimization dynamics into tractable parameter-space algorithms. We validate our framework with numerical experiments demonstrating its practical relevance.
- Score: 11.55213641895401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a framework for the design and analysis of optimization algorithms in variational quantum Monte Carlo, drawing on geometric insights into the corresponding function space. The framework translates infinite-dimensional optimization dynamics into tractable parameter-space algorithms through a Galerkin projection onto the tangent space of the variational ansatz. This perspective unifies existing methods such as stochastic reconfiguration and Rayleigh-Gauss-Newton, provides connections to classic function-space algorithms, and motivates the derivation of novel algorithms with geometrically principled hyperparameter choices. We validate our framework with numerical experiments demonstrating its practical relevance through the accurate estimation of ground-state energies for several prototypical models in condensed matter physics modeled with neural network wavefunctions.
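In its simplest instance, the Galerkin projection onto the tangent space of the ansatz recovers the stochastic reconfiguration update, in which a Monte Carlo estimate of the quantum geometric tensor preconditions the energy gradient. The following is a minimal real-valued sketch of one such parameter-space step; the function name, damping term, and learning rate are illustrative choices, not taken from the paper:

```python
import numpy as np

def sr_update(theta, log_psi_grads, local_energies, lr=0.05, damping=1e-3):
    """One stochastic-reconfiguration step (real-valued sketch).

    log_psi_grads : (n_samples, n_params) array of O_k = d log psi / d theta_k
                    evaluated on Monte Carlo samples.
    local_energies: (n_samples,) array of local energies E_loc.
    """
    O = log_psi_grads - log_psi_grads.mean(axis=0)   # centred log-derivatives
    E = local_energies - local_energies.mean()       # centred local energies
    n = O.shape[0]
    S = O.T @ O / n                                  # sampled quantum geometric tensor
    g = O.T @ E / n                                  # energy gradient in parameter space
    # Damped metric solve: the parameter-space image of the projected flow
    delta = np.linalg.solve(S + damping * np.eye(S.shape[0]), g)
    return theta - lr * delta
```

The damping term is a standard practical regularisation for when the sampled metric is ill-conditioned; the paper's geometric perspective motivates more principled choices of such hyperparameters.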
Related papers
- Self-Supervised Coarsening of Unstructured Grid with Automatic Differentiation [55.88862563823878]
In this work, we present an original algorithm for coarsening an unstructured grid based on the concepts of differentiable physics. We demonstrate the performance of the algorithm on two PDEs: a linear equation governing slightly compressible fluid flow in porous media, and the wave equation. Our results show that in the considered scenarios, we reduced the number of grid points by up to a factor of 10 while preserving the dynamics of the modeled variable at the points of interest.
arXiv Detail & Related papers (2025-07-24T11:02:13Z) - Learning Optical Flow Field via Neural Ordinary Differential Equation [44.16275288019991]
Recent works on optical flow estimation use neural networks to predict the flow field that maps positions in one image to positions in the other. We introduce a novel approach that predicts the derivative of the flow using a continuous model, namely neural ordinary differential equations (ODEs).
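The continuous-model idea can be illustrated with a toy sketch: instead of predicting displacements directly, a small network outputs the time derivative of pixel positions, and the flow is recovered by numerical integration. The weights below are random placeholders standing in for a trained model, and forward Euler is used only for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder tiny MLP f(x) -> dx/dt with random weights (not a trained model).
W1, b1 = rng.normal(size=(2, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)) * 0.1, np.zeros(2)

def velocity(x, t):
    """Learned derivative of the flow at positions x (autonomous here for simplicity)."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def integrate_flow(x0, n_steps=20, dt=1.0 / 20):
    """Forward-Euler integration of dx/dt = f(x, t) over t in [0, 1]."""
    x, t = x0.copy(), 0.0
    for _ in range(n_steps):
        x = x + dt * velocity(x, t)
        t += dt
    return x

positions = rng.uniform(size=(5, 2))          # normalised pixel coordinates
flow = integrate_flow(positions) - positions  # displacement = integrated velocity
```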
arXiv Detail & Related papers (2025-06-03T18:30:14Z) - KO: Kinetics-inspired Neural Optimizer with PDE Simulation Approaches [45.173398806932376]
This paper introduces KO, a novel neural optimizer inspired by kinetic theory and partial differential equation (PDE) simulations. We reimagine the dynamics of network parameters as the evolution of a particle system governed by kinetic principles. This physics-driven approach inherently promotes parameter diversity during optimization, mitigating the phenomenon of parameter condensation.
arXiv Detail & Related papers (2025-05-20T18:00:01Z) - Geometry aware inference of steady state PDEs using Equivariant Neural Fields representations [0.0]
We introduce enf2enf, an encoder--decoder methodology for predicting steady-state partial differential equations. Our method supports real-time inference and zero-shot super-resolution, enabling efficient training on low-resolution meshes.
arXiv Detail & Related papers (2025-04-24T08:30:32Z) - Neural Network Approach to Stochastic Dynamics for Smooth Multimodal Density Estimation [0.0]
We extend the Metropolis-Adjusted Langevin Diffusion algorithm by modelling the preconditioning matrix as a random matrix. The proposed method provides a fully adaptive mechanism that tunes the proposal densities to exploit and adapt to the geometry of the local structures of statistical models.
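A preconditioned Metropolis-adjusted Langevin step of the general kind discussed above can be sketched as follows; the paper's random-matrix model for the preconditioner is not reproduced here, and `precond` is simply any fixed positive-definite matrix:

```python
import numpy as np

def mala_step(x, log_prob, grad_log_prob, precond, step=0.1, rng=None):
    """One Metropolis-adjusted Langevin step with preconditioning matrix M.

    Proposal: x' = x + step * M @ grad log p(x) + sqrt(2 * step) * L @ z,
    where M = L L^T and z is standard Gaussian noise.
    """
    rng = rng or np.random.default_rng()
    L = np.linalg.cholesky(precond)
    noise = np.sqrt(2 * step) * L @ rng.normal(size=x.shape)
    x_new = x + step * precond @ grad_log_prob(x) + noise

    def log_q(a, b):
        # Log density (up to constants) of proposing a from b; covariance 2*step*M.
        diff = a - (b + step * precond @ grad_log_prob(b))
        return -diff @ np.linalg.solve(4 * step * precond, diff)

    # Metropolis-Hastings accept/reject preserves the target distribution exactly.
    log_alpha = log_prob(x_new) - log_prob(x) + log_q(x, x_new) - log_q(x_new, x)
    return x_new if np.log(rng.uniform()) < log_alpha else x
```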
arXiv Detail & Related papers (2025-03-22T16:17:12Z) - Application of Langevin Dynamics to Advance the Quantum Natural Gradient Optimization Algorithm [47.47843839099175]
A Quantum Natural Gradient (QNG) algorithm for the optimization of variational quantum circuits has been proposed recently. Momentum-QNG is more effective at escaping local minima and plateaus in the variational parameter space.
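A heavy-ball variant of quantum natural gradient in the spirit of Momentum-QNG might look like the following minimal sketch; the metric estimate, damping, and hyperparameter values are illustrative placeholders rather than the paper's actual scheme:

```python
import numpy as np

def momentum_qng_step(theta, grad, metric, velocity,
                      lr=0.05, momentum=0.9, damping=1e-3):
    """One heavy-ball natural-gradient update (hypothetical minimal form).

    The natural gradient F^{-1} g (F: metric of the variational circuit) is
    accumulated into a momentum term, which helps carry the iterate across
    plateaus and shallow local minima.
    """
    nat_grad = np.linalg.solve(metric + damping * np.eye(len(theta)), grad)
    velocity = momentum * velocity - lr * nat_grad
    return theta + velocity, velocity
```

On a simple quadratic objective the iterates spiral into the minimum rather than descending monotonically, which is the usual heavy-ball behaviour.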
arXiv Detail & Related papers (2024-09-03T15:21:16Z) - A hybrid numerical methodology coupling Reduced Order Modeling and Graph Neural Networks for non-parametric geometries: applications to structural dynamics problems [0.0]
This work introduces a new approach for accelerating the numerical analysis of time-domain partial differential equations (PDEs) governing complex physical systems.
The methodology is based on the combination of a classical reduced-order modeling (ROM) framework and recent Graph Neural Networks (GNNs).
arXiv Detail & Related papers (2024-06-03T08:51:25Z) - Hallmarks of Optimization Trajectories in Neural Networks: Directional Exploration and Redundancy [75.15685966213832]
We analyze the rich directional structure of optimization trajectories represented by their pointwise parameters.
We show that training only the scalar batch-normalization parameters from some point into training onward matches the performance of training the entire network.
arXiv Detail & Related papers (2024-03-12T07:32:47Z) - Momentum Particle Maximum Likelihood [2.4561590439700076]
We propose an analogous dynamical-systems-inspired approach to minimizing the free energy functional.
By discretizing the system, we obtain a practical algorithm for maximum likelihood estimation in latent variable models.
The algorithm outperforms existing particle methods in numerical experiments and compares favourably with other MLE algorithms.
arXiv Detail & Related papers (2023-12-12T14:53:18Z) - Neural Characteristic Activation Analysis and Geometric Parameterization for ReLU Networks [2.2713084727838115]
We introduce a novel approach for analyzing the training dynamics of ReLU networks by examining the characteristic activation boundaries of individual neurons.
Our proposed analysis reveals a critical instability in common neural network parameterizations and normalizations during optimization, which impedes fast convergence and hurts performance.
arXiv Detail & Related papers (2023-05-25T10:19:13Z) - Counting Phases and Faces Using Bayesian Thermodynamic Integration [77.34726150561087]
We introduce a new approach to reconstruction of the thermodynamic functions and phase boundaries in two-parametric statistical mechanics systems.
We use the proposed approach to accurately reconstruct the partition functions and phase diagrams of the Ising model and the exactly solvable non-equilibrium TASEP.
arXiv Detail & Related papers (2022-05-18T17:11:23Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded in terms of the complexity of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Fixed Depth Hamiltonian Simulation via Cartan Decomposition [59.20417091220753]
We present a constructive algorithm for generating quantum circuits with time-independent depth.
We highlight our algorithm for special classes of models, including Anderson localization in the one-dimensional transverse-field XY model.
In addition to providing exact circuits for a broad set of spin and fermionic models, our algorithm provides broad analytic and numerical insight into optimal Hamiltonian simulations.
arXiv Detail & Related papers (2021-04-01T19:06:00Z) - Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of function space, each spanned by a set of draws from the experimenter's Gaussian process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
arXiv Detail & Related papers (2020-09-08T06:54:11Z) - Non-linear reduced modeling of dynamical systems using kernel methods and low-rank approximation [5.935306543481018]
We propose a new efficient algorithm for data-driven reduced modeling of non-linear dynamics based on linear approximations in a kernel Hilbert space. The algorithm takes advantage of the closed-form solution of a low-rank constrained optimization problem while efficiently exploiting kernel-based computations.
arXiv Detail & Related papers (2017-10-30T13:06:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.