Hamiltonian Neural Networks approach to fuzzball geodesics
- URL: http://arxiv.org/abs/2502.20881v2
- Date: Thu, 20 Mar 2025 17:54:41 GMT
- Title: Hamiltonian Neural Networks approach to fuzzball geodesics
- Authors: Andrea Cipriani, Alessandro De Santis, Giorgio Di Russo, Alfredo Grillo, Luca Tabarroni
- Abstract summary: Hamiltonian Neural Networks (HNNs) are tools that minimize a loss function to solve Hamilton equations of motion. In this work, we implement several HNNs trained to solve, with high accuracy, the Hamilton equations for a massless probe moving inside a smooth and horizonless geometry known as D1-D5 circular fuzzball.
- Score: 39.58317527488534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent increase in computational resources and data availability has led to a significant rise in the use of Machine Learning (ML) techniques for data analysis in physics. However, the application of ML methods to solve differential equations capable of describing even complex physical systems is not yet fully widespread in theoretical high-energy physics. Hamiltonian Neural Networks (HNNs) are tools that minimize a loss function defined to solve Hamilton equations of motion. In this work, we implement several HNNs trained to solve, with high accuracy, the Hamilton equations for a massless probe moving inside a smooth and horizonless geometry known as D1-D5 circular fuzzball. We study both planar (equatorial) and non-planar geodesics in different regimes according to the impact parameter, some of which are unstable. Our findings suggest that HNNs could eventually replace standard numerical integrators, as they are equally accurate but more reliable in critical situations.
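To make the HNN setup concrete, here is a minimal sketch of the general idea: a network parametrizes the trajectory $(q(t), p(t))$ and the loss penalizes the residual of Hamilton's equations at sampled times. The harmonic oscillator below is a stand-in for illustration only; the paper's Hamiltonian governs null geodesics in the D1-D5 fuzzball metric, and the authors' architectures and training details are not reproduced here.

```python
# Minimal sketch of the HNN idea (PyTorch): a network parametrizes the
# trajectory (q(t), p(t)) and the loss is the residual of Hamilton's
# equations. H = (p^2 + q^2)/2 below is a stand-in, NOT the null-geodesic
# Hamiltonian of the D1-D5 fuzzball.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 2),                 # outputs (q, p) as functions of t
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
q0, p0 = 1.0, 0.0                           # initial conditions

for step in range(2000):
    t = torch.rand(128, 1, requires_grad=True)   # collocation times in [0, 1]
    out = net(t)
    q = q0 + t * out[:, :1]                 # hard-enforce q(0) = q0
    p = p0 + t * out[:, 1:]                 # hard-enforce p(0) = p0
    dq = torch.autograd.grad(q.sum(), t, create_graph=True)[0]
    dp = torch.autograd.grad(p.sum(), t, create_graph=True)[0]
    # Hamilton's equations for H = (p^2 + q^2)/2: dq/dt = p, dp/dt = -q
    loss = ((dq - p)**2 + (dp + q)**2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```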
Related papers
- Quantum algorithm for solving nonlinear differential equations based on physics-informed effective Hamiltonians [14.379311972506791]
We propose a distinct approach to solving differential equations on quantum computers by encoding the problem into ground states of effective Hamiltonian operators.
Our algorithm relies on constructing such operators in the Chebyshev space, where an effective Hamiltonian is a sum of global differential and data constraints.
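A heavily simplified classical analogue of this encoding (a sketch, not the quantum algorithm itself): stack the differential and data constraints into a linear operator on Chebyshev coefficients, so that the solution is the minimizer of the squared residual, i.e. the ground state of the effective operator $A^\top A$. The ODE and the constraint weighting below are illustrative choices.

```python
# Classical toy analogue (NOT the quantum algorithm): a differential
# constraint plus a data constraint form a matrix A on Chebyshev
# coefficients; the solution minimizes ||A c - b||^2, i.e. it is the
# ground state of A^T A. Illustrative ODE: u' = -u, u(-1) = e, so u = e^{-x}.
import numpy as np
from numpy.polynomial import chebyshev as C

N = 16                                        # Chebyshev coefficients
x = np.cos(np.pi * np.arange(N) / (N - 1))    # collocation points
rows = []
for k in range(N):
    e = np.zeros(N); e[k] = 1.0               # k-th unit coefficient vector
    # differential constraint u'(x) + u(x) = 0 evaluated at the points
    rows.append(C.chebval(x, C.chebder(e)) + C.chebval(x, e))
A = np.stack(rows, axis=1)                    # (points, coefficients)
# data (boundary) constraint u(-1) = e, weighted so it is enforced strongly
w = 10.0
bc = w * np.array([C.chebval(-1.0, np.eye(N)[k]) for k in range(N)])
A_full = np.vstack([A, bc])
b_full = np.concatenate([np.zeros(N), [w * np.e]])
coeffs, *_ = np.linalg.lstsq(A_full, b_full, rcond=None)
# error vs. the exact solution is small (spectral accuracy)
print(np.max(np.abs(C.chebval(x, coeffs) - np.exp(-x))))
```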
arXiv Detail & Related papers (2025-04-17T17:59:33Z)
- Enabling Automatic Differentiation with Mollified Graph Neural Operators [75.3183193262225]
We propose the mollified graph neural operator (mGNO), the first method to leverage automatic differentiation and compute exact gradients on arbitrary geometries.
For a PDE example on regular grids, mGNO paired with autograd reduced the L2 relative data error by 20x compared to finite differences.
It can also solve PDEs on unstructured point clouds seamlessly, using physics losses only, at resolutions vastly lower than those needed for finite differences to be accurate enough.
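A generic sketch of the payoff (plain PyTorch autograd, not the mGNO implementation): a learned field can be differentiated exactly at arbitrary unstructured points, whereas finite differences need a stencil and a step size.

```python
# Generic autograd sketch (not the mGNO code): exact derivatives of a
# learned field at unstructured points vs. a finite-difference stencil.
import torch

field = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
x = torch.rand(1024, 2, requires_grad=True)        # unstructured point cloud
u = field(x)
grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]  # exact du/dx

# finite-difference comparison along the first coordinate: needs a step h,
# trading truncation error (large h) against round-off (small h)
h = 1e-3
ex = torch.tensor([h, 0.0])
fd = (field(x + ex) - field(x - ex)) / (2 * h)
print((fd - grad_u[:, :1]).abs().max().item())
```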
arXiv Detail & Related papers (2025-04-11T06:16:30Z)
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$ quantum wavefunction.
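For reference, the core FNO building block looks roughly like the following sketch (a generic 1D spectral convolution, not the paper's code): transform to Fourier space, apply a learned complex multiplier to the lowest modes, and transform back.

```python
# Minimal 1D Fourier layer in the spirit of an FNO block (sketch).
import torch

class SpectralConv1d(torch.nn.Module):
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = torch.nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, u):                      # u: (batch, channels, grid)
        u_hat = torch.fft.rfft(u)              # to Fourier space
        out = torch.zeros_like(u_hat)
        # learned complex multiplier on the lowest `modes` frequencies
        out[:, :, :self.modes] = torch.einsum(
            "bik,iok->bok", u_hat[:, :, :self.modes], self.weight
        )
        return torch.fft.irfft(out, n=u.size(-1))  # back to physical space

layer = SpectralConv1d(channels=4, modes=8)
y = layer(torch.randn(2, 4, 64))               # (2, 4, 64)
```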
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
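One way to picture the spatial part of the decomposition (an assumed reading, not the authors' code): interleave a fine grid into several staggered coarse subgrids that can be processed in parallel and then re-interleaved.

```python
# Sketch of staggered spatial decomposition: a fine (H, W) field becomes
# s*s interleaved coarse fields, and the operation is exactly invertible.
import torch

def decompose(u, s):
    """Split (batch, H, W) fields into s*s staggered coarse subfields."""
    return [u[:, i::s, j::s] for i in range(s) for j in range(s)]

def recompose(parts, s):
    b, h, w = parts[0].shape
    u = torch.empty(b, h * s, w * s)
    for idx, p in enumerate(parts):
        i, j = divmod(idx, s)
        u[:, i::s, j::s] = p
    return u

u = torch.randn(1, 64, 64)
parts = decompose(u, 2)                    # four 32x32 coarse subfields
assert torch.equal(recompose(parts, 2), u)
```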
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Learning to Solve PDE-constrained Inverse Problems with Graph Networks [51.89325993156204]
In many application domains across science and engineering, we are interested in solving inverse problems with constraints defined by a partial differential equation (PDE).
Here we explore GNNs to solve such PDE-constrained inverse problems.
We demonstrate computational speedups of up to 90x using GNNs compared to principled solvers.
arXiv Detail & Related papers (2022-06-01T18:48:01Z)
- Physics-informed neural networks for solving parametric magnetostatic problems [0.45119235878273]
This paper investigates the ability of physics-informed neural networks to learn the magnetic field response as a function of design parameters.
We use a deep neural network (DNN) to represent the magnetic field as a function of space and a total of ten parameters.
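A generic parametric-PINN sketch of this kind of setup, with a 2D Poisson equation standing in for the magnetostatic equations and a single design parameter instead of ten:

```python
# Parametric PINN sketch (assumptions: Poisson stand-in PDE, one parameter).
# The network maps (x, y, theta) -> u, and the PDE residual is built with
# autograd second derivatives, minimized jointly over space and parameters.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def pde_residual(xy, theta):
    u = net(torch.cat([xy, theta], dim=1))
    grads = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
    u_xx = torch.autograd.grad(grads[:, 0].sum(), xy, create_graph=True)[0][:, 0]
    u_yy = torch.autograd.grad(grads[:, 1].sum(), xy, create_graph=True)[0][:, 1]
    return u_xx + u_yy + theta.squeeze(1)   # Poisson: laplacian(u) = -theta

xy = torch.rand(256, 2, requires_grad=True)    # spatial collocation points
theta = torch.rand(256, 1)                     # sampled design parameter
loss = pde_residual(xy, theta).pow(2).mean()
```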
arXiv Detail & Related papers (2022-02-08T18:12:26Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Symplectic Learning for Hamiltonian Neural Networks [0.0]
Hamiltonian Neural Networks (HNNs) took a first step towards a unified "gray box" approach.
We exploit the symplectic structure of Hamiltonian systems with a different loss function.
We mathematically guarantee the existence of an exact Hamiltonian function which the HNN can learn.
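One plausible reading of such a symplectic loss (a sketch under that assumption, not necessarily the paper's exact formulation): parametrize a separable $H = T(p) + V(q)$ and require a symplectic Euler step under the learned $H$ to match observed transitions.

```python
# Sketch of a symplectic-integrator-in-the-loss idea (assumed reading):
# a kick-drift step under the learned H must reproduce (q, p) -> (q', p').
import torch

T = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
V = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def symplectic_euler(q, p, dt):
    dV = torch.autograd.grad(V(q).sum(), q, create_graph=True)[0]
    p1 = p - dt * dV                         # kick: dp/dt = -dH/dq
    dT = torch.autograd.grad(T(p1).sum(), p1, create_graph=True)[0]
    return q + dt * dT, p1                   # drift: dq/dt = dH/dp

opt = torch.optim.Adam(list(T.parameters()) + list(V.parameters()), lr=1e-3)
q = torch.randn(64, 1, requires_grad=True)   # stand-in data batch
p = torch.randn(64, 1)
q_next, p_next = q.detach() * 0.99, p * 0.99 # placeholder observed targets
q1, p1 = symplectic_euler(q, p, dt=0.1)
loss = ((q1 - q_next)**2 + (p1 - p_next)**2).mean()
opt.zero_grad(); loss.backward(); opt.step()
```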
arXiv Detail & Related papers (2021-06-22T13:33:12Z)
- Sparse Symplectically Integrated Neural Networks [15.191984347149667]
We introduce Sparse Symplectically Integrated Neural Networks (SSINNs).
SSINNs are a novel class of models for learning Hamiltonian dynamical systems from data.
We evaluate SSINNs on four classical Hamiltonian dynamical problems.
arXiv Detail & Related papers (2020-06-10T03:33:37Z)
- Hamiltonian neural networks for solving equations of motion [3.1498833540989413]
We present a Hamiltonian neural network that solves the differential equations that govern dynamical systems.
A symplectic Euler integrator requires two orders of magnitude more evaluation points than the Hamiltonian network to achieve the same order of numerical error.
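For context, the symplectic Euler baseline referred to here is the standard kick-drift scheme; a minimal sketch for a separable $H = p^2/2 + V(q)$ with a harmonic potential:

```python
# Symplectic Euler for separable H(q, p) = p^2/2 + V(q): update the momentum
# with the force at the current position, then drift with the new momentum.
import numpy as np

def symplectic_euler(q0, p0, dVdq, dt, steps):
    q, p = q0, p0
    traj = [(q, p)]
    for _ in range(steps):
        p = p - dt * dVdq(q)   # kick
        q = q + dt * p         # drift with the updated momentum
        traj.append((q, p))
    return np.array(traj)

traj = symplectic_euler(1.0, 0.0, lambda q: q, dt=0.01, steps=10_000)
# energy error stays bounded over long times, unlike explicit Euler
E = 0.5 * traj[:, 1]**2 + 0.5 * traj[:, 0]**2
print(E.min(), E.max())
```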
arXiv Detail & Related papers (2020-01-29T21:48:35Z)