Learning Kinetic Monte Carlo stochastic dynamics with Deep Generative Adversarial Networks
- URL: http://arxiv.org/abs/2507.21763v1
- Date: Tue, 29 Jul 2025 12:48:03 GMT
- Title: Learning Kinetic Monte Carlo stochastic dynamics with Deep Generative Adversarial Networks
- Authors: Daniele Lanzoni, Olivier Pierre-Louis, Roberto Bergamaschini, Francesco Montalenti
- Abstract summary: We show that Generative Adversarial Networks (GANs) may be fruitfully exploited to learn stochastic dynamics. We showcase the application to a two-dimensional, many-particle system, focusing on surface-step fluctuations and on the related time-dependent roughness.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that Generative Adversarial Networks (GANs) may be fruitfully exploited to learn stochastic dynamics, surrogating traditional models while capturing thermal fluctuations. Specifically, we showcase the application to a two-dimensional, many-particle system, focusing on surface-step fluctuations and on the related time-dependent roughness. After the construction of a dataset based on Kinetic Monte Carlo simulations, a conditional GAN is trained to propagate stochastically the state of the system in time, allowing the generation of new sequences with a reduced computational cost. Modifications with respect to standard GANs, which facilitate convergence and increase accuracy, are discussed. The trained network is demonstrated to quantitatively reproduce equilibrium and kinetic properties, including scaling laws, with deviations of a few percent from the exact value. Extrapolation limits and future perspectives are critically discussed.
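The pipeline described in the abstract (train a conditional GAN on Kinetic Monte Carlo trajectories, then iterate the generator to propagate the state stochastically in time) can be sketched in miniature. The linear generator, dimensions, and noise scale below are illustrative stand-ins, not the authors' trained architecture:

```python
import numpy as np

# Minimal sketch of conditional stochastic propagation in the spirit of the
# abstract: a generator G maps (current state, latent noise) -> next state,
# so repeated application yields a stochastic trajectory. The linear G here
# is a placeholder for a trained conditional GAN generator; the dimensions,
# weights, and noise scale are illustrative assumptions only.

rng = np.random.default_rng(0)
STATE_DIM, NOISE_DIM = 8, 4

# Stand-ins for trained generator parameters.
W_state = np.eye(STATE_DIM)                                   # deterministic part
W_noise = 0.1 * rng.standard_normal((STATE_DIM, NOISE_DIM))   # injects fluctuations

def generator(state, noise):
    """Conditional generator: next state given current state and latent noise."""
    return W_state @ state + W_noise @ noise

def propagate(state0, n_steps, rng):
    """Roll out a stochastic trajectory by iterating the generator."""
    traj = [state0]
    for _ in range(n_steps):
        z = rng.standard_normal(NOISE_DIM)
        traj.append(generator(traj[-1], z))
    return np.stack(traj)

traj = propagate(np.zeros(STATE_DIM), n_steps=100, rng=rng)
# With an identity W_state the state performs a random walk, so the spread of
# fluctuations grows with time, loosely mimicking step roughening.
roughness = traj.std(axis=1)
print(traj.shape)  # (101, 8)
```

Once such a generator is trained against a KMC dataset, sampling new sequences costs only forward passes, which is the source of the computational savings the abstract reports.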
Related papers
- Equilibrium Matching: Generative Modeling with Implicit Energy-Based Models [52.74448905289362]
EqM is a generative modeling framework built from an equilibrium dynamics perspective. By replacing time-conditional velocities with a unified equilibrium landscape, EqM offers a tighter bridge between flow and energy-based models.
arXiv Detail & Related papers (2025-10-02T17:59:06Z)
- Autoregressive Typical Thermal States [0.0]
We introduce an autoregressive framework to calculate finite-temperature properties of a quantum system. By comparing our algorithm to exact results for the spin 1/2 quantum XY chain, we demonstrate that autoregressive typical thermal states are capable of accurately calculating thermal observables.
arXiv Detail & Related papers (2025-08-19T02:20:15Z)
- KITINet: Kinetics Theory Inspired Network Architectures with PDE Simulation Approaches [43.872190335490515]
This paper introduces KITINet, a novel architecture that reinterprets feature propagation through the lens of non-equilibrium particle dynamics. At its core, we propose a residual module that models feature updates as the evolution of a particle system. This formulation mimics particle collisions and energy exchange, enabling adaptive feature refinement via physics-informed interactions.
arXiv Detail & Related papers (2025-05-23T13:58:29Z)
- Leveraging recurrence in neural network wavefunctions for large-scale simulations of Heisenberg antiferromagnets on the square lattice [1.7489899297384526]
Machine-learning-based variational Monte Carlo simulations are a promising approach for targeting quantum many-body ground states. We employ recurrent neural networks (RNNs) as a variational ansatz, and leverage their recurrent nature to simulate the ground states of progressively larger systems. We show that we are able to systematically improve the accuracy of the results from our simulations by increasing the training time.
arXiv Detail & Related papers (2025-02-24T13:35:23Z)
- Recurrent convolutional neural networks for non-adiabatic dynamics of quantum-classical systems [1.2972104025246092]
We present an RNN model based on convolutional neural networks for modeling the nonlinear non-adiabatic dynamics of hybrid quantum-classical systems. Validation studies show that the trained PARC model can reproduce the space-time evolution of a one-dimensional semi-classical Holstein model.
arXiv Detail & Related papers (2024-12-09T16:23:25Z)
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces novel deep dynamical models designed to represent continuous-time sequences. We train the model using maximum likelihood estimation with Markov chain Monte Carlo. Experimental results on oscillating systems, videos, and real-world state sequences (MuJoCo) demonstrate that our model with the learnable energy-based prior outperforms existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Gibbs-Duhem-Informed Neural Networks for Binary Activity Coefficient Prediction [45.84205238554709]
We propose Gibbs-Duhem-informed neural networks for the prediction of binary activity coefficients at varying compositions.
We include the Gibbs-Duhem equation explicitly in the loss function for training neural networks.
arXiv Detail & Related papers (2023-05-31T07:36:45Z)
- Neural net modeling of equilibria in NSTX-U [0.0]
We develop two neural networks relevant to equilibrium and shape control modeling.
Networks include Eqnet, a free-boundary equilibrium solver trained on the EFIT01 reconstruction algorithm, and Pertnet, which is trained on the Gspert code.
We report strong performance for both networks, indicating that these models could reliably be used within closed-loop simulations.
arXiv Detail & Related papers (2022-02-28T16:09:58Z)
- Role of stochastic noise and generalization error in the time propagation of neural-network quantum states [0.0]
Neural-network quantum states (NQS) have been shown to be a suitable variational ansatz to simulate out-of-equilibrium dynamics.
We show that stable and accurate time propagation can be achieved in regimes of sufficiently regularized variational dynamics.
arXiv Detail & Related papers (2021-05-03T17:55:09Z)
- Quantum Markov Chain Monte Carlo with Digital Dissipative Dynamics on Quantum Computers [52.77024349608834]
We develop a digital quantum algorithm that simulates interaction with an environment using a small number of ancilla qubits.
We evaluate the algorithm by simulating thermal states of the transverse Ising model.
arXiv Detail & Related papers (2021-03-04T18:21:00Z)
- Quantum Generative Adversarial Networks in a Continuous-Variable Architecture to Simulate High Energy Physics Detectors [0.0]
We introduce and analyze a new prototype of quantum GAN (qGAN) employed in continuous-variable quantum computing.
Two CV qGAN models, with a quantum and a classical discriminator, have been tested to reproduce calorimeter outputs at reduced size.
arXiv Detail & Related papers (2021-01-26T23:33:14Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences arising from their use.