Learning Macroscopic Dynamics from Partial Microscopic Observations
- URL: http://arxiv.org/abs/2410.23938v2
- Date: Fri, 01 Nov 2024 04:28:59 GMT
- Title: Learning Macroscopic Dynamics from Partial Microscopic Observations
- Authors: Mengyi Chen, Qianxiao Li
- Abstract summary: We propose a method to learn macroscopic dynamics requiring only force computations on a subset of microscopic coordinates.
Our method relies on a sparsity assumption: the force on each microscopic coordinate depends only on a small number of other coordinates.
We demonstrate the accuracy, force efficiency, and robustness of our method on learning macroscopic closure models from a variety of microscopic systems.
- Score: 12.707050104493218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Macroscopic observables of a system are of keen interest in real applications such as the design of novel materials. Current methods rely on microscopic trajectory simulations, where the forces on all microscopic coordinates need to be computed or measured. However, this can be computationally prohibitive for realistic systems. In this paper, we propose a method to learn macroscopic dynamics requiring only force computations on a subset of the microscopic coordinates. Our method relies on a sparsity assumption: the force on each microscopic coordinate relies only on a small number of other coordinates. The main idea of our approach is to map the training procedure on the macroscopic coordinates back to the microscopic coordinates, on which partial force computations can be used as stochastic estimation to update model parameters. We provide a theoretical justification of this under suitable conditions. We demonstrate the accuracy, force computation efficiency, and robustness of our method on learning macroscopic closure models from a variety of microscopic systems, including those modeled by partial differential equations or molecular dynamics simulations.
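The core idea, using partial force computations as a stochastic estimate in the parameter update, can be illustrated with a minimal toy sketch. This is not the authors' implementation: the linear force, the mean observable, and the one-parameter closure model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, batch, k = 200, 64, 10        # microscopic dim, batch size, forces computed per step

def force(x, idx):
    """Toy sparse force: F_i(x) = -x_i depends on a single coordinate."""
    return -x[:, idx]

# Macroscopic coordinate phi(x) = mean(x); here the closure dz/dt = -z is exact.
theta, lr = 0.0, 0.1             # hypothetical closure model f_theta(z) = -theta * z
for step in range(300):
    z0 = rng.normal(size=batch)
    x = rng.normal(loc=z0[:, None], size=(batch, n))   # microscopic samples
    z = x.mean(axis=1)
    idx = rng.choice(n, size=k, replace=False)         # compute only k of n forces
    dz_est = force(x, idx).mean(axis=1)  # unbiased estimate of (1/n) * sum_i F_i(x)
    resid = -theta * z - dz_est
    theta -= lr * np.mean(2 * resid * (-z))            # SGD on the squared residual
# theta approaches 1, i.e. the fitted closure recovers dz/dt = -z
```

Because the sampled-coordinate force average is an unbiased estimator of the full macroscopic drift, the stochastic gradient converges in expectation to the full-force gradient while requiring only k of the n force evaluations per step.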
Related papers
- A Microstructure-based Graph Neural Network for Accelerating Multiscale Simulations [0.0]
We introduce an alternative surrogate modeling strategy that preserves the multiscale nature of the problem.
We achieve this by predicting full-field microscopic strains using a graph neural network (GNN) while retaining the microscopic material model.
We demonstrate for several challenging scenarios that the surrogate can predict complex macroscopic stress-strain paths.
arXiv Detail & Related papers (2024-02-20T15:54:24Z) - Towards Predicting Equilibrium Distributions for Molecular Systems with Deep Learning [60.02391969049972]
We introduce a novel deep learning framework, called Distributional Graphormer (DiG), in an attempt to predict the equilibrium distribution of molecular systems.
DiG employs deep neural networks to transform a simple distribution towards the equilibrium distribution, conditioned on a descriptor of a molecular system.
arXiv Detail & Related papers (2023-06-08T17:12:08Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Bayesian Active Learning for Scanning Probe Microscopy: from Gaussian Processes to Hypothesis Learning [0.0]
We discuss the basic principles of Bayesian active learning and illustrate its applications for scanning probe microscopes (SPMs).
These frameworks allow for the use of prior data, the discovery of specific functionalities as encoded in spectral data, and exploration of physical laws manifesting during the experiment.
arXiv Detail & Related papers (2022-05-30T23:01:41Z) - Gaussian Moments as Physically Inspired Molecular Descriptors for Accurate and Scalable Machine Learning Potentials [0.0]
We propose a machine learning method for constructing high-dimensional potential energy surfaces based on feed-forward neural networks.
The accuracy of the developed approach in representing both chemical and configurational spaces is comparable to that of several established machine learning models.
arXiv Detail & Related papers (2021-09-15T16:46:46Z) - Using machine-learning modelling to understand macroscopic dynamics in a system of coupled maps [0.0]
We consider as a case study the macroscopic motion emerging from a system of globally coupled maps.
We build a coarse-grained Markov process for the macroscopic dynamics both with a machine learning approach and with a direct numerical computation of the transition probability of the coarse-grained process.
We are able to infer important information about the effective dimension of the attractor, the persistence of memory effects and the multi-scale structure of the dynamics.
arXiv Detail & Related papers (2020-11-08T15:38:12Z) - Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z) - Macro-to-micro quantum mapping and the emergence of nonlinearity [58.720142291102135]
We show how to assign to a system a microscopic description that abides by all macroscopic constraints.
As a by-product, we show how effective nonlinear dynamics can emerge from the linear quantum evolution.
arXiv Detail & Related papers (2020-07-28T17:22:49Z) - A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z) - Differentiable Molecular Simulations for Control and Learning [0.9208007322096533]
We develop new routes for parameterizing Hamiltonians to infer macroscopic models and develop control protocols.
We demonstrate how this can be achieved using differentiable simulations where bulk target observables and simulation outcomes can be analytically differentiated with respect to Hamiltonians.
arXiv Detail & Related papers (2020-02-27T04:35:19Z) - Fast approximations in the homogeneous Ising model for use in scene analysis [61.0951285821105]
We provide accurate approximations that make it possible to numerically calculate quantities needed in inference.
We show that our approximation formulae are scalable and unfazed by the size of the Markov Random Field.
The practical import of our approximation formulae is illustrated in performing Bayesian inference in a functional Magnetic Resonance Imaging activation detection experiment, and also in likelihood ratio testing for anisotropy in the spatial patterns of yearly increases in pistachio tree yields.
arXiv Detail & Related papers (2017-12-06T14:24:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.