Accelerating discrete dislocation dynamics simulations with graph neural
networks
- URL: http://arxiv.org/abs/2208.03296v2
- Date: Fri, 21 Apr 2023 19:58:55 GMT
- Title: Accelerating discrete dislocation dynamics simulations with graph neural
networks
- Authors: Nicolas Bertin, Fei Zhou
- Abstract summary: We introduce a new DDD-GNN framework in which the expensive time-integration of dislocation motion is entirely substituted by a graph neural network model trained on DDD trajectories.
We show that the DDD-GNN model is stable and reproduces very well unseen ground-truth DDD simulation responses for a range of straining rates and obstacle densities.
- Score: 5.647516208808728
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Discrete dislocation dynamics (DDD) is a widely employed computational method
to study plasticity at the mesoscale that connects the motion of dislocation
lines to the macroscopic response of crystalline materials. However, the
computational cost of DDD simulations remains a bottleneck that limits its
range of applicability. Here, we introduce a new DDD-GNN framework in which the
expensive time-integration of dislocation motion is entirely substituted by a
graph neural network (GNN) model trained on DDD trajectories. As a first
application, we demonstrate the feasibility and potential of our method on a
simple yet relevant model of a dislocation line gliding through an array of
obstacles. We show that the DDD-GNN model is stable and reproduces very well
unseen ground-truth DDD simulation responses for a range of straining rates and
obstacle densities, without the need to explicitly compute nodal forces or
dislocation mobilities during time-integration. Our approach opens new
promising avenues to accelerate DDD simulations and to incorporate more complex
dislocation motion behaviors.
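The core idea above — replacing the DDD time-integrator with a learned graph model rolled out step by step — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `gnn_step` is a hypothetical stand-in for the trained GNN, and the hand-written update inside it is purely illustrative.

```python
import numpy as np

# Sketch of a surrogate rollout (assumed structure, not the paper's code):
# a trained GNN would map the current dislocation graph (nodal positions,
# connectivity, loading state) directly to the next-step positions.  Here a
# simple hand-written update plays the role of that trained model.

def gnn_step(positions, adjacency, strain_rate, dt=1e-3):
    """Stand-in for the learned model: one surrogate time step."""
    deg = adjacency.sum(axis=1, keepdims=True)
    neighbor_mean = adjacency @ positions / np.maximum(deg, 1.0)
    # Illustrative displacement: drift with the load, relax toward neighbors.
    return positions + dt * (strain_rate + 0.1 * (neighbor_mean - positions))

# A short chain of dislocation nodes gliding under a constant strain rate
n = 8
positions = np.column_stack([np.linspace(0.0, 1.0, n), np.zeros(n)])
adjacency = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

# Rollout: no nodal forces or mobilities are evaluated inside the loop
for _ in range(100):
    positions = gnn_step(positions, adjacency, strain_rate=np.array([0.0, 1.0]))
```

The property the paper evaluates is exactly the stability of such a rollout: small per-step errors must not compound over many surrogate steps.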
Related papers
- Principal Component Flow Map Learning of PDEs from Incomplete, Limited, and Noisy Data [0.0]
We present a computational technique for modeling the evolution of dynamical systems in a reduced basis.
We focus on the challenging problem of modeling partially-observed partial differential equations (PDEs) on high-dimensional non-uniform grids.
We present a neural network structure that is suitable for PDE modeling with noisy and limited data available only on a subset of the state variables or computational domain.
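The general recipe described above — project snapshots onto a principal-component basis, then learn a flow map on the reduced coordinates — can be sketched with a linear least-squares map standing in for the neural network. The synthetic data and the linear map are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Synthetic snapshot matrix: each column is the PDE state at one time
n_state, n_time = 200, 60
t = np.linspace(0, 2 * np.pi, n_time)
x = np.linspace(0, 1, n_state)[:, None]
snapshots = np.sin(2 * np.pi * x - t) + 0.5 * np.sin(4 * np.pi * x + 2 * t)

# 1) Reduced basis from the leading principal components
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 4
basis = U[:, :r]                      # n_state x r
coeffs = basis.T @ snapshots          # r x n_time, reduced trajectories

# 2) Fit a flow map z_{k+1} = A z_k in the reduced coordinates
#    (a neural network would replace this linear fit)
Z0, Z1 = coeffs[:, :-1], coeffs[:, 1:]
A = Z1 @ np.linalg.pinv(Z0)

# 3) Predict forward from the first snapshot and lift back
z = coeffs[:, 0]
for _ in range(n_time - 1):
    z = A @ z
prediction = basis @ z                # approximate final full state
```

Because this toy data is exactly rank 4 with linear reduced dynamics, the linear flow map reproduces the final snapshot; real PDE data is where the nonlinear network becomes necessary.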
arXiv Detail & Related papers (2024-07-15T16:06:20Z) - Constrained Exploration via Reflected Replica Exchange Stochastic Gradient Langevin Dynamics [10.290462113848054]
ReSGLD is an effective tool for non-convex learning tasks in large-scale datasets.
We explore the role of simulation efficiency in constrained multi-modal distributions and image classification.
arXiv Detail & Related papers (2024-05-13T15:25:03Z) - Accelerating Scalable Graph Neural Network Inference with Node-Adaptive
Propagation [80.227864832092]
Graph neural networks (GNNs) have exhibited exceptional efficacy in a diverse array of applications.
The sheer size of large-scale graphs presents a significant challenge to real-time inference with GNNs.
We propose an online propagation framework and two novel node-adaptive propagation methods.
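One plausible reading of node-adaptive propagation — an assumption made for illustration, not the authors' algorithm — is to let each node stop accumulating propagated features once its own representation has converged, so cheap nodes exit early during inference:

```python
import numpy as np

def adaptive_propagate(adj_norm, features, max_hops=10, tol=1e-3):
    """Propagate features, freezing each node once its update is small."""
    h = features.astype(float).copy()
    active = np.ones(len(h), dtype=bool)      # nodes still propagating
    hops = np.zeros(len(h), dtype=int)        # hops actually used per node
    for _ in range(max_hops):
        if not active.any():
            break
        new_h = adj_norm @ h
        delta = np.linalg.norm(new_h - h, axis=1)
        h[active] = new_h[active]             # frozen nodes keep their state
        hops[active] += 1
        active &= delta > tol                 # converged nodes drop out
    return h, hops

# Path graph with self-loops, row-normalized (lazy random walk)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float) + np.eye(4)
adj_norm = A / A.sum(axis=1, keepdims=True)

out, hops = adaptive_propagate(adj_norm, np.eye(4))
```

The per-node `hops` counter is the point: instead of a fixed propagation depth for the whole graph, each node pays only for the hops it needs.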
arXiv Detail & Related papers (2023-10-17T05:03:00Z) - Generative Modeling with Phase Stochastic Bridges [49.4474628881673]
Diffusion models (DMs) represent state-of-the-art generative models for continuous inputs.
We introduce a novel generative modeling framework grounded in phase space dynamics.
Our framework demonstrates the capability to generate realistic data points at an early stage of dynamics propagation.
arXiv Detail & Related papers (2023-10-11T18:38:28Z) - Learning dislocation dynamics mobility laws from large-scale MD
simulations [6.058101483996012]
We introduce a machine-learning (ML) framework to streamline the development of data-driven mobility laws.
Our GNN mobility implemented in large-scale DDD simulations accurately reproduces the challenging tension/compression asymmetry observed in ground-truth MD simulations.
arXiv Detail & Related papers (2023-09-25T18:16:45Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been effectively demonstrated in solving forward and inverse differential equation problems.
However, PINNs can be trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs for improving the stability of the training process.
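The explicit/implicit distinction can be seen on a one-dimensional stiff quadratic — a generic illustration of implicit gradient steps, not the paper's PINN setup. The explicit step diverges once the learning rate exceeds the stability limit, while the implicit step, which evaluates the gradient at the new iterate, remains stable:

```python
# Explicit:  x_{k+1} = x_k - lr * L'(x_k)
# Implicit:  x_{k+1} = x_k - lr * L'(x_{k+1})
# For L(x) = 2 x^2 (so L'(x) = 4 x) the implicit equation is linear and
# solvable in closed form; in general it needs an inner solver.

LR, CURVATURE = 0.6, 4.0        # lr * curvature > 2: explicit is unstable

def explicit_step(x):
    return x - LR * CURVATURE * x          # x <- -1.4 x  (diverges)

def implicit_step(x):
    # Solve x_new = x - LR * CURVATURE * x_new  for x_new
    return x / (1.0 + LR * CURVATURE)      # x <- x / 3.4  (contracts)

x_exp = x_imp = 1.0
for _ in range(20):
    x_exp = explicit_step(x_exp)
    x_imp = implicit_step(x_imp)
```

This stability margin is what makes implicit updates attractive for the stiff, multi-scale losses that trip up standard PINN training.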
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Coupled and Uncoupled Dynamic Mode Decomposition in Multi-Compartmental
Systems with Applications to Epidemiological and Additive Manufacturing
Problems [58.720142291102135]
We show that Dynamic Mode Decomposition (DMD) may be a powerful tool when applied to nonlinear problems.
In particular, we show interesting numerical applications to a continuous delayed-SIRD model for Covid-19.
arXiv Detail & Related papers (2021-10-12T21:42:14Z) - The Limiting Dynamics of SGD: Modified Loss, Phase Space Oscillations,
and Anomalous Diffusion [29.489737359897312]
We study the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD).
We show that the key ingredient driving these dynamics is not the original training loss, but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents, which cause oscillations in phase space.
arXiv Detail & Related papers (2021-07-19T20:18:57Z) - Real-time simulation of parameter-dependent fluid flows through deep
learning-based reduced order models [0.2538209532048866]
Reduced order models (ROMs) provide reliable approximations to parameter-dependent fluid dynamics problems in rapid times.
Deep learning (DL)-based ROMs overcome these limitations by learning, in a non-intrusive way, both the nonlinear trial manifold and the reduced dynamics.
The resulting POD-DL-ROMs are shown to provide accurate results in almost real-time for the flow around a cylinder benchmark, the fluid-structure interaction between an elastic beam attached to a fixed, rigid block and a laminar incompressible flow, and the blood flow in a cerebral aneurysm.
arXiv Detail & Related papers (2021-06-10T13:07:33Z) - Dynamic Mode Decomposition in Adaptive Mesh Refinement and Coarsening
Simulations [58.720142291102135]
Dynamic Mode Decomposition (DMD) is a powerful data-driven method used to extract coherent structures.
This paper proposes a strategy to enable DMD to extract features from observations with different mesh topologies and dimensions.
arXiv Detail & Related papers (2021-04-28T22:14:25Z)
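Since DMD appears in two of the entries above, a textbook (exact) DMD sketch on synthetic snapshots may be useful; the data and system are invented for illustration and do not come from either paper.

```python
import numpy as np

# Exact DMD on snapshots of a known linear system x_{k+1} = A_true x_k
A_true = np.array([[0.90, -0.20],
                   [0.10,  0.95]])
X = np.empty((2, 50))
X[:, 0] = [1.0, 0.5]
for k in range(49):
    X[:, k + 1] = A_true @ X[:, k]

X0, X1 = X[:, :-1], X[:, 1:]            # time-shifted snapshot pairs

# Best-fit linear operator in the SVD basis of X0
U, s, Vh = np.linalg.svd(X0, full_matrices=False)
A_tilde = U.conj().T @ X1 @ Vh.conj().T @ np.diag(1.0 / s)

# DMD eigenvalues recover the spectrum of A_true; the modes are the
# corresponding spatial structures.
eigvals, W = np.linalg.eig(A_tilde)
modes = X1 @ Vh.conj().T @ np.diag(1.0 / s) @ W
```

The eigenvalues of `A_tilde` match those of the true operator here because the snapshots span the full state space; the adaptive-mesh paper above addresses what to do when snapshots live on changing grids and this construction no longer applies directly.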
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all information) and is not responsible for any consequences of its use.