Accelerating Multiscale Modeling with Hybrid Solvers: Coupling FEM and Neural Operators with Domain Decomposition
- URL: http://arxiv.org/abs/2504.11383v2
- Date: Wed, 16 Apr 2025 12:26:48 GMT
- Title: Accelerating Multiscale Modeling with Hybrid Solvers: Coupling FEM and Neural Operators with Domain Decomposition
- Authors: Wei Wang, Maryam Hakimzadeh, Haihui Ruan, Somdatta Goswami
- Abstract summary: This work introduces a novel hybrid framework that integrates physics-informed DeepONet with FEM through domain decomposition. We show that our proposed hybrid solver maintains solution continuity across subdomain interfaces, reduces computational costs by eliminating fine mesh requirements, and mitigates error accumulation in time-dependent simulations. This work bridges the gap between numerical methods and AI-driven surrogates, offering a scalable pathway for high-fidelity simulations in engineering and scientific applications.
- Score: 3.0635300721402228
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Numerical solvers for partial differential equations (PDEs) face challenges balancing computational cost and accuracy, especially in multiscale and dynamic systems. Neural operators can significantly speed up simulations; however, they often face challenges such as error accumulation and limited generalization in multiphysics problems. This work introduces a novel hybrid framework that integrates physics-informed DeepONet with FEM through domain decomposition. The core innovation lies in adaptively coupling FEM and DeepONet subdomains via a Schwarz alternating method. This methodology strategically allocates computationally demanding regions to a pre-trained Deep Operator Network, while the remaining computational domain is solved through FEM. To address dynamic systems, we integrate the Newmark time-stepping scheme directly into the DeepONet, significantly mitigating error accumulation in long-term simulations. Furthermore, an adaptive subdomain evolution enables the ML-resolved region to expand dynamically, capturing emerging fine-scale features without remeshing. The framework's efficacy has been validated across a range of solid mechanics problems, including static, quasi-static, and dynamic regimes, demonstrating accelerated convergence rates (up to 20% improvement compared to FE-FE approaches), while preserving solution fidelity with error < 1%. Our case studies show that our proposed hybrid solver: (1) maintains solution continuity across subdomain interfaces, (2) reduces computational costs by eliminating fine mesh requirements, (3) mitigates error accumulation in time-dependent simulations, and (4) enables automatic adaptation to evolving physical phenomena. This work bridges the gap between numerical methods and AI-driven surrogates, offering a scalable pathway for high-fidelity simulations in engineering and scientific applications.
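The coupling idea from the abstract can be summarized as a simple alternating loop: solve the FEM subdomain with the latest DeepONet interface trace as boundary data, evaluate the pre-trained DeepONet with the updated FEM trace, and repeat until the shared interface agrees; for dynamic problems the state is advanced with a Newmark update. The sketch below is a minimal illustration of that Schwarz-style iteration under stated assumptions: `fem_solve`, `deeponet_predict`, the interface index arrays, and the default Newmark parameters are hypothetical placeholders, not the authors' released code.

```python
import numpy as np

# Minimal sketch of an alternating Schwarz coupling between an FEM subdomain
# and a pre-trained DeepONet surrogate. `fem_solve` and `deeponet_predict`
# are hypothetical callables standing in for the two subdomain solvers.
def schwarz_coupled_solve(fem_solve, deeponet_predict,
                          u_fem, u_ml, iface_fem, iface_ml,
                          tol=1e-6, max_iters=50):
    """Alternate FEM / DeepONet solves until the shared interface agrees."""
    for _ in range(max_iters):
        # FEM subdomain: impose the latest DeepONet trace as interface BC.
        u_fem = fem_solve(interface_bc=u_ml[iface_ml])
        # DeepONet subdomain: condition the branch input on the FEM trace.
        u_ml = deeponet_predict(branch_input=u_fem[iface_fem])
        # Converged when both fields match along the interface.
        if np.max(np.abs(u_fem[iface_fem] - u_ml[iface_ml])) < tol:
            break
    return u_fem, u_ml

# Standard Newmark-beta update (beta=0.25, gamma=0.5 is the usual
# unconditionally stable average-acceleration choice assumed here).
def newmark_update(u, v, a, a_new, dt, beta=0.25, gamma=0.5):
    u_new = u + dt * v + dt**2 * ((0.5 - beta) * a + beta * a_new)
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    return u_new, v_new
```

In the paper's dynamic cases, the Newmark relations are integrated directly into the DeepONet rather than applied as an external post-processing step, which is what mitigates error accumulation over long time horizons.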
Related papers
- Implicit Neural Differential Model for Spatiotemporal Dynamics [5.1854032131971195]
We introduce Im-PiNDiff, a novel implicit physics-integrated neural differentiable solver for stable modeling of spatiotemporal dynamics. Inspired by deep equilibrium models, Im-PiNDiff advances the state using implicit fixed-point layers, enabling robust long-term simulation. Im-PiNDiff achieves superior predictive performance, enhanced numerical stability, and substantial reductions in memory and computational cost.
arXiv Detail & Related papers (2025-04-03T04:07:18Z)
- Efficient Transformed Gaussian Process State-Space Models for Non-Stationary High-Dimensional Dynamical Systems [49.819436680336786]
Gaussian process state-space models (GPSSMs) have emerged as a powerful framework for modeling dynamical systems. We propose an efficient transformed Gaussian process state-space model (ETGPSSM) to address these limitations. Our approach leverages a single shared Gaussian process (GP) combined with normalizing flows and Bayesian neural networks, enabling efficient modeling of complex, high-dimensional state transitions.
arXiv Detail & Related papers (2025-03-24T03:19:45Z)
- Non-overlapping, Schwarz-type Domain Decomposition Method for Physics and Equality Constrained Artificial Neural Networks [0.24578723416255746]
We present a non-overlapping, Schwarz-type domain decomposition method with a generalized interface condition.
Our approach employs physics and equality-constrained artificial neural networks (PECANN) within each subdomain.
A distinct advantage of our domain decomposition method is its ability to learn solutions to both Poisson's and Helmholtz equations.
arXiv Detail & Related papers (2024-09-20T16:48:55Z)
- A domain decomposition-based autoregressive deep learning model for unsteady and nonlinear partial differential equations [2.7755345520127936]
We propose a domain-decomposition-based deep learning (DL) framework, named CoMLSim, for accurately modeling unsteady and nonlinear partial differential equations (PDEs). The framework consists of two key components: (a) a convolutional neural network (CNN)-based autoencoder architecture and (b) an autoregressive model composed of fully connected layers.
arXiv Detail & Related papers (2024-08-26T17:50:47Z)
- Neural-Integrated Meshfree (NIM) Method: A differentiable programming-based hybrid solver for computational mechanics [1.7132914341329852]
We present the neural-integrated meshfree (NIM) method, a differentiable programming-based hybrid meshfree approach within the field of computational mechanics.
NIM seamlessly integrates traditional physics-based meshfree discretization techniques with deep learning architectures.
Under the NIM framework, we propose two truly meshfree solvers: the strong form-based NIM (S-NIM) and the local variational form-based NIM (V-NIM).
arXiv Detail & Related papers (2023-11-21T17:57:12Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning tasks into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- Multilayer Perceptron Based Stress Evolution Analysis under DC Current Stressing for Multi-segment Wires [8.115870370527324]
Electromigration (EM) is one of the major concerns in the reliability analysis of very large scale integration (VLSI) systems.
Traditional methods are often not sufficiently accurate, leading to undesirable over-design especially in advanced technology nodes.
We propose an approach using multilayer perceptrons (MLP) to compute stress evolution in the interconnect trees during the void nucleation phase.
arXiv Detail & Related papers (2022-05-17T07:38:20Z)
- Interfacing Finite Elements with Deep Neural Operators for Fast Multiscale Modeling of Mechanics Problems [4.280301926296439]
In this work, we explore the idea of multiscale modeling with machine learning and employ DeepONet, a neural operator, as an efficient surrogate of the expensive solver.
DeepONet is trained offline using data acquired from the fine solver for learning the underlying and possibly unknown fine-scale dynamics.
We present various benchmarks to assess accuracy and speedup, and in particular we develop a coupling algorithm for a time-dependent problem.
arXiv Detail & Related papers (2022-02-25T20:46:08Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid Flow Prediction [79.81193813215872]
We develop a hybrid (graph) neural network that combines a traditional graph convolutional network with an embedded differentiable fluid dynamics simulator inside the network itself.
We show that we can both generalize well to new situations and benefit from the substantial speedup of neural network CFD predictions.
arXiv Detail & Related papers (2020-07-08T21:23:19Z)