When Network Architecture Meets Physics: Deep Operator Learning for Coupled Multiphysics
- URL: http://arxiv.org/abs/2507.03660v1
- Date: Fri, 04 Jul 2025 15:36:15 GMT
- Title: When Network Architecture Meets Physics: Deep Operator Learning for Coupled Multiphysics
- Authors: Kazuma Kobayashi, Jaewan Park, Qibang Liu, Seid Koric, Diab Abueidda, Syed Bahauddin Alam
- Abstract summary: We present a comprehensive evaluation of DeepONet variants across three regimes: single-physics, weakly coupled, and strongly coupled multiphysics systems. We consider a reaction-diffusion equation with dual spatial inputs, a nonlinear thermo-electrical problem with bidirectional coupling through temperature-dependent conductivity, and a viscoplastic thermo-mechanical model of steel solidification governed by transient phase-driven interactions. Our results demonstrate that architectural alignment with physical coupling is crucial, whereas multi-branch encodings offer advantages for decoupled or single-physics problems.
- Score: 0.36651088217486427
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scientific applications increasingly demand real-time surrogate models that can capture the behavior of strongly coupled multiphysics systems driven by multiple input functions, such as in thermo-mechanical and electro-thermal processes. While neural operator frameworks, such as Deep Operator Networks (DeepONets), have shown considerable success in single-physics settings, their extension to multiphysics problems remains poorly understood. In particular, the challenge of learning nonlinear interactions between tightly coupled physical fields has received little systematic attention. This study addresses a foundational question: should the architectural design of a neural operator reflect the strength of physical coupling it aims to model? To answer this, we present the first comprehensive, architecture-aware evaluation of DeepONet variants across three regimes: single-physics, weakly coupled, and strongly coupled multiphysics systems. We consider a reaction-diffusion equation with dual spatial inputs, a nonlinear thermo-electrical problem with bidirectional coupling through temperature-dependent conductivity, and a viscoplastic thermo-mechanical model of steel solidification governed by transient phase-driven interactions. Two operator-learning frameworks, the classical DeepONet and its sequential GRU-based extension, S-DeepONet, are benchmarked using both single-branch and multi-branch (MIONet-style) architectures. Our results demonstrate that architectural alignment with physical coupling is crucial: single-branch networks significantly outperform multi-branch counterparts in strongly coupled settings, whereas multi-branch encodings offer advantages for decoupled or single-physics problems. Once trained, these surrogates achieve full-field predictions up to 1.8e4 times faster than high-fidelity finite-element solvers, without compromising solution accuracy.
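The abstract's central comparison, single-branch versus multi-branch (MIONet-style) DeepONets for two input functions, can be illustrated with a minimal forward-pass sketch. This is a hedged illustration of the general architecture pattern, not the authors' implementation: the layer widths, sensor counts, and the elementwise-product combination in the multi-branch variant are illustrative assumptions.

```python
import numpy as np

def mlp(x, weights):
    """Forward pass through a tanh MLP; weights is a list of (W, b) pairs."""
    for W, b in weights[:-1]:
        x = np.tanh(x @ W + b)
    W, b = weights[-1]
    return x @ W + b

def init(sizes, rng):
    """Random (untrained) weights for layer sizes [in, hidden, ..., out]."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

rng = np.random.default_rng(0)
m, p, n_query = 50, 32, 64  # sensors per input function, latent width, query points

# Two input functions sampled at m sensor locations (e.g. the dual spatial
# inputs of the reaction-diffusion benchmark).
u1 = np.sin(np.linspace(0, 1, m))
u2 = np.cos(np.linspace(0, 1, m))
xq = np.linspace(0, 1, n_query)[:, None]  # query coordinates for the trunk

# Shared trunk net: encodes where the output field is evaluated.
trunk = init([1, 64, p], rng)
t_out = mlp(xq, trunk)                                   # (n_query, p)

# Single-branch DeepONet: both input functions concatenated into one branch,
# so the network can learn joint (coupled) features of (u1, u2).
branch = init([2 * m, 64, p], rng)
b_out = mlp(np.concatenate([u1, u2])[None, :], branch)   # (1, p)
G_single = t_out @ b_out.T                               # (n_query, 1)

# Multi-branch (MIONet-style): one branch per input function, combined here
# by an elementwise product before contracting with the trunk. Each input is
# encoded independently, which suits decoupled or single-physics settings.
branch1 = init([m, 64, p], rng)
branch2 = init([m, 64, p], rng)
b1 = mlp(u1[None, :], branch1)                           # (1, p)
b2 = mlp(u2[None, :], branch2)                           # (1, p)
G_multi = t_out @ (b1 * b2).T                            # (n_query, 1)

print(G_single.shape, G_multi.shape)
```

The structural difference mirrors the paper's finding: the single-branch encoder sees both fields jointly, which matters when the physics couples them strongly, while the multi-branch variant factorizes the inputs.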
Related papers
- Models of Heavy-Tailed Mechanistic Universality [62.107333654304014]
We propose a family of random matrix models to explore attributes that give rise to heavy-tailed behavior in trained neural networks. Under this model, spectral densities with power laws on tails arise through a combination of three independent factors. Implications of our model on other appearances of heavy tails, including neural scaling laws, trajectories, and the five-plus-one phases of neural network training, are discussed.
arXiv Detail & Related papers (2025-06-04T00:55:01Z) - Learning Fluid-Structure Interaction Dynamics with Physics-Informed Neural Networks and Immersed Boundary Methods [0.5991851254194096]
We introduce neural network architectures that combine physics-informed neural networks (PINNs) with the immersed boundary method (IBM) to solve fluid-structure interaction (FSI) problems. Our approach features two distinct architectures: a Single-FSI network with a unified parameter space, and an innovative Eulerian-Lagrangian network that maintains separate parameter spaces for fluid and structure domains.
arXiv Detail & Related papers (2025-05-24T07:07:53Z) - Uncovering Magnetic Phases with Synthetic Data and Physics-Informed Training [0.0]
We investigate the efficient learning of magnetic phases using artificial neural networks trained on synthetic data. We incorporate two key forms of physics-informed guidance to enhance model performance. Our results show that synthetic, structured, and computationally efficient training schemes can reveal physically meaningful phase boundaries.
arXiv Detail & Related papers (2025-05-15T15:16:16Z) - Multi-Physics Simulations via Coupled Fourier Neural Operator [9.839064047196114]
We introduce a novel coupled multi-physics neural operator learning (COMPOL) framework to model interactions among multiple physical processes. Our approach implements feature aggregation through recurrent and attention mechanisms, enabling comprehensive modeling of coupled interactions. Our proposed model demonstrates a two- to three-fold improvement in predictive performance compared to existing approaches.
arXiv Detail & Related papers (2025-01-28T20:58:55Z) - Physics-Informed Latent Neural Operator for Real-time Predictions of Complex Physical Systems [0.0]
We propose PI-Latent-NO, a physics-informed latent neural operator framework that integrates governing physics directly into the learning process. Our architecture features two coupled DeepONets trained end-to-end: a Latent-DeepONet that learns a low-dimensional representation of the solution, and a Reconstruction-DeepONet that maps this latent representation back to the physical space.
arXiv Detail & Related papers (2025-01-14T20:38:30Z) - Bond Graphs for multi-physics informed Neural Networks for multi-variate time series [6.775534755081169]
Existing methods are not adapted to tasks with complex multi-physical and multi-domain phenomena.
We propose a Neural Bond graph (NBgE) producing multi-physics-informed representations that can be fed into any task-specific model.
arXiv Detail & Related papers (2024-05-22T12:30:25Z) - Explainable Equivariant Neural Networks for Particle Physics: PELICAN [51.02649432050852]
PELICAN is a novel permutation equivariant and Lorentz invariant aggregator network.
We present a study of the PELICAN algorithm architecture in the context of both tagging (classification) and reconstructing (regression) Lorentz-boosted top quarks.
We extend the application of PELICAN to the tasks of identifying quark-initiated vs. gluon-initiated jets, and a multi-class identification across five separate target categories of jets.
arXiv Detail & Related papers (2023-07-31T09:08:40Z) - Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z) - Unified Field Theory for Deep and Recurrent Neural Networks [56.735884560668985]
We present a unified and systematic derivation of the mean-field theory for both recurrent and deep networks.
We find that convergence towards the mean-field theory is typically slower for recurrent networks than for deep networks.
Our method exposes that Gaussian processes are but the lowest order of a systematic expansion in $1/n$.
arXiv Detail & Related papers (2021-12-10T15:06:11Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Recursive Multi-model Complementary Deep Fusion for Robust Salient Object Detection via Parallel Sub Networks [62.26677215668959]
Fully convolutional networks have shown outstanding performance in the salient object detection (SOD) field.
This paper proposes a "wider" network architecture which consists of parallel sub-networks with totally different network architectures.
Experiments on several famous benchmarks clearly demonstrate the superior performance, good generalization, and powerful learning ability of the proposed wider framework.
arXiv Detail & Related papers (2020-08-07T10:39:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.