An Evolutionary Multi-objective Optimization for Replica-Exchange-based Physics-informed Operator Learning Network
- URL: http://arxiv.org/abs/2509.00663v1
- Date: Sun, 31 Aug 2025 02:17:59 GMT
- Title: An Evolutionary Multi-objective Optimization for Replica-Exchange-based Physics-informed Operator Learning Network
- Authors: Binghang Lu, Changhong Mou, Guang Lin
- Abstract summary: We propose an evolutionary Multi-objective Optimization for Replica-Exchange-based Physics-informed Operator learning Network. Our framework consistently outperforms general operator learning methods in accuracy, noise robustness, and the ability to quantify uncertainty.
- Score: 7.1950116347185995
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose an evolutionary Multi-objective Optimization for Replica-Exchange-based Physics-informed Operator learning Network, a novel operator learning network that efficiently solves parametric partial differential equations. In both forward and inverse settings, this operator learning network requires only minimal noisy observational data. While physics-informed neural networks and operator learning approaches such as Deep Operator Networks and Fourier Neural Operators offer promising alternatives to traditional numerical solvers, they struggle to balance operator and physics losses, to remain robust under noisy or sparse data, and to provide uncertainty quantification. The proposed framework addresses these limitations by integrating: (i) evolutionary multi-objective optimization to adaptively balance operator and physics-based losses along the Pareto front; (ii) replica exchange stochastic gradient Langevin dynamics to improve global parameter-space exploration and accelerate convergence; and (iii) built-in Bayesian uncertainty quantification from stochastic sampling. The proposed operator learning method is tested numerically on several problems, including the one-dimensional Burgers equation and the time-fractional mixed diffusion-wave equation. The results indicate that our framework consistently outperforms general operator learning methods in accuracy, noise robustness, and the ability to quantify uncertainty.
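To make components (i)-(iii) concrete, below is a minimal, illustrative sketch of a two-replica replica-exchange SGLD loop in PyTorch. It is a schematic under stated assumptions, not the authors' implementation: the model and data are toy stand-ins, the mini-batch loss is used directly as the energy estimate, and the operator/physics loss weights are frozen scalars rather than being adapted by the evolutionary multi-objective search.

```python
import copy
import math
import torch

def sgld_step(model, loss_fn, lr, tau):
    """One SGLD update: gradient step plus Gaussian noise scaled by the
    replica temperature tau; returns the loss as an energy estimate."""
    loss = loss_fn(model)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            noise = torch.randn_like(p) * math.sqrt(2.0 * lr * tau)
            p.add_(-lr * p.grad + noise)
    return loss.item()

def replica_exchange_sgld(make_model, loss_fn, steps=2000, lr=1e-3,
                          tau_low=1e-6, tau_high=1e-3, swap_every=50):
    """Two replicas: a low-temperature chain that exploits and a
    high-temperature chain that explores, with Metropolis-style swaps.
    Snapshots of the low-temperature chain double as posterior samples
    for Bayesian uncertainty quantification."""
    low, high = make_model(), make_model()
    samples = []
    for t in range(1, steps + 1):
        u_low = sgld_step(low, loss_fn, lr, tau_low)
        u_high = sgld_step(high, loss_fn, lr, tau_high)
        if t % swap_every == 0:
            # Replica-exchange acceptance: swap parameters between the two
            # temperature levels with probability min(1, exp(log_alpha)).
            log_alpha = (1.0 / tau_low - 1.0 / tau_high) * (u_low - u_high)
            if math.log(torch.rand(1).item()) < log_alpha:
                low, high = high, low
        if t % 20 == 0:
            samples.append(copy.deepcopy(low.state_dict()))
    return low, samples

# Toy usage with hypothetical stand-ins for the operator and physics losses.
torch.manual_seed(0)
x = torch.randn(128, 1)
y = torch.sin(3.0 * x)

def make_model():
    return torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                               torch.nn.Linear(32, 1))

def loss_fn(model):
    w_op, w_phys = 1.0, 0.1                      # frozen weights; the paper adapts these
    op_loss = torch.mean((model(x) - y) ** 2)    # stand-in operator (data) loss
    phys_loss = torch.tensor(0.0)                # PDE residual omitted in the toy
    return w_op * op_loss + w_phys * phys_loss

model, posterior = replica_exchange_sgld(make_model, loss_fn)
```

In the full method, the swap test would typically use bias-corrected energy estimates to account for mini-batch noise, and the snapshot ensemble provides predictive mean and variance for uncertainty quantification.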
Related papers
- Tackling multiphysics problems via finite element-guided physics-informed operator learning [0.0]
This work presents a finite element-guided physics-informed operator learning framework for multiphysics problems. The proposed framework learns a mapping from the input parameter space to the solution space with a weighted residual formulation based on the finite element method. The framework is verified on nonlinear thermo-mechanical problems.
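For context, the weighted-residual statement this summary alludes to can be written in generic notation (not taken from the paper): find the discrete solution $u_h$ in a finite element space $V_h$ whose residual is orthogonal to every test function,

```latex
\int_{\Omega} v_h \,\mathcal{R}\!\left(u_h;\,\mu\right)\, d\Omega \;=\; 0
\qquad \forall\, v_h \in V_h,
```

where $\mathcal{R}$ is the PDE residual and $\mu$ the input parameter; a physics-informed operator network can penalize the left-hand side as its training loss.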
arXiv Detail & Related papers (2026-03-02T03:52:51Z) - Faster Predictive Coding Networks via Better Initialization [52.419343840654186]
We propose a new technique for predictive coding networks that aims to preserve the iterative progress made on previous training samples. Our experiments demonstrate substantial improvements in convergence speed and final test loss in both supervised and unsupervised settings.
arXiv Detail & Related papers (2026-01-28T08:52:19Z) - Revisiting Orbital Minimization Method for Neural Operator Decomposition [19.86950069790711]
We revisit a classical optimization framework known as the orbital minimization method (OMM), originally proposed in the 1990s for solving eigenvalue problems in computational chemistry. We adapt this framework to train neural networks to decompose positive semidefinite operators, and demonstrate its practical advantages across a range of benchmark tasks.
arXiv Detail & Related papers (2025-10-24T18:26:18Z) - How deep is your network? Deep vs. shallow learning of transfer operators [0.4473327661758546]
We propose a randomized neural network approach called RaNNDy for learning transfer operators and their spectral decompositions from data. The main advantage is that, without a noticeable reduction in accuracy, this approach significantly reduces the training time and resources. We present results for different dynamical operators, including Koopman and Perron-Frobenius operators, which have important applications in analyzing the behavior of complex dynamical systems.
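The core trick behind such randomized-network approaches can be sketched in a few lines: freeze a random hidden layer and fit only the linear output layer with a single least-squares solve. This is an illustrative sketch of the general idea, not the RaNNDy code:

```python
import numpy as np

def random_feature_fit(X, Y, width=512, seed=0):
    """Frozen random hidden layer + least-squares output layer: training
    reduces to one linear solve instead of iterative gradient descent."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], width))      # random, never trained
    b = rng.normal(size=width)
    H = np.tanh(X @ W + b)                        # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # single linear solve
    predict = lambda X_new: np.tanh(X_new @ W + b) @ beta
    return predict
```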
arXiv Detail & Related papers (2025-09-24T09:38:42Z) - Efficient Parametric SVD of Koopman Operator for Stochastic Dynamical Systems [51.54065545849027]
The Koopman operator provides a principled framework for analyzing nonlinear dynamical systems. VAMPnet and DPNet have been proposed to learn the leading singular subspaces of the Koopman operator. We propose a scalable and conceptually simple method for learning the top-$k$ singular functions of the Koopman operator.
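For readers unfamiliar with the object several of these entries target, the stochastic Koopman operator acts linearly on observables $f$ of the state, even when the underlying dynamics of $X_t$ are nonlinear:

```latex
(\mathcal{K}_t f)(x) \;=\; \mathbb{E}\!\left[\, f(X_t) \,\middle|\, X_0 = x \,\right].
```

Learning its leading singular (or eigen-) functions yields low-dimensional linear surrogates of the dynamics.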
arXiv Detail & Related papers (2025-07-09T18:55:48Z) - OmniFluids: Physics Pre-trained Modeling of Fluid Dynamics [25.066485418709114]
We propose OmniFluids, a pure physics pre-trained model that captures fundamental fluid dynamics laws and adapts efficiently to diverse downstream tasks. We develop a training framework combining physics-only pre-training, coarse-grid operator distillation, and few-shot fine-tuning. Tests show that OmniFluids outperforms state-of-the-art AI-driven methods in terms of flow field prediction and statistics.
arXiv Detail & Related papers (2025-06-12T16:23:02Z) - DeepONet as a Multi-Operator Extrapolation Model: Distributed Pretraining with Physics-Informed Fine-Tuning [6.635683993472882]
We propose a novel fine-tuning method to achieve multi-operator learning.
Our approach combines distributed learning to integrate data from various operators in pre-training, while physics-informed methods enable zero-shot fine-tuning.
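Since the main paper also builds on Deep Operator Networks, a minimal DeepONet forward pass may help: a branch net encodes the input function sampled at m sensor locations, a trunk net encodes the query coordinate, and the prediction is their inner product. This is a generic textbook-style sketch, not the authors' multi-operator model:

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal DeepONet: branch net for the input function (sampled at
    m sensors), trunk net for the query point y, inner-product output."""
    def __init__(self, m_sensors, dim_y, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m_sensors, 128), nn.Tanh(),
                                    nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(dim_y, 128), nn.Tanh(),
                                   nn.Linear(128, p))

    def forward(self, u_sensors, y):
        b = self.branch(u_sensors)   # (batch, p)
        t = self.trunk(y)            # (batch, p)
        return (b * t).sum(dim=-1, keepdim=True)

# Example query: 8 input functions at 100 sensors, evaluated at 1-D points.
net = DeepONet(m_sensors=100, dim_y=1)
out = net(torch.randn(8, 100), torch.rand(8, 1))   # shape (8, 1)
```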
arXiv Detail & Related papers (2024-11-11T18:58:46Z) - Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural network-like predictors. LFP decomposes a reward to individual neurons based on their respective contributions. Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z) - Koopman Kernel Regression [6.116741319526748]
We show that Koopman operator theory offers a beneficial paradigm for characterizing forecasts via linear time-invariant (LTI) ODEs.
We derive a universal Koopman-invariant reproducing kernel Hilbert space (RKHS) that solely spans transformations into LTI dynamical systems.
Our experiments demonstrate superior forecasting performance compared to Koopman operator and sequential data predictors.
arXiv Detail & Related papers (2023-05-25T16:22:22Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - Variational operator learning: A unified paradigm marrying training neural operators and solving partial differential equations [9.148052787201797]
We propose a novel paradigm that provides a unified framework of training neural operators and solving PDEs with the variational form.
With a label-free training set and a 5-label-only shift set, VOL learns solution operators with its test errors decreasing in a power law with respect to the amount of unlabeled data.
arXiv Detail & Related papers (2023-04-09T13:20:19Z) - Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z) - Neural Control Variates [71.42768823631918]
We show that a set of neural networks can tackle the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
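The construction rests on the classical control-variate identity: if $g$ approximates the integrand $f$ and its integral $G$ is known in closed form (here, by the network's design), then

```latex
F \;=\; \int f(x)\,dx
  \;=\; \underbrace{\int g(x)\,dx}_{G} \;+\; \int \bigl(f(x) - g(x)\bigr)\,dx,
```

so Monte Carlo sampling is needed only for the residual $f - g$, whose variance vanishes as $g \to f$.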
arXiv Detail & Related papers (2020-06-02T11:17:55Z)