Transfer Operator Learning with Fusion Frame
- URL: http://arxiv.org/abs/2408.10458v1
- Date: Tue, 20 Aug 2024 00:03:23 GMT
- Title: Transfer Operator Learning with Fusion Frame
- Authors: Haoyang Jiang, Yongzhi Qu
- Abstract summary: This work presents a novel framework that enhances the transfer learning capabilities of operator learning models for solving Partial Differential Equations (PDEs).
We introduce an innovative architecture that combines fusion frames with POD-DeepONet, demonstrating superior performance across various PDEs in our experimental analysis.
Our framework addresses the critical challenge of transfer learning in operator learning models, paving the way for adaptable and efficient solutions across a wide range of scientific and engineering applications.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The challenge of applying knowledge learned in one domain to solve problems in another related but distinct domain, known as transfer learning, is fundamental in operator learning models that solve Partial Differential Equations (PDEs). Current models often struggle to generalize across different tasks and datasets, limiting their applicability in diverse scientific and engineering disciplines. This work presents a novel framework that enhances the transfer learning capabilities of operator learning models for solving PDEs through the integration of fusion frame theory with the Proper Orthogonal Decomposition (POD)-enhanced Deep Operator Network (DeepONet). We introduce an innovative architecture that combines fusion frames with POD-DeepONet, demonstrating superior performance across various PDEs in our experimental analysis. Our framework addresses the critical challenge of transfer learning in operator learning models, paving the way for adaptable and efficient solutions across a wide range of scientific and engineering applications.
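The abstract does not include code; as a rough sketch of the ingredients it names, the snippet below combines a precomputed POD trunk basis with several branch networks, one per fusion-frame subspace, mixed by learnable frame weights. The class name, the even partition of modes into subspaces, and the scalar weighting are our assumptions, not the authors' published architecture.

```python
# Minimal sketch of a fusion-frame POD-DeepONet (assumptions: the fusion
# frame partitions the POD trunk basis into weighted subspaces, each with
# its own branch net; the paper's exact architecture may differ).
import torch
import torch.nn as nn

class FusionFramePODDeepONet(nn.Module):
    def __init__(self, n_sensors, pod_modes, n_subspaces=4, width=64):
        # pod_modes: (n_grid, n_modes) POD basis of the output snapshots,
        # precomputed offline (e.g. via SVD of training solutions).
        super().__init__()
        n_grid, n_modes = pod_modes.shape
        assert n_modes % n_subspaces == 0
        self.m = n_modes // n_subspaces               # modes per subspace
        self.register_buffer("pod", pod_modes)        # fixed trunk basis
        # One branch net per fusion-frame subspace.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(n_sensors, width), nn.Tanh(),
                          nn.Linear(width, self.m))
            for _ in range(n_subspaces))
        # Learnable fusion-frame weights v_i (one scalar per subspace).
        self.v = nn.Parameter(torch.ones(n_subspaces))

    def forward(self, u):                              # u: (batch, n_sensors)
        outs = []
        for i, branch in enumerate(self.branches):
            coeff = branch(u)                          # (batch, m)
            basis = self.pod[:, i*self.m:(i+1)*self.m] # subspace modes
            outs.append(self.v[i] * coeff @ basis.T)   # project back to grid
        return torch.stack(outs).sum(0)                # weighted fusion

# Usage (assuming snapshots of shape (n_samples, n_grid)):
# pod = torch.linalg.svd(snapshots, full_matrices=False)[2].T[:, :32]
# model = FusionFramePODDeepONet(n_sensors=100, pod_modes=pod)
```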
Related papers
- UniFIDES: Universal Fractional Integro-Differential Equation Solvers [0.0]
This work introduces the Universal Fractional Integro-Differential Equation Solvers (UniFIDES).
UniFIDES is a comprehensive machine learning platform designed to expeditiously solve a variety of FIDEs in both forward and inverse directions.
Our results highlight UniFIDES' ability to accurately solve a wide spectrum of integro-differential equations and offer the prospect of using machine learning platforms universally.
arXiv Detail & Related papers (2024-07-01T23:16:34Z)
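UniFIDES' own discretization is not described in this summary. For orientation only, the snippet below implements the standard L1 scheme for the Caputo fractional derivative, the kind of operator any FIDE solver must approximate; it is textbook material, not code from the paper.

```python
# Reference only: the standard L1 scheme for the Caputo fractional
# derivative of order 0 < alpha < 1 (textbook material, not code from
# UniFIDES, whose discretization this summary does not specify).
import math
import numpy as np

def caputo_l1(f_vals, dt, alpha):
    """L1 approximation of (D^alpha f)(t_n) on a uniform grid t_n = n*dt."""
    n_pts = len(f_vals)
    c = dt ** (-alpha) / math.gamma(2.0 - alpha)
    d = np.zeros(n_pts)
    for n in range(1, n_pts):
        acc = 0.0
        for k in range(n):
            b_k = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
            acc += b_k * (f_vals[n - k] - f_vals[n - k - 1])
        d[n] = c * acc
    return d

# Check against the exact result D^alpha t = t^(1-alpha) / Gamma(2-alpha);
# the L1 scheme reproduces it to round-off for linear f.
t = np.linspace(0.0, 1.0, 201)
approx = caputo_l1(t, t[1] - t[0], alpha=0.5)
exact = t ** 0.5 / math.gamma(1.5)
print(np.max(np.abs(approx - exact)))
```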
- MultiSTOP: Solving Functional Equations with Reinforcement Learning [56.073581097785016]
We develop MultiSTOP, a Reinforcement Learning framework for solving functional equations in physics.
This new methodology produces actual numerical solutions instead of bounds on them.
arXiv Detail & Related papers (2024-04-23T10:51:31Z)
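The blurb does not specify MultiSTOP's training loop. The sketch below captures the general pattern, a parameterized candidate solution improved by a reward equal to the negative functional-equation residual, using a cross-entropy-method search rather than the paper's actual RL algorithm. The toy equation and anchor values are ours.

```python
# Illustrative only: reward-driven search for a functional-equation
# solution, in the spirit of (not identical to) MultiSTOP's RL pipeline.
# Toy equation: f(2x) = 2 f(x)^2 - 1; anchors f(0)=1 and f(0.5)=cos(0.5)
# single out f = cos from the solution family cos(a*x).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 0.5, 64)           # residual then covers 2x in [0, 1]
DEG = 4                                 # polynomial ansatz f(x) = sum c_k x^k
polyval = np.polynomial.polynomial.polyval

def reward(c):
    res = polyval(2 * x, c) - (2 * polyval(x, c) ** 2 - 1)
    a0 = polyval(0.0, c) - 1.0
    a1 = polyval(0.5, c) - np.cos(0.5)
    return -(np.mean(res ** 2) + a0 ** 2 + a1 ** 2)

# Cross-entropy method: sample coefficients, keep elites, refit, repeat.
mu, sigma = np.zeros(DEG + 1), np.ones(DEG + 1)
for _ in range(400):
    pop = rng.normal(mu, sigma, size=(256, DEG + 1))
    scores = np.array([reward(c) for c in pop])
    elite = pop[np.argsort(scores)[-32:]]          # best 32 of 256
    mu, sigma = elite.mean(0), elite.std(0) + 1e-6

print("coefficients:", np.round(mu, 3))            # ~ [1, 0, -0.5, 0, 0.04]
print("f(0.4) =", polyval(0.4, mu), " vs cos(0.4) =", np.cos(0.4))
```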
- MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities [72.68829963458408]
We present MergeNet, which learns to bridge the gap between the parameter spaces of heterogeneous models.
The core mechanism of MergeNet lies in the parameter adapter, which operates by querying the source model's low-rank parameters.
MergeNet is learned alongside both models, allowing our framework to dynamically transfer and adapt knowledge relevant to the current stage.
arXiv Detail & Related papers (2024-04-20T08:34:39Z)
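Only the core mechanism, an adapter that queries low-rank source parameters, is given above; the sketch below is a loose reading of it with illustrative shapes and a standard attention scheme, not the paper's design.

```python
# Loose sketch of a MergeNet-style parameter adapter: target parameters
# are produced by attending over low-rank factors of a source weight
# matrix. Shapes and the attention scheme are illustrative assumptions.
import torch
import torch.nn as nn

class ParamAdapter(nn.Module):
    def __init__(self, src_weight, rank=8, tgt_rows=32, tgt_cols=16, d=32):
        super().__init__()
        # Low-rank view of the source parameters via truncated SVD.
        U, S, Vh = torch.linalg.svd(src_weight, full_matrices=False)
        factors = torch.cat([U[:, :rank] * S[:rank], Vh[:rank].T], dim=0)
        self.register_buffer("src_tokens", factors)    # (rows+cols, rank)
        self.key = nn.Linear(rank, d)
        self.val = nn.Linear(rank, d)
        self.queries = nn.Parameter(torch.randn(tgt_rows, d))  # learned
        self.out = nn.Linear(d, tgt_cols)

    def forward(self):
        k = self.key(self.src_tokens)                  # (n_tokens, d)
        v = self.val(self.src_tokens)
        att = torch.softmax(self.queries @ k.T / k.shape[1] ** 0.5, dim=-1)
        return self.out(att @ v)                       # (tgt_rows, tgt_cols)

# The produced tensor would be used as (or added to) a target layer's
# weight, with the adapter trained jointly alongside both models.
src = torch.randn(64, 64)
print(ParamAdapter(src)().shape)   # torch.Size([32, 16])
```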
- Towards a Foundation Model for Partial Differential Equations: Multi-Operator Learning and Extrapolation [4.286691905364396]
We introduce a multi-modal foundation model for scientific problems, named PROSE-PDE.
Our model is a multi-operator learning approach that can predict the future states of physical systems while concurrently learning their underlying governing equations.
arXiv Detail & Related papers (2024-04-18T17:34:20Z)
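As a minimal illustration of the two coupled tasks the summary describes (state prediction plus recovery of the governing equations as symbolic tokens), here is a hypothetical two-headed model; dimensions, encoder, and vocabulary are invented for the example.

```python
# Minimal illustration of the two coupled tasks the summary describes:
# one head rolls the state forward, the other emits symbolic-equation
# tokens. The shared encoder and all dimensions are hypothetical.
import torch
import torch.nn as nn

class TwoTaskOperator(nn.Module):
    def __init__(self, n_grid=128, d=256, vocab=64, eq_len=16):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(n_grid, d), nn.GELU(),
                                    nn.Linear(d, d))
        self.state_head = nn.Linear(d, n_grid)        # next-state prediction
        self.eq_head = nn.Linear(d, eq_len * vocab)   # equation tokens
        self.eq_len, self.vocab = eq_len, vocab

    def forward(self, u):                             # u: (batch, n_grid)
        h = self.encode(u)
        next_u = self.state_head(h)
        eq_logits = self.eq_head(h).view(-1, self.eq_len, self.vocab)
        return next_u, eq_logits                      # trained with a joint loss

u = torch.randn(4, 128)
next_u, eq = TwoTaskOperator()(u)
print(next_u.shape, eq.shape)  # (4, 128) (4, 16, 64)
```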
- Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs [85.40198664108624]
We propose the Codomain Attention Neural Operator (CoDA-NO) to solve multiphysics problems governed by PDEs.
CoDA-NO tokenizes functions along the codomain or channel space, enabling self-supervised learning or pretraining of multiple PDE systems.
We find CoDA-NO to outperform existing methods by over 36% on complex downstream tasks with limited data.
arXiv Detail & Related papers (2024-03-19T08:56:20Z)
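The key idea above, tokenizing along the codomain so that attention mixes physical variables, can be sketched as follows; the per-channel linear encoder is a simplification of CoDA-NO's function-space representation.

```python
# Sketch of codomain tokenization: each physical variable (channel) of a
# multiphysics field becomes one token, and self-attention mixes variables.
# The real CoDA-NO attends between function-space representations; the
# per-channel encoder here is a simplification.
import torch
import torch.nn as nn

class CodomainAttention(nn.Module):
    def __init__(self, n_grid, d=64, heads=4):
        super().__init__()
        self.enc = nn.Linear(n_grid, d)      # channel -> token embedding
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.dec = nn.Linear(d, n_grid)      # token -> channel values

    def forward(self, u):                    # u: (batch, channels, n_grid)
        tok = self.enc(u)                    # one token per channel
        mixed, _ = self.attn(tok, tok, tok)  # attention across the codomain
        return u + self.dec(mixed)           # residual update of each field

# Velocity, pressure, and temperature fields as three codomain tokens:
u = torch.randn(2, 3, 256)
print(CodomainAttention(256)(u).shape)       # torch.Size([2, 3, 256])
```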
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
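The NeuRLP solver mentioned above builds on the reduction of linear ODEs to linear programs. A bare-bones, off-the-shelf version of that reduction (not the paper's relaxed formulation) looks like this:

```python
# Bare-bones version of the fact NeuRLP builds on: a discretized linear
# ODE can be posed as a linear program. Solves y' = -y, y(0) = 1 by
# minimizing summed absolute residuals of an implicit Euler step.
import numpy as np
from scipy.optimize import linprog

N, T = 50, 2.0
dt = T / N
# Variables: y_0 .. y_N, s_1 .. s_N  (slacks bounding |residual|).
n_y, n_s = N + 1, N
c = np.concatenate([np.zeros(n_y), np.ones(n_s)])     # minimize sum of slacks

# Residual r_i = (y_i - y_{i-1})/dt + y_i ; enforce -s_i <= r_i <= s_i.
A_ub, b_ub = [], []
for i in range(1, N + 1):
    row = np.zeros(n_y + n_s)
    row[i] = 1.0 / dt + 1.0          # y_i coefficient
    row[i - 1] = -1.0 / dt           # y_{i-1} coefficient
    row[n_y + i - 1] = -1.0          # -s_i
    A_ub.append(row.copy()); b_ub.append(0.0)         #  r_i - s_i <= 0
    row2 = -row; row2[n_y + i - 1] = -1.0
    A_ub.append(row2); b_ub.append(0.0)               # -r_i - s_i <= 0

A_eq = np.zeros((1, n_y + n_s)); A_eq[0, 0] = 1.0     # y_0 = 1
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(None, None)] * n_y + [(0, None)] * n_s)
y = res.x[:n_y]
print(y[-1], np.exp(-T))             # implicit-Euler-accurate e^{-2}
```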
- A foundational neural operator that continuously learns without forgetting [1.0878040851638]
We introduce the concept of the Neural Combinatorial Wavelet Neural Operator (NCWNO) as a foundational model for scientific computing.
The NCWNO is specifically designed to excel in learning from a diverse spectrum of physics and continuously adapt to the solution operators associated with parametric partial differential equations (PDEs).
The proposed foundational model offers two key advantages: (i) it can simultaneously learn solution operators for multiple parametric PDEs, and (ii) it can swiftly generalize to new parametric PDEs with minimal fine-tuning.
arXiv Detail & Related papers (2023-10-29T03:20:10Z)
- Neural Operators for Accelerating Scientific Simulations and Design [85.89660065887956]
Neural Operators provide a principled framework for learning mappings between functions defined on continuous domains.
Neural Operators can augment or even replace existing simulators in many applications, such as computational fluid dynamics, weather forecasting, and material modeling.
arXiv Detail & Related papers (2023-09-27T00:12:07Z)
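This entry covers the operator-learning family broadly; one widely used building block in that family is the Fourier layer of FNO-style models. A minimal 1-D version, included as our illustration rather than anything specific to this paper:

```python
# Minimal 1-D Fourier layer, one well-known instance of a neural-operator
# building block (an illustration; this survey covers the family broadly).
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes                     # number of Fourier modes kept
        scale = 1.0 / channels
        self.w = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, u):                      # u: (batch, channels, n_grid)
        u_hat = torch.fft.rfft(u)              # to Fourier space
        out = torch.zeros_like(u_hat)
        out[:, :, :self.modes] = torch.einsum(
            "bcm,com->bom", u_hat[:, :, :self.modes], self.w)
        return torch.fft.irfft(out, n=u.shape[-1])   # back to physical space

u = torch.randn(8, 4, 128)                     # weights are resolution-free
print(SpectralConv1d(4, modes=16)(u).shape)    # torch.Size([8, 4, 128])
```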
- MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning [53.90744622542961]
Reasoning in mathematical domains remains a significant challenge for small language models (LMs).
We introduce a new method that exploits existing mathematical problem datasets with diverse annotation styles.
Experimental results show that our strategy enables a LLaMA-7B model to outperform prior approaches.
arXiv Detail & Related papers (2023-07-16T05:41:53Z)
- Self-Supervised Learning with Lie Symmetries for Partial Differential Equations [25.584036829191902]
We learn general-purpose representations of PDEs by implementing joint embedding methods for self-supervised learning (SSL).
Our representation outperforms baseline approaches to invariant tasks, such as regressing the coefficients of a PDE, while also improving the time-stepping performance of neural solvers.
We hope that our proposed methodology will prove useful in the eventual development of general-purpose foundation models for PDEs.
arXiv Detail & Related papers (2023-07-11T16:52:22Z)
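A toy version of the joint-embedding recipe above: two views of a solution snapshot related by a Lie symmetry (here, periodic space translation, a symmetry of many PDEs with periodic boundaries) are pulled together by a contrastive loss. The paper's augmentations, architecture, and loss details differ; only the shape of the method is shown.

```python
# Toy joint-embedding SSL for PDE snapshots: views related by a Lie
# symmetry (periodic space translation) share an embedding. Encoder,
# loss, and shapes are illustrative, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))

def views(u):
    # Symmetry augmentation: random periodic shift along the space axis.
    shift = int(torch.randint(0, u.shape[-1], ()))
    return u, torch.roll(u, shift, dims=-1)

def info_nce(z1, z2, tau=0.1):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / tau                   # similarity of all pairs
    labels = torch.arange(len(z1))             # positives on the diagonal
    return F.cross_entropy(logits, labels)

u = torch.randn(32, 128)                       # batch of solution snapshots
v1, v2 = views(u)
loss = info_nce(encoder(v1), encoder(v2))
loss.backward()                                # trains representations that
print(float(loss))                             # downstream tasks reuse
```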
- GNOT: A General Neural Operator Transformer for Operator Learning [34.79481320566005]
The General Neural Operator Transformer (GNOT) is a scalable and effective framework for learning operators.
By designing a novel heterogeneous normalized attention layer, our model is highly flexible to handle multiple input functions and irregular meshes.
The large model capacity of the transformer architecture grants our model the possibility to scale to large datasets and practical problems.
arXiv Detail & Related papers (2023-02-28T07:58:49Z)
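GNOT's handling of multiple input functions and irregular meshes comes from attention between query locations and input-function tokens; below is a skeletal version in which the paper's heterogeneous normalized attention is replaced by standard cross-attention, with invented shapes.

```python
# Skeleton of the GNOT idea: arbitrary query locations cross-attend to
# tokens from several input functions, so irregular meshes and multiple
# inputs are handled uniformly. The paper's heterogeneous normalized
# attention is simplified to standard attention here.
import torch
import torch.nn as nn

class CrossAttnOperator(nn.Module):
    def __init__(self, d=64, heads=4):
        super().__init__()
        self.embed_q = nn.Linear(2, d)         # (x, y) query coordinates
        self.embed_f = nn.Linear(3, d)         # (x, y, value) input samples
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.head = nn.Linear(d, 1)

    def forward(self, query_xy, func_tokens):
        # query_xy: (batch, n_queries, 2); func_tokens: (batch, n_tokens, 3),
        # the concatenated samples of all input functions, on any mesh.
        q = self.embed_q(query_xy)
        kv = self.embed_f(func_tokens)
        out, _ = self.attn(q, kv, kv)
        return self.head(out)                  # predicted field at queries

xy = torch.rand(2, 500, 2)                     # irregular evaluation points
tokens = torch.rand(2, 800, 3)                 # samples from input functions
print(CrossAttnOperator()(xy, tokens).shape)   # torch.Size([2, 500, 1])
```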