DGNet: Discrete Green Networks for Data-Efficient Learning of Spatiotemporal PDEs
- URL: http://arxiv.org/abs/2603.01762v1
- Date: Mon, 02 Mar 2026 11:40:27 GMT
- Title: DGNet: Discrete Green Networks for Data-Efficient Learning of Spatiotemporal PDEs
- Authors: Yingjie Tan, Quanming Yao, Yaqing Wang
- Abstract summary: We propose DGNet, a network for data-efficient learning of spatiotemporal PDEs. It embeds the superposition principle into a hybrid physics-neural architecture, which reduces the burden of learning physical priors from data. It consistently achieves state-of-the-art accuracy using only tens of training trajectories.
- Score: 33.03129178100678
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spatiotemporal partial differential equations (PDEs) underpin a wide range of scientific and engineering applications. Neural PDE solvers offer a promising alternative to classical numerical methods. However, existing approaches typically require large numbers of training trajectories, while high-fidelity PDE data are expensive to generate. Under limited data, their performance degrades substantially, highlighting their low data efficiency. A key reason is that PDE dynamics embody strong structural inductive biases that are not explicitly encoded in neural architectures, forcing models to learn fundamental physical structure from data. A particularly salient manifestation of this inefficiency is poor generalization to unseen source terms. In this work, we revisit Green's function theory, a cornerstone of PDE theory, as a principled source of structural inductive bias for PDE learning. Based on this insight, we propose DGNet, a discrete Green network for data-efficient learning of spatiotemporal PDEs. The key idea is to transform the Green's function into a graph-based discrete formulation, and embed the superposition principle into the hybrid physics-neural architecture, which reduces the burden of learning physical priors from data, thereby improving sample efficiency. Across diverse spatiotemporal PDE scenarios, DGNet consistently achieves state-of-the-art accuracy using only tens of training trajectories. Moreover, it exhibits robust zero-shot generalization to unseen source terms, serving as a stress test that highlights its data-efficient structural design.
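The superposition principle the abstract builds on can be made concrete with a classical discrete Green's function. The sketch below is a generic illustration of the underlying theory, not DGNet's actual architecture: for a linear operator, column j of the Green's matrix is the response to a unit source at node j, so the solution for any source term is a superposition of unit-source responses.

```python
import numpy as np

# Discrete 1D Poisson problem -u'' = f with zero Dirichlet boundary
# conditions on n interior points; L is the standard three-point
# finite-difference Laplacian.
n, h = 64, 1.0 / 65
L = (np.diag(np.full(n, -2.0))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

# Discrete Green's matrix: column j is the response to a unit source
# at node j, so every solution is a weighted sum of these columns.
G = np.linalg.inv(-L)

rng = np.random.default_rng(0)
f1, f2 = rng.standard_normal(n), rng.standard_normal(n)

# Superposition: the response to a sum of sources equals the sum of
# the individual responses (linearity of the Green's operator).
u_sum = G @ (f1 + f2)
u_parts = G @ f1 + G @ f2
print(np.allclose(u_sum, u_parts))  # True
```

This linearity is exactly what makes generalization to unseen source terms tractable: once the unit-source responses are known, new sources require no retraining, only a new weighted sum.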
Related papers
- PEGNet: A Physics-Embedded Graph Network for Long-Term Stable Multiphysics Simulation [8.95344024479836]
Physical phenomena governed by partial differential equations (PDEs) are important for scientific and engineering progress. PEGNet is a Physics-Embedded Graph Network that incorporates PDE-guided message passing into a redesigned graph neural network architecture. We show significant improvements in long-term prediction accuracy and physical consistency over existing methods.
arXiv Detail & Related papers (2025-11-11T19:02:16Z) - CodePDE: An Inference Framework for LLM-driven PDE Solver Generation [57.15474515982337]
Partial differential equations (PDEs) are fundamental to modeling physical systems. Traditional numerical solvers rely on expert knowledge to implement and are computationally expensive. We introduce CodePDE, the first inference framework for generating PDE solvers using large language models.
arXiv Detail & Related papers (2025-05-13T17:58:08Z) - Paving the way for scientific foundation models: enhancing generalization and robustness in PDEs with constraint-aware pre-training [49.8035317670223]
A scientific foundation model (SciFM) is emerging as a promising tool for learning transferable representations across diverse domains. We propose incorporating PDE residuals into pre-training, either as the sole learning signal or in combination with data loss, to compensate for limited or infeasible training data. Our results show that pre-training with PDE constraints significantly enhances generalization, outperforming models trained solely on solution data.
arXiv Detail & Related papers (2025-03-24T19:12:39Z) - Mechanistic PDE Networks for Discovery of Governing Equations [52.492158106791365]
We present Mechanistic PDE Networks, a model for discovery of partial differential equations from data. The represented PDEs are then solved and decoded for specific tasks. We develop a native, GPU-capable, parallel, sparse, and differentiable multigrid solver specialized for linear partial differential equations.
arXiv Detail & Related papers (2025-02-25T17:21:44Z) - PhyMPGN: Physics-encoded Message Passing Graph Network for spatiotemporal PDE systems [31.006807854698376]
We propose a new graph learning approach, namely the Physics-encoded Message Passing Graph Network (PhyMPGN). We incorporate a GNN into a numerical integrator to approximate the temporal marching of spatiotemporal dynamics for a given PDE system. PhyMPGN is capable of accurately predicting various types of spatiotemporal dynamics on coarse unstructured meshes.
arXiv Detail & Related papers (2024-10-02T08:54:18Z) - Unisolver: PDE-Conditional Transformers Towards Universal Neural PDE Solvers [53.79279286773326]
We present Unisolver, a novel Transformer model trained on diverse data and conditioned on diverse PDEs. Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks, showing impressive performance and generalizability.
arXiv Detail & Related papers (2024-05-27T15:34:35Z) - Physics-informed Discretization-independent Deep Compositional Operator Network [1.2430809884830318]
We introduce a novel physics-informed model architecture which can generalize to various discrete representations of PDE parameters and irregular domain shapes.
Inspired by deep operator neural networks, our model learns discretization-independent parameter embeddings that are applied repeatedly.
Numerical results demonstrate the accuracy and efficiency of the proposed method.
arXiv Detail & Related papers (2024-04-21T12:41:30Z) - Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z) - Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This paper proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z) - Discovering Nonlinear PDEs from Scarce Data with Physics-encoded Learning [11.641708412097659]
We propose a physics-encoded discrete learning framework for discovering PDEs from noisy and scarce data.
We validate our method on three nonlinear PDE systems.
arXiv Detail & Related papers (2022-01-28T07:49:48Z) - PhyCRNet: Physics-informed Convolutional-Recurrent Network for Solving Spatiotemporal PDEs [8.220908558735884]
Partial differential equations (PDEs) play a fundamental role in modeling and simulating problems across a wide range of disciplines.
Recent advances in deep learning have shown the great potential of physics-informed neural networks (NNs) to solve PDEs as a basis for data-driven inverse analysis.
We propose the novel physics-informed convolutional-recurrent learning architectures (PhyCRNet and PhyCRNet-s) for solving PDEs without any labeled data.
arXiv Detail & Related papers (2021-06-26T22:22:19Z)
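Several of the papers listed above (e.g., PhyMPGN, PhyCRNet) embed a learned spatial operator inside a classical time integrator. Below is a minimal sketch of that general pattern, with a hand-coded finite-difference heat operator standing in for the neural network; the function names and discretization are illustrative assumptions, not code from any of the papers.

```python
import numpy as np

def rk4_step(f, u, dt):
    # Classical fourth-order Runge-Kutta step. In a physics-encoded
    # model, f would be a learned (e.g., graph-network) spatial operator.
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def heat_op(u, h=1.0 / 64):
    # Stand-in for a neural operator: finite-difference u_xx with zero
    # Dirichlet boundaries, so the dynamics are du/dt = u_xx.
    d2 = np.zeros_like(u)
    d2[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    return d2

# March a sine initial condition forward in time; diffusion damps it.
u = np.sin(np.pi * np.linspace(0.0, 1.0, 65))
for _ in range(200):
    u = rk4_step(heat_op, u, dt=5e-5)
print(u.max())  # amplitude has decayed below its initial value of 1.0
```

The design point shared by these methods is that the integrator encodes the known temporal structure exactly, so the network only has to learn the spatial right-hand side rather than the full time-stepping map.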
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.