Imposing Boundary Conditions on Neural Operators via Learned Function Extensions
- URL: http://arxiv.org/abs/2602.04923v1
- Date: Wed, 04 Feb 2026 08:28:43 GMT
- Title: Imposing Boundary Conditions on Neural Operators via Learned Function Extensions
- Authors: Sepehr Mousavi, Siddhartha Mishra, Laura De Lorenzis
- Abstract summary: We propose a framework for conditioning neural operators on complex non-homogeneous BCs through function extensions. We construct a benchmark of 18 challenging datasets spanning Poisson, linear elasticity, and hyperelasticity problems. Our results demonstrate that learning boundary-to-domain extensions is an effective and practical strategy for imposing complex BCs in existing neural operator frameworks.
- Score: 13.031092961445104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural operators have emerged as powerful surrogates for the solution of partial differential equations (PDEs), yet their ability to handle general, highly variable boundary conditions (BCs) remains limited. Existing approaches often fail when the solution operator exhibits strong sensitivity to boundary forcings. We propose a general framework for conditioning neural operators on complex non-homogeneous BCs through function extensions. Our key idea is to map boundary data to latent pseudo-extensions defined over the entire spatial domain, enabling any standard operator learning architecture to consume boundary information. The resulting operator, coupled with an arbitrary domain-to-domain neural operator, can learn rich dependencies on complex BCs and input domain functions at the same time. To benchmark this setting, we construct 18 challenging datasets spanning Poisson, linear elasticity, and hyperelasticity problems, with highly variable, mixed-type, component-wise, and multi-segment BCs on diverse geometries. Our approach achieves state-of-the-art accuracy, outperforming baselines by large margins, while requiring no hyperparameter tuning across datasets. Overall, our results demonstrate that learning boundary-to-domain extensions is an effective and practical strategy for imposing complex BCs in existing neural operator frameworks, enabling accurate and robust scientific machine learning models for a broader range of PDE-governed problems.
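To make the mechanism concrete, the following is a minimal sketch of a learned boundary-to-domain extension, assuming hypothetical module names, shapes, and a kernel-attention pooling; it illustrates the idea rather than the authors' implementation. Boundary samples are encoded and pooled onto domain query points, yielding a latent pseudo-extension field that any standard domain-to-domain operator can take as an extra input channel.

```python
# Minimal sketch of the boundary-to-domain "pseudo-extension" idea from the
# abstract. All names, shapes, and the attention pooling are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class BoundaryExtension(nn.Module):
    """Maps boundary samples (x_b, g_b) to a latent field on domain points x_d."""
    def __init__(self, bc_dim=1, latent_dim=32, coord_dim=2):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(coord_dim + bc_dim, 64),
                                    nn.GELU(), nn.Linear(64, latent_dim))
        self.query = nn.Linear(coord_dim, latent_dim)

    def forward(self, x_b, g_b, x_d):
        # x_b: (B, Nb, coord_dim), g_b: (B, Nb, bc_dim), x_d: (B, Nd, coord_dim)
        feats = self.encode(torch.cat([x_b, g_b], dim=-1))        # (B, Nb, L)
        attn = torch.softmax(self.query(x_d) @ feats.transpose(1, 2)
                             / feats.shape[-1] ** 0.5, dim=-1)    # (B, Nd, Nb)
        return attn @ feats   # (B, Nd, L): latent pseudo-extension on the domain

# The latent field would be concatenated channel-wise with the domain input
# functions and fed to any domain-to-domain neural operator (e.g., an FNO).
```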
Related papers
- A Boundary Integral-based Neural Operator for Mesh Deformation [10.460831049056761]
This paper presents an efficient mesh deformation method based on boundary integration and neural operators. A key technical advantage of our framework is the mathematical decoupling of the physical integration process from the geometric representation. Numerical experiments, including large deformations of flexible beams and rigid-body motions of NACA airfoils, confirm the model's high accuracy and strict adherence to the principles of linearity and superposition.
arXiv Detail & Related papers (2026-02-27T06:09:07Z)
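For context, here is an illustrative sketch (not the paper's implementation) of a boundary-integral-style neural operator for mesh deformation: interior displacement is a quadrature sum over boundary nodes with a learned kernel, so the learned component is decoupled from any particular mesh and is linear in the boundary data.

```python
# Illustrative sketch: u(x) = sum_j w_j * K_theta(x, y_j) @ g(y_j) over
# boundary nodes y_j. The kernel network and shapes are assumptions.
import torch
import torch.nn as nn

class BoundaryIntegralDeform(nn.Module):
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        # K_theta maps a (query, boundary) point pair to a dim x dim kernel block.
        self.kernel = nn.Sequential(nn.Linear(2 * dim, hidden), nn.GELU(),
                                    nn.Linear(hidden, dim * dim))
        self.dim = dim

    def forward(self, x_int, y_bnd, g_bnd, w_bnd):
        # x_int: (Ni, d) interior nodes; y_bnd: (Nb, d) boundary nodes
        # g_bnd: (Nb, d) boundary displacements; w_bnd: (Nb,) quadrature weights
        pairs = torch.cat([x_int[:, None].expand(-1, y_bnd.shape[0], -1),
                           y_bnd[None].expand(x_int.shape[0], -1, -1)], dim=-1)
        K = self.kernel(pairs).view(x_int.shape[0], y_bnd.shape[0],
                                    self.dim, self.dim)
        # Weighted boundary integral; linear in g_bnd, hence superposition holds.
        return torch.einsum('ijkl,jl,j->ik', K, g_bnd, w_bnd)
```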
- Geometric Laplace Neural Operator [12.869633759181417]
We propose a generalized operator learning framework based on a pole-residue decomposition enriched with exponential basis functions. We introduce the Geometric Laplace Neural Operator (GLNO), which embeds the Laplace spectral representation into the eigen-basis of the Laplace-Beltrami operator. We further design a grid-invariant network architecture (GLNONet) that realizes GLNO in practice.
arXiv Detail & Related papers (2025-12-18T11:07:41Z)
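The spirit of a Laplace-Beltrami spectral layer can be sketched as follows, with the names, shapes, and real-valued per-mode mixing being illustrative assumptions: node features are projected onto precomputed Laplace-Beltrami eigenvectors, mixed per mode with learned weights, and projected back, a mesh-aware analogue of a Fourier layer.

```python
# Sketch of a spectral layer in a Laplace-Beltrami eigenbasis (illustrative,
# not GLNO itself). Phi holds precomputed LB eigenvectors on the mesh.
import torch
import torch.nn as nn

class LBSpectralLayer(nn.Module):
    def __init__(self, channels, n_modes):
        super().__init__()
        # One learnable weight per (mode, channel-in, channel-out).
        self.weight = nn.Parameter(torch.randn(n_modes, channels, channels) * 0.02)

    def forward(self, f, phi, mass):
        # f: (N, C) node features; phi: (N, K) LB eigenvectors;
        # mass: (N,) lumped mass weights defining the mesh inner product.
        coeffs = phi.T @ (mass[:, None] * f)                     # (K, C) spectrum
        mixed = torch.einsum('kc,kcd->kd', coeffs, self.weight)  # per-mode mixing
        return phi @ mixed                                       # back to nodes
```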
- From Local Interactions to Global Operators: Scalable Gaussian Process Operator for Physical Systems [7.807210884802377]
We introduce a novel, scalable GPO that capitalizes on sparsity, locality, and structural information through judicious kernel design. We demonstrate that our framework consistently achieves high accuracy across varying discretization scales.
arXiv Detail & Related papers (2025-06-18T22:40:52Z)
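One way locality buys scalability is a compactly supported kernel, which makes the Gram matrix sparse. The sketch below uses a Wendland C2 kernel and a k-d tree, both illustrative choices rather than the paper's specific kernel design.

```python
# Sparse Gram matrix from a compactly supported kernel (illustrative only).
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix

def sparse_wendland_gram(x, radius):
    # x: (N, d) points. Only pairs within `radius` contribute nonzeros.
    tree = cKDTree(x)
    pairs = tree.query_pairs(radius, output_type='ndarray')  # (M, 2), i < j
    r = np.linalg.norm(x[pairs[:, 0]] - x[pairs[:, 1]], axis=1) / radius
    k = (1 - r) ** 4 * (4 * r + 1)                # Wendland C2, support [0, 1]
    rows = np.concatenate([pairs[:, 0], pairs[:, 1], np.arange(len(x))])
    cols = np.concatenate([pairs[:, 1], pairs[:, 0], np.arange(len(x))])
    vals = np.concatenate([k, k, np.ones(len(x))])  # symmetric, unit diagonal
    return csr_matrix((vals, (rows, cols)), shape=(len(x), len(x)))
```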
- FB-HyDON: Parameter-Efficient Physics-Informed Operator Learning of Complex PDEs via Hypernetwork and Finite Basis Domain Decomposition [0.0]
Deep operator networks (DeepONet) and neural operators have gained significant attention for their ability to map between infinite-dimensional function spaces and perform zero-shot super-resolution.
We introduce Finite Basis Physics-Informed HyperDeepONet (FB-HyDON), an advanced operator architecture featuring intrinsic domain decomposition.
By leveraging hypernetworks and finite basis functions, FB-HyDON effectively mitigates the training limitations associated with existing physics-informed operator learning methods.
arXiv Detail & Related papers (2024-09-13T21:41:59Z)
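The hypernetwork pattern referenced above can be sketched minimally as follows (the descriptor, target-network shape, and activation are assumptions): a small generator network emits the weights of a target network that is then evaluated on local coordinates of a subdomain.

```python
# Minimal hypernetwork sketch (illustrative, not FB-HyDON's architecture).
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    def __init__(self, desc_dim=4, hidden=32, t_in=2, t_h=16):
        super().__init__()
        self.t_in, self.t_h = t_in, t_h
        n_params = t_in * t_h + t_h + t_h            # W1, b1, w2 of target net
        self.gen = nn.Sequential(nn.Linear(desc_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_params))

    def forward(self, desc, x):
        # desc: (desc_dim,) subdomain descriptor; x: (N, t_in) local coords.
        p = self.gen(desc)
        W1 = p[: self.t_in * self.t_h].view(self.t_h, self.t_in)
        b1 = p[self.t_in * self.t_h: self.t_in * self.t_h + self.t_h]
        w2 = p[-self.t_h:]
        # Evaluate the generated target network on the local coordinates.
        return torch.tanh(x @ W1.T + b1) @ w2        # (N,) scalar field
```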
- Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs [93.82811501035569]
We introduce a new data-efficient and highly parallelizable operator learning approach with reduced memory requirements and better generalization.
MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena.
We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over 150x compression.
arXiv Detail & Related papers (2023-09-29T20:18:52Z)
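The compression idea behind tensorized Fourier layers can be illustrated with a factorized spectral convolution. The sketch below uses a simple CP-style factorization in 1D as a stand-in; the paper's tensorization and multi-grid domain decomposition differ.

```python
# Factorized Fourier layer sketch (illustrative compression, not MG-TFNO).
import torch
import torch.nn as nn

class FactorizedSpectralConv1d(nn.Module):
    def __init__(self, channels, n_modes, rank):
        super().__init__()
        # The full weight would be (n_modes, C, C) complex; store thin factors.
        self.a = nn.Parameter(torch.randn(n_modes, rank, dtype=torch.cfloat) * 0.02)
        self.b = nn.Parameter(torch.randn(channels, rank, dtype=torch.cfloat) * 0.02)
        self.c = nn.Parameter(torch.randn(channels, rank, dtype=torch.cfloat) * 0.02)
        self.n_modes = n_modes

    def forward(self, u):
        # u: (B, C, N) real signal on a uniform grid, with n_modes <= N//2 + 1.
        U = torch.fft.rfft(u, dim=-1)[..., : self.n_modes]          # (B, C, M)
        W = torch.einsum('mr,ir,or->mio', self.a, self.b, self.c)   # (M, C, C)
        V = torch.einsum('bim,mio->bom', U, W)        # mix channels per mode
        out = torch.zeros(u.shape[0], u.shape[1], u.shape[-1] // 2 + 1,
                          dtype=torch.cfloat, device=u.device)
        out[..., : self.n_modes] = V                  # zero-pad high modes
        return torch.fft.irfft(out, n=u.shape[-1], dim=-1)
```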
- Neural Operators for Accelerating Scientific Simulations and Design [85.89660065887956]
Neural Operators present a principled framework for learning mappings between functions defined on continuous domains.
Neural Operators can augment or even replace existing simulators in many applications, such as computational fluid dynamics, weather forecasting, and material modeling.
arXiv Detail & Related papers (2023-09-27T00:12:07Z)
- Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z)
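For a concrete (and much simpler) instance of hard constraint enforcement, a neural field can satisfy Dirichlet data exactly via the classical ansatz u(x) = g(x) + d(x)·N(x), with d vanishing on the boundary. This zeroth-order construction is only an illustration, not the paper's method for arbitrary differential orders.

```python
# Hard Dirichlet constraint via a vanishing-distance ansatz (classic trick,
# illustrative only): u = g on the boundary for any network parameters.
import torch
import torch.nn as nn

class HardDirichletField(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, g):
        # x: (N, 2) points in the unit square; g: (N, 1) extended boundary data.
        # d(x) = x1(1-x1)x2(1-x2) vanishes exactly on the boundary of [0,1]^2.
        d = x[:, :1] * (1 - x[:, :1]) * x[:, 1:] * (1 - x[:, 1:])
        return g + d * self.net(x)
```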
- Understanding Overparameterization in Generative Adversarial Networks [56.57403335510056]
Training Generative Adversarial Networks (GANs) requires solving non-convex-concave min-max optimization problems.
Prior theory has shown the importance of overparameterization for the convergence of gradient descent (GD) to globally optimal solutions.
We show that in an overparameterized GAN with a one-hidden-layer neural network generator and a linear discriminator, gradient descent-ascent (GDA) converges to a global saddle point of the underlying non-convex-concave min-max problem.
arXiv Detail & Related papers (2021-04-12T16:23:37Z)
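A toy example of the gradient descent-ascent (GDA) dynamics analyzed above, on a simple quadratic min-max problem with a unique saddle at the origin (the paper's overparameterized GAN setting is far richer):

```python
# Simultaneous GDA on min_x max_y f(x, y) = x^2/2 - y^2/2 + x*y.
# The unique saddle point is (0, 0), and GDA spirals into it.
import numpy as np

x, y, lr = 2.0, -1.5, 0.1
for step in range(200):
    gx = x + y                        # df/dx
    gy = -y + x                       # df/dy
    x, y = x - lr * gx, y + lr * gy   # descend in x, ascend in y
print(f"after 200 GDA steps: x={x:.2e}, y={y:.2e}")  # both approach 0
```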
- Deep Parametric Continuous Convolutional Neural Networks [92.87547731907176]
Parametric Continuous Convolution is a new learnable operator that operates over non-grid-structured data.
Our experiments show significant improvement over the state-of-the-art in point cloud segmentation of indoor and outdoor scenes.
arXiv Detail & Related papers (2021-01-17T18:28:23Z)
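The operator can be sketched directly from its defining formula, h_i = Σ_{j∈N(i)} g_θ(x_j − x_i)·f_j, where an MLP g_θ produces kernel values from continuous offsets; names and shapes below are illustrative.

```python
# Parametric continuous convolution sketch: the kernel is an MLP over
# continuous offsets, so the layer applies to arbitrary point sets.
import torch
import torch.nn as nn

class ContinuousConv(nn.Module):
    def __init__(self, c_in, c_out, coord_dim=3, hidden=32):
        super().__init__()
        self.kernel = nn.Sequential(nn.Linear(coord_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, c_in * c_out))
        self.c_in, self.c_out = c_in, c_out

    def forward(self, x, f, nbr_idx):
        # x: (N, d) coords; f: (N, c_in) features; nbr_idx: (N, K) neighbor ids.
        offsets = x[nbr_idx] - x[:, None, :]                   # (N, K, d)
        W = self.kernel(offsets).view(*nbr_idx.shape, self.c_out, self.c_in)
        return torch.einsum('nkoi,nki->no', W, f[nbr_idx])     # sum over nbrs
```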
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
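For context, the standard adversarial-training pattern (inner maximization, outer minimization) is sketched below with an input-space FGSM step; the paper above perturbs a learned attribute space instead, so this is only an illustration of the training loop.

```python
# Generic adversarial training step (input-space FGSM, illustrative only).
import torch
import torch.nn.functional as F

def adversarial_step(model, x, y, optimizer, eps=0.03):
    # Inner max: one signed-gradient ascent step on the loss w.r.t. the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x_adv + eps * grad.sign()).detach()
    # Outer min: update model parameters on the perturbed batch.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```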