Generalized Spherical Neural Operators: Green's Function Formulation
- URL: http://arxiv.org/abs/2512.10723v1
- Date: Thu, 11 Dec 2025 15:05:33 GMT
- Title: Generalized Spherical Neural Operators: Green's Function Formulation
- Authors: Hao Tang, Hao Chen, Chao Li
- Abstract summary: We propose a general operator-design framework based on the designable spherical Green's function and its harmonic expansion. We develop GSHNet, a hierarchical architecture that combines multi-scale spectral modeling with spherical up-down sampling. Our results position GSNO as a principled and general framework for spherical operator learning, bridging rigorous theory with real-world complexity.
- Score: 15.000285739440466
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural operators offer powerful approaches for solving parametric partial differential equations, but extending them to spherical domains remains challenging due to the need to preserve intrinsic geometry while avoiding distortions that break rotational consistency. Existing spherical operators rely on rotational equivariance but often lack the flexibility for real-world complexity. We propose a general operator-design framework based on the designable spherical Green's function and its harmonic expansion, establishing a solid operator-theoretic foundation for spherical learning. Based on this, we propose an absolute and relative position-dependent Green's function that enables a flexible balance of equivariance and invariance for real-world modeling. The resulting operator, the Green's-function Spherical Neural Operator (GSNO) with a novel spectral learning method, can adapt to anisotropic, constraint-rich systems while retaining spectral efficiency. To exploit GSNO, we develop GSHNet, a hierarchical architecture that combines multi-scale spectral modeling with spherical up-down sampling, enhancing global feature representation. In evaluations on diffusion MRI, shallow water dynamics, and global weather forecasting, GSNO and GSHNet consistently outperform state-of-the-art methods. Our results position GSNO as a principled and general framework for spherical operator learning, bridging rigorous theory with real-world complexity.
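The core spectral idea behind such operators can be sketched concisely. For a rotation-equivariant (isotropic) spherical Green's function, the Funk-Hecke theorem says the operator acts in spherical-harmonic coefficient space as a per-degree multiplier. The sketch below is a minimal illustration of that baseline, not the paper's full GSNO, which additionally makes the multipliers position-dependent to trade off equivariance against invariance; all names here are illustrative.

```python
import numpy as np

def apply_green_operator(coeffs, g):
    """Apply a rotation-equivariant spherical operator in coefficient space.

    By the Funk-Hecke theorem, convolving a function on the sphere with an
    isotropic (zonal) Green's function multiplies each spherical-harmonic
    coefficient f_{lm} by a per-degree scalar g_l:

        (K f)_{lm} = g_l * f_{lm}

    coeffs : list where coeffs[l] is a length-(2l+1) array of the
             degree-l coefficients f_{l,m}, m = -l..l.
    g      : per-degree multipliers g[l], the learnable harmonic
             expansion of the Green's function.
    """
    return [g[l] * coeffs[l] for l in range(len(coeffs))]

# Toy input: degrees l = 0, 1, 2.
f = [np.array([1.0]),
     np.array([0.5, -0.5, 0.25]),
     np.array([0.1, 0.0, 0.2, -0.1, 0.3])]
g = np.array([1.0, 0.5, 0.0])  # damp l=1, remove l=2: a low-pass Green's function

out = apply_green_operator(f, g)
```

Learning `g` directly in coefficient space is what gives these operators their spectral efficiency: the cost is linear in the number of retained coefficients, and rotation equivariance holds by construction.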
Related papers
- PhyG-MoE: A Physics-Guided Mixture-of-Experts Framework for Energy-Efficient GNSS Interference Recognition [49.955269674859004]
This paper introduces PhyG-MoE (Physics-Guided Mixture-of-Experts), a framework designed to align model capacity with signal complexity. Unlike static architectures, the proposed system employs a spectrum-based gating mechanism that routes signals based on their spectral feature entanglement. A high-capacity TransNeXt expert is activated on-demand to disentangle complex features in saturated scenarios, while lightweight experts handle fundamental signals to minimize latency.
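A spectrum-based gate of this kind can be illustrated with a toy router that measures how spread out a signal's power spectrum is and dispatches accordingly. This is a hedged sketch of the general idea only; the paper's actual gating mechanism, entanglement measure, and threshold are not reproduced here.

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy (nats) of the normalized FFT power spectrum.
    Low entropy: energy concentrated in few bins (a simple signal).
    High entropy: energy spread across many bins (an entangled/saturated mix)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def route(signal, threshold=1.0):
    """Toy gate: spectrally simple signals go to a lightweight expert,
    spectrally entangled ones to the high-capacity expert."""
    return "heavy_expert" if spectral_entropy(signal) > threshold else "light_expert"

t = np.arange(256)
tone = np.sin(2 * np.pi * 10 * t / 256)          # single spectral line
noise_like = np.fft.irfft(np.ones(129), n=256)   # flat magnitude spectrum
```

A pure tone concentrates its power in one bin (entropy near zero) and is routed to the light expert, while a flat-spectrum signal has entropy near log(N) and triggers the heavy expert.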
arXiv Detail & Related papers (2026-01-19T07:57:52Z) - Neural Operators for Biomedical Spherical Heterogeneity [17.99803254208791]
We introduce a designable Green's function framework (DGF) to provide a new spherical operator solution strategy. Based on DGF, we propose the Green's-Function Spherical Neural Operator (GSNO), fusing three operator solutions. GSNO can adapt to real-world heterogeneous systems with nuisance variability and anisotropy.
arXiv Detail & Related papers (2026-01-07T04:01:25Z) - Geometric Laplace Neural Operator [12.869633759181417]
We propose a generalized operator learning framework based on a pole-residue decomposition enriched with exponential basis functions. We introduce the Geometric Laplace Neural Operator (GLNO), which embeds the Laplace spectral representation into the eigen-basis of the Laplace-Beltrami operator. We further design a grid-invariant network architecture (GLNONet) that realizes GLNO in practice.
arXiv Detail & Related papers (2025-12-18T11:07:41Z) - Fourier Neural Operators Explained: A Practical Perspective [75.12291469255794]
The Fourier Neural Operator (FNO) has become the most influential and widely adopted neural operator due to its elegant spectral formulation. This guide aims to establish a clear and reliable framework for applying FNOs effectively across diverse scientific and engineering fields.
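The spectral formulation at the heart of the FNO reduces to a simple recipe: transform, truncate, mix, transform back. The 1-D sketch below shows one linear Fourier layer under that recipe; a full FNO wraps this in a pointwise linear path and a nonlinearity, and learns the complex weights.

```python
import numpy as np

def fourier_layer(u, weights, n_modes):
    """One linear Fourier layer on a 1-D periodic signal:
    FFT -> keep the lowest n_modes frequencies -> multiply each kept mode
    by a learned complex weight -> inverse FFT.

    u       : real array of shape (n,)
    weights : complex array of shape (n_modes,)
    """
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]  # spectral truncation + mixing
    return np.fft.irfft(out_hat, n=len(u))

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(5 * x)
w = np.ones(8, dtype=complex)        # identity weights on the first 8 modes
v = fourier_layer(u, w, n_modes=8)   # both sines fall below mode 8, so u survives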
arXiv Detail & Related papers (2025-12-01T08:56:21Z) - Equivariant U-Shaped Neural Operators for the Cahn-Hilliard Phase-Field Model [4.79907962230318]
We show that an equivariant U-shaped neural operator (E-UNO) can learn the evolution of the phase-field variable from short histories of past dynamics. By encoding symmetry and scale hierarchy, the model generalizes better, requires less training data, and yields physically consistent dynamics.
arXiv Detail & Related papers (2025-09-01T09:25:31Z) - GFocal: A Global-Focal Neural Operator for Solving PDEs on Arbitrary Geometries [5.323843026995587]
Transformer-based neural operators have emerged as promising surrogate solvers for partial differential equations. We propose GFocal, a method that enforces simultaneous global and local feature learning and fusion. Experiments show that GFocal achieves state-of-the-art performance with an average 15.2% relative gain in five out of six benchmarks.
arXiv Detail & Related papers (2025-08-06T14:02:39Z) - SG-Blend: Learning an Interpolation Between Improved Swish and GELU for Robust Neural Representations [8.276787575807392]
This work introduces SG-Blend, a novel activation function that blends our proposed SSwish and the established GELU. By adaptively blending these constituent functions via learnable parameters, SG-Blend aims to harness their complementary strengths.
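The blending mechanism can be sketched as a convex combination with a learnable mixing weight. Note the hedge: the paper's SSwish is an improved Swish variant whose exact form is not reproduced here, so standard Swish is used as a stand-in.

```python
import math

def gelu(x):
    """Exact GELU: x * Phi(x), with Phi the standard Gaussian CDF."""
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def swish(x, beta=1.0):
    """Swish: x * sigmoid(beta * x). Stand-in for the paper's SSwish,
    whose exact form differs from plain Swish."""
    return x / (1.0 + math.exp(-beta * x))

def sg_blend(x, alpha=0.5):
    """Convex blend of the two activations:
        f(x) = alpha * swish(x) + (1 - alpha) * gelu(x)
    In training, alpha (and beta) would be learnable parameters
    optimized jointly with the network weights."""
    return alpha * swish(x) + (1.0 - alpha) * gelu(x)
```

At alpha = 0 or 1 the blend recovers one constituent exactly; intermediate values interpolate their shapes, which is what lets the network pick the mixture per layer.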
arXiv Detail & Related papers (2025-05-29T18:48:18Z) - NeuralGrok: Accelerate Grokking by Neural Gradient Transformation [54.65707216563953]
We propose NeuralGrok, a gradient-based approach that learns an optimal gradient transformation to accelerate generalization of transformers in arithmetic tasks. Our experiments demonstrate that NeuralGrok significantly accelerates generalization, particularly in challenging arithmetic tasks. We also show that NeuralGrok promotes a more stable training paradigm, consistently reducing the model's complexity.
arXiv Detail & Related papers (2025-04-24T04:41:35Z) - Herglotz-NET: Implicit Neural Representation of Spherical Data with Harmonic Positional Encoding [4.2412715094420665]
Implicit neural representations (INRs) have emerged as a promising alternative for high-fidelity data representation. Herglotz-NET (HNET) is a novel INR architecture that employs a harmonic positional encoding based on complex Herglotz mappings. Our results establish HNET as a scalable and flexible framework for accurate modeling of spherical data.
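A Herglotz mapping evaluates exp(<x, w>) at points x on the sphere, where w is a complex 3-vector satisfying the Herglotz condition w.w = 0; the real and imaginary parts then serve as encoding features. The sketch below is an illustrative construction under that condition and is not the paper's exact encoding, whose details (frequency scaling, number of mappings) may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def herglotz_vector(rng):
    """Draw w = a + i*b in C^3 with a.b = 0 and |a| = |b|, so that
    w.w = |a|^2 - |b|^2 + 2i*(a.b) = 0 (the Herglotz condition)."""
    a = rng.normal(size=3)
    b = rng.normal(size=3)
    b -= (a @ b) / (a @ a) * a                    # make b orthogonal to a
    b *= np.linalg.norm(a) / np.linalg.norm(b)    # match norms
    return a + 1j * b

def herglotz_encoding(x, ws):
    """Positional encoding of points on S^2: real and imaginary parts of
    exp(<x, w_k>) for a bank of Herglotz vectors.
    x : (n, 3) unit vectors; ws : list of complex 3-vectors."""
    feats = [np.exp(x @ w) for w in ws]
    return np.concatenate([np.real(feats), np.imag(feats)], axis=0).T

x = rng.normal(size=(10, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)  # 10 points on the unit sphere
ws = [herglotz_vector(rng) for _ in range(4)]
E = herglotz_encoding(x, ws)                   # shape (10, 8)
```

The condition w.w = 0 is what makes each exp(<x, w>) a harmonic function of x, so the encoding stays consistent with the spherical-harmonic structure of the data.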
arXiv Detail & Related papers (2025-02-19T14:40:02Z) - On the Convergence of (Stochastic) Gradient Descent for Kolmogorov--Arnold Networks [56.78271181959529]
Kolmogorov--Arnold Networks (KANs) have gained significant attention in the deep learning community.
Empirical investigations demonstrate that KANs optimized via gradient descent (SGD) are capable of achieving near-zero training loss.
arXiv Detail & Related papers (2024-10-10T15:34:10Z) - DimINO: Dimension-Informed Neural Operator Learning [41.37905663176428]
DimINO is a framework inspired by dimensional analysis. It can be seamlessly integrated into existing neural operator architectures. It achieves up to 76.3% performance gain on PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
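Because backpropagation computes gradients from the last layer backward, a straggler that runs out of time still holds valid updates for the deepest layers it reached. The server can then merge whatever each client finished, layer by layer. The sketch below illustrates that idea only; it is not the paper's exact aggregation rule, and all names are illustrative.

```python
import numpy as np

def aggregate_layerwise(global_model, client_updates):
    """Straggler-aware layer-wise aggregation (illustrative sketch).
    Each client reports new parameters for the layers it finished;
    the server averages each layer over whichever clients reported it,
    keeping the old global parameters when nobody did.

    global_model   : list of per-layer parameter arrays
    client_updates : list of dicts {layer_index: new_params}
    """
    new_model = []
    for l, params in enumerate(global_model):
        contribs = [u[l] for u in client_updates if l in u]
        new_model.append(np.mean(contribs, axis=0) if contribs else params)
    return new_model

global_model = [np.zeros(2), np.zeros(2)]
updates = [{0: np.ones(2), 1: np.ones(2)},  # fast client finished both layers
           {1: 3 * np.ones(2)}]             # straggler only reached the last layer
merged = aggregate_layerwise(global_model, updates)
```

Averaging per layer rather than per client lets every round use all available work, so latency is bounded by the deadline instead of by the slowest device.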
arXiv Detail & Related papers (2024-03-27T09:14:36Z) - Spherical Fourier Neural Operators: Learning Stable Dynamics on the Sphere [53.63505583883769]
We introduce Spherical FNOs (SFNOs) for learning operators on spherical geometries.
SFNOs have important implications for machine learning-based simulation of climate dynamics.
arXiv Detail & Related papers (2023-06-06T16:27:17Z)