Point-Calibrated Spectral Neural Operators
- URL: http://arxiv.org/abs/2410.11382v1
- Date: Tue, 15 Oct 2024 08:19:39 GMT
- Title: Point-Calibrated Spectral Neural Operators
- Authors: Xihang Yue, Linchao Zhu, Yi Yang
- Abstract summary: We introduce the Point-Calibrated Spectral Transform, which calibrates preset spectral eigenfunctions with predicted point-wise frequency preferences.
Point-Calibrated Spectral Neural Operators learn operator mappings by approximating functions with the resulting point-level adaptive spectral basis.
- Score: 54.13671100638092
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Two types of neural models have been extensively studied for operator learning: learning in spatial space via attention mechanisms, and learning in spectral space via spectral analysis techniques such as the Fourier Transform. Spatial learning enables point-level flexibility but lacks a global continuity constraint, while spectral learning enforces a spectral continuity prior but lacks point-wise adaptivity. This work combines the continuity prior with point-level flexibility through the introduced Point-Calibrated Spectral Transform, which calibrates the preset spectral eigenfunctions with a predicted point-wise frequency preference via a neural gate mechanism. Building on this, we introduce Point-Calibrated Spectral Neural Operators, which learn operator mappings by approximating functions with the point-level adaptive spectral basis, thereby preserving the benefits of the spectral prior while offering adaptability comparable to the attention mechanism. Comprehensive experiments demonstrate consistent performance improvements across a wide range of PDE solving scenarios.
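The gating idea the abstract describes (preset spectral basis functions modulated by a predicted point-wise frequency preference) can be illustrated in code. The PyTorch snippet below is a rough sketch under stated assumptions: a 1D sine basis, a sigmoid gate, and the hypothetical class name `PointCalibratedSpectralLayer` are all illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PointCalibratedSpectralLayer(nn.Module):
    """Illustrative sketch: calibrate a preset Fourier-type basis with a
    learned point-wise gate, then project and reconstruct in that basis."""
    def __init__(self, channels, n_modes, grid_size):
        super().__init__()
        # Preset spectral eigenfunctions: a real sine basis on a 1D grid
        # (the paper's actual basis choice is an assumption here).
        x = torch.linspace(0.0, 1.0, grid_size)
        k = torch.arange(1, n_modes + 1).float()
        basis = torch.sin(torch.pi * k[:, None] * x[None, :])  # (modes, grid)
        self.register_buffer("basis", basis)
        # Neural gate: predicts a per-point, per-mode frequency preference.
        self.gate = nn.Sequential(
            nn.Conv1d(channels, channels, 1), nn.GELU(),
            nn.Conv1d(channels, n_modes, 1), nn.Sigmoid(),
        )
        self.mix = nn.Conv1d(channels, channels, 1)

    def forward(self, u):                       # u: (batch, channels, grid)
        g = self.gate(u)                        # (batch, modes, grid)
        calibrated = g * self.basis             # point-calibrated basis
        # Project the function onto the calibrated basis ...
        coeff = torch.einsum("bcx,bmx->bcm", u, calibrated) / u.shape[-1]
        # ... and reconstruct point values from the spectral coefficients.
        out = torch.einsum("bcm,bmx->bcx", coeff, calibrated)
        return self.mix(out)

layer = PointCalibratedSpectralLayer(channels=32, n_modes=16, grid_size=128)
print(layer(torch.randn(4, 32, 128)).shape)     # torch.Size([4, 32, 128])
```

With the gate fixed at 1 this reduces to a plain spectral projection (the continuity prior); letting the gate vary per point is what buys the attention-like adaptivity.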
Related papers
- Avoided-crossings, degeneracies and Berry phases in the spectrum of quantum noise through analytic Bloch-Messiah decomposition [49.1574468325115]
"analytic Bloch-Messiah decomposition" provides approach for characterizing dynamics of quantum optical systems.
We show that avoided crossings arise naturally when a single parameter is varied, leading to hypersensitivity of the singular vectors.
We highlight the possibility of programming the spectral response of photonic systems through the deliberate design of avoided crossings.
arXiv Detail & Related papers (2025-04-29T13:14:15Z) - CARL: Camera-Agnostic Representation Learning for Spectral Image Analysis [75.25966323298003]
Spectral imaging offers promising applications across diverse domains, including medicine and urban scene understanding.
Variability in channel dimensionality and captured wavelengths among spectral cameras impedes the development of AI-driven methodologies.
We introduce CARL, a model for Camera-Agnostic Representation Learning across RGB, multispectral, and hyperspectral imaging modalities.
arXiv Detail & Related papers (2025-04-27T13:06:40Z) - Paving the way for scientific foundation models: enhancing generalization and robustness in PDEs with constraint-aware pre-training [49.8035317670223]
A scientific foundation model (SciFM) is emerging as a promising tool for learning transferable representations across diverse domains.
We propose incorporating PDE residuals into pre-training, either as the sole learning signal or in combination with a data loss, to compensate for limited or infeasible training data.
Our results show that pre-training with PDE constraints significantly enhances generalization, outperforming models trained solely on solution data.
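The recipe is easy to state generically: penalise the PDE residual, optionally mixed with a data loss. Below is a minimal PyTorch sketch assuming a 1D heat equation as the constraint; the equation, the weighting `lam`, and the tiny MLP are illustrative choices, not the paper's setup.

```python
import torch

def heat_residual(model, x, t, alpha=0.1):
    """PDE residual u_t - alpha * u_xx for a model u(x, t).
    The heat equation is just an illustrative choice of constraint."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = model(torch.stack([x, t], dim=-1)).squeeze(-1)
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - alpha * u_xx

def pretrain_loss(model, x, t, u_obs=None, lam=1.0):
    """Residual-only when no data is available; otherwise residual + data."""
    loss = heat_residual(model, x, t).pow(2).mean()
    if u_obs is not None:  # mix in a data loss when observations exist
        pred = model(torch.stack([x, t], dim=-1)).squeeze(-1)
        loss = loss + lam * (pred - u_obs).pow(2).mean()
    return loss

model = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, 1))
x, t = torch.rand(256), torch.rand(256)
print(pretrain_loss(model, x, t).item())
```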
arXiv Detail & Related papers (2025-03-24T19:12:39Z) - Advancing Generalization in PINNs through Latent-Space Representations [71.86401914779019]
Physics-informed neural networks (PINNs) have made significant strides in modeling dynamical systems governed by partial differential equations (PDEs).
We propose PIDO, a novel physics-informed neural PDE solver designed to generalize effectively across diverse PDE configurations.
We validate PIDO on a range of benchmarks, including 1D combined equations and 2D Navier-Stokes equations.
arXiv Detail & Related papers (2024-11-28T13:16:20Z) - Exploring the Generalization Capabilities of AID-based Bi-level Optimization [50.3142765099442]
We present two types of bi-level optimization methods: approximate implicit differentiation (AID)-based and iterative differentiation (ITD)-based approaches.
AID-based methods cannot be easily transformed into single-level problems but must retain the two-level structure.
We demonstrate the effectiveness and potential applications of these methods on real-world tasks.
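For readers unfamiliar with AID, the hypergradient it approximates combines the direct gradient with an inverse-Hessian-vector product from the inner problem. The sketch below shows one common AID variant, a Neumann-series approximation of the inverse inner Hessian computed with Hessian-vector products; it is illustrative, not the paper's exact algorithm.

```python
import torch

def aid_hypergradient(f_outer, g_inner, x, y, eta=0.1, k=50):
    """Sketch of an AID hypergradient for f(x, y*(x)), y*(x) = argmin_y g(x, y).
    Approximates H^{-1} (df/dy) with a Neumann series, H = d2g/dy2."""
    fy = torch.autograd.grad(f_outer(x, y), y)[0]
    gy = torch.autograd.grad(g_inner(x, y), y, create_graph=True)[0]
    # Neumann series: H^{-1} b ~= eta * sum_i (I - eta*H)^i b.
    v, p = fy.clone(), fy.clone()
    for _ in range(k):
        hvp = torch.autograd.grad(gy, y, grad_outputs=p, retain_graph=True)[0]
        p = p - eta * hvp
        v = v + p
    v = eta * v
    # Cross term -d2g/dxdy applied to v, plus the direct gradient df/dx.
    gx_cross = torch.autograd.grad(gy, x, grad_outputs=v, retain_graph=True)[0]
    fx = torch.autograd.grad(f_outer(x, y), x, allow_unused=True)[0]
    return (fx if fx is not None else 0.0) - gx_cross

# Toy check: inner g = 0.5*||y - A x||^2 gives y*(x) = A x, so the
# analytic hypergradient of f = 0.5*||y||^2 is A^T A x.
A = torch.randn(3, 3)
x = torch.randn(3, requires_grad=True)
y = (A @ x).detach().requires_grad_(True)      # start at the inner solution
f = lambda x, y: 0.5 * (y ** 2).sum()
g = lambda x, y: 0.5 * ((y - A @ x) ** 2).sum()
print(aid_hypergradient(f, g, x, y))           # ~= A^T A x
```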
arXiv Detail & Related papers (2024-11-25T04:22:17Z) - Pseudospectral method for solving PDEs using Matrix Product States [0.0]
This research focuses on solving time-dependent partial differential equations (PDEs) using matrix product states (MPS)<n>We propose an extension of Hermite Distributed Approximating Functionals (HDAF) to MPS, a highly accurate pseudospectral method for approximating functions of derivatives.
arXiv Detail & Related papers (2024-09-04T17:53:38Z) - Spectral Adapter: Fine-Tuning in Spectral Space [45.72323731094864]
We study the enhancement of current PEFT methods by incorporating the spectral information of pretrained weight matrices into the fine-tuning procedure.
We show through extensive experiments that the proposed fine-tuning model enables better parameter efficiency and tuning performance as well as benefits multi-adapter fusion.
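One simple way to bring the spectral information of a pretrained weight matrix into fine-tuning is to freeze the weight, compute its SVD once, and train only a small additive update to the top-r singular values. This is a hedged sketch of the general idea, not necessarily the paper's parametrisation.

```python
import torch
import torch.nn as nn

class SpectralFineTuner(nn.Module):
    """Sketch of spectral-space fine-tuning: freeze the pretrained weight,
    take its SVD, and train only a shift of the top-r singular values."""
    def __init__(self, pretrained_weight, r=8):
        super().__init__()
        U, S, Vh = torch.linalg.svd(pretrained_weight, full_matrices=False)
        self.register_buffer("W0", pretrained_weight)
        self.register_buffer("U", U[:, :r])      # top-r left singular vectors
        self.register_buffer("Vh", Vh[:r, :])    # top-r right singular vectors
        self.delta = nn.Parameter(torch.zeros(r))  # trainable spectral shift

    def forward(self, x):
        # Effective weight: frozen pretrained matrix plus a low-rank update
        # along its own leading singular directions.
        W = self.W0 + self.U @ torch.diag(self.delta) @ self.Vh
        return x @ W.T

ft = SpectralFineTuner(torch.randn(64, 32), r=8)
print(ft(torch.randn(5, 32)).shape)                                 # (5, 64)
print(sum(p.numel() for p in ft.parameters() if p.requires_grad))   # 8
```

Note the parameter count: only r trainable scalars per weight matrix, which is where the parameter efficiency claim comes from.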
arXiv Detail & Related papers (2024-05-22T19:36:55Z) - An inversion problem for optical spectrum data via physics-guided machine learning [0.0]
We propose the regularized recurrent inference machine (rRIM) to solve the problem of deriving the pairing glue function from measured optical spectra.
The rRIM incorporates physical principles into both training and inference and affords noise robustness, flexibility with out-of-distribution data, and reduced data requirements.
arXiv Detail & Related papers (2024-04-03T01:02:06Z) - Neural Spectral Methods: Self-supervised learning in the spectral domain [0.0]
We present Neural Spectral Methods, a technique to solve parametric Partial Differential Equations (PDEs).
Our method uses orthogonal bases to learn PDE solutions as mappings between spectral coefficients.
Our experimental results demonstrate that our method significantly outperforms previous machine learning approaches in terms of speed and accuracy.
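The coefficient-to-coefficient mapping can be sketched as follows, assuming a Fourier basis and a plain MLP (both illustrative; per its title, the paper trains self-supervised in the spectral domain rather than with the supervised setup implied here).

```python
import torch
import torch.nn as nn

class SpectralCoefficientSolver(nn.Module):
    """Sketch: expand the PDE parameter in a Fourier basis, map its
    coefficients to solution coefficients with an MLP, and evaluate the
    solution by inverse transform."""
    def __init__(self, n_modes=32):
        super().__init__()
        self.n_modes = n_modes
        self.net = nn.Sequential(nn.Linear(2 * n_modes, 256), nn.GELU(),
                                 nn.Linear(256, 2 * n_modes))

    def forward(self, a):                 # a: (batch, grid) PDE parameter
        c = torch.fft.rfft(a)[:, :self.n_modes]          # analysis
        c = torch.view_as_real(c).flatten(1)             # complex -> real
        c = self.net(c).view(-1, self.n_modes, 2)        # coefficient map
        c = torch.view_as_complex(c.contiguous())
        full = torch.zeros(a.shape[0], a.shape[1] // 2 + 1,
                           dtype=torch.complex64)
        full[:, :self.n_modes] = c
        return torch.fft.irfft(full, n=a.shape[1])       # synthesis

solver = SpectralCoefficientSolver()
print(solver(torch.randn(4, 128)).shape)  # torch.Size([4, 128])
```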
arXiv Detail & Related papers (2023-12-08T18:20:43Z) - Neural oscillators for generalization of physics-informed machine
learning [1.893909284526711]
A primary challenge of physics-informed machine learning (PIML) is its generalization beyond the training domain.
This paper aims to enhance the generalization capabilities of PIML, facilitating practical, real-world applications.
We leverage the inherent causality and temporal sequential characteristics of PDE solutions to fuse PIML models with recurrent neural architectures.
arXiv Detail & Related papers (2023-08-17T13:50:03Z) - ESSAformer: Efficient Transformer for Hyperspectral Image
Super-resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z) - Boosting the Generalization Ability for Hyperspectral Image Classification using Spectral-spatial Axial Aggregation Transformer [14.594398447576188]
In the hyperspectral image classification (HSIC) task, the most commonly used model validation paradigm is partitioning the training-test dataset through pixel-wise random sampling.
In our experiments, we found that this high accuracy is reached because the training and test datasets share a large amount of information.
We propose a spectral-spatial axial aggregation transformer model, namely SaaFormer, that preserves generalization across dataset partitions.
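The information-sharing problem is easy to reproduce: under pixel-wise random sampling, nearly every test pixel has an immediate training neighbour, while a block-wise partition avoids this. The NumPy toy check below is illustrative only; the grid size, 4-neighbourhood, and split ratio are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
coords = np.stack(np.meshgrid(np.arange(H), np.arange(W),
                              indexing="ij"), -1).reshape(-1, 2)

def adjacency_leak(train_mask):
    """Fraction of test pixels with a 4-neighbour in the training set."""
    train = set(map(tuple, coords[train_mask]))
    test = coords[~train_mask]
    hits = sum(any((i + di, j + dj) in train
                   for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)])
               for i, j in test)
    return hits / len(test)

# Pixel-wise random split: train/test pixels are interleaved.
random_mask = rng.random(H * W) < 0.5
# Block-wise split: left half trains, right half tests.
block_mask = coords[:, 1] < W // 2

print(f"random split leak: {adjacency_leak(random_mask):.2f}")  # ~0.94
print(f"block  split leak: {adjacency_leak(block_mask):.2f}")   # ~0.03
```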
arXiv Detail & Related papers (2023-06-29T07:55:43Z) - Learning Neural Eigenfunctions for Unsupervised Semantic Segmentation [12.91586050451152]
Spectral clustering is a theoretically grounded solution to unsupervised semantic segmentation, in which spectral embeddings for pixels are computed to construct distinct clusters.
Current approaches still suffer from inefficiencies in spectral decomposition and inflexibility in applying them to the test data.
This work addresses these issues by casting spectral clustering as a parametric approach that employs neural network-based eigenfunctions to produce spectral embeddings.
In practice, the neural eigenfunctions are lightweight and take the features from pre-trained models as inputs, improving training efficiency and unleashing the potential of pre-trained models for dense prediction.
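As a simplified illustration of parametric spectral embeddings (not the paper's exact objective), one can amortise classical spectral clustering: regress a small network, fed with pre-trained features, onto eigenvectors of a normalised graph Laplacian computed on training data, then reuse the network on unseen pixels. All quantities below are synthetic stand-ins.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
feats = torch.randn(500, 64)                 # stand-in for pre-trained features
affinity = torch.exp(-torch.cdist(feats, feats) ** 2 / 64)
deg = affinity.sum(1)
lap = torch.eye(500) - affinity / torch.sqrt(deg[:, None] * deg[None, :])
evals, evecs = torch.linalg.eigh(lap)
target = evecs[:, 1:5]                       # 4 smallest nontrivial modes

# Lightweight eigenfunction model: features in, spectral embedding out.
net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 4))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(500):
    opt.zero_grad()
    loss = (net(feats) - target).pow(2).mean()
    loss.backward()
    opt.step()

new_feats = torch.randn(10, 64)              # unseen pixels
print(net(new_feats).shape)                  # spectral embeddings: (10, 4)
```

The point of the amortisation is the last two lines: test-time embeddings come from a forward pass, with no new eigendecomposition.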
arXiv Detail & Related papers (2023-04-06T03:14:15Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - Spectral Decomposition Representation for Reinforcement Learning [100.0424588013549]
We propose an alternative spectral method, Spectral Decomposition Representation (SPEDER), that extracts a state-action abstraction from the dynamics without inducing spurious dependence on the data collection policy.
A theoretical analysis establishes the sample efficiency of the proposed algorithm in both the online and offline settings.
An experimental investigation demonstrates superior performance over current state-of-the-art algorithms across several benchmarks.
arXiv Detail & Related papers (2022-08-19T19:01:30Z) - Momentum Diminishes the Effect of Spectral Bias in Physics-Informed
Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z) - Robust, Nonparametric, Efficient Decomposition of Spectral Peaks under
Distortion and Interference [0.0]
We propose a decomposition method for the spectral peaks in an observed frequency spectrum, where the spectrum is efficiently acquired via the Fast Fourier Transform.
We model the peaks in the spectrum as pseudo-symmetric functions, whose only constraint is nonincreasing behavior around a central frequency as the distance from it increases.
Our approach is more robust against arbitrary distortion, interference and noise on the spectrum that may be caused by an observation system.
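A minimal sketch of the described setup on synthetic data: acquire the spectrum with the FFT, then grow each peak outward from its centre while the magnitude is nonincreasing (the pseudo-symmetry constraint). `decompose_peaks` is a hypothetical simplification, not the paper's algorithm.

```python
import numpy as np

def decompose_peaks(spectrum):
    """Repeatedly take the largest remaining bin as a peak centre and grow
    it outward while the magnitude is nonincreasing; stop at 10% of the
    global maximum (an arbitrary threshold for this demo)."""
    s = spectrum.copy()
    peaks = []
    while s.max() > 0.1 * spectrum.max():
        c = int(np.argmax(s))
        lo = hi = c
        while lo > 0 and 0 < s[lo - 1] <= s[lo]:
            lo -= 1
        while hi < len(s) - 1 and 0 < s[hi + 1] <= s[hi]:
            hi += 1
        peaks.append((c, lo, hi))
        s[lo:hi + 1] = 0.0                  # remove the extracted peak
    return peaks

# Two tones plus noise; the spectrum is acquired with the FFT.
t = np.arange(1024) / 1024
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
x += 0.05 * np.random.default_rng(1).standard_normal(1024)
mag = np.abs(np.fft.rfft(x))
print([c for c, lo, hi in decompose_peaks(mag)])   # bins near 50 and 120
```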
arXiv Detail & Related papers (2022-04-18T17:08:37Z) - Reconstructing spectral functions via automatic differentiation [30.015034534260664]
Reconstructing spectral functions from Euclidean Green's functions is an important inverse problem in many-body physics.
We propose an automatic differentiation (AD) framework as a generic tool for spectral reconstruction from propagator observables.
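The generic AD recipe can be sketched as follows: parametrise a positive spectral function on a grid, push it through a known integral kernel to obtain the propagator, and fit observed data by gradient descent. The Källén-Lehmann-type kernel and the Gaussian ground truth below are illustrative assumptions for the demo.

```python
import torch

omega = torch.linspace(0.01, 10.0, 200)        # spectral-function grid
p = torch.linspace(0.1, 5.0, 50)               # propagator momenta
K = omega / (omega ** 2 + p[:, None] ** 2)     # Kallen-Lehmann-type kernel
d_omega = omega[1] - omega[0]

rho_true = torch.exp(-(omega - 3.0) ** 2)      # synthetic ground truth
G_obs = K @ rho_true * d_omega                 # synthetic "measured" data

theta = torch.zeros_like(omega, requires_grad=True)
opt = torch.optim.Adam([theta], lr=0.05)
for step in range(2000):
    opt.zero_grad()
    rho = torch.nn.functional.softplus(theta)  # positivity built in
    loss = ((K @ rho * d_omega - G_obs) ** 2).mean()
    loss.backward()                            # gradients via AD
    opt.step()

print(loss.item())                             # small propagator residual
```

Because the inverse problem is ill-posed, a small residual does not guarantee the recovered rho matches rho_true; this is where the regularisation and physics priors discussed in such papers come in.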
arXiv Detail & Related papers (2021-11-29T18:09:49Z) - Pseudo-Spherical Contrastive Divergence [119.28384561517292]
We propose pseudo-spherical contrastive divergence (PS-CD) to generalize maximum likelihood learning of energy-based models.
PS-CD avoids the intractable partition function and provides a generalized family of learning objectives.
arXiv Detail & Related papers (2021-11-01T09:17:15Z) - On Learning Rates and Schr\"odinger Operators [105.32118775014015]
We present a general theoretical analysis of the effect of the learning rate.
We find that the learning rate tends to zero for a broad class of non-neural functions.
arXiv Detail & Related papers (2020-04-15T09:52:37Z) - Convolutional Spectral Kernel Learning [21.595130250234646]
We build an interpretable convolutional spectral kernel network (CSKN) based on the inverse Fourier transform.
We derive the generalization error bounds and introduce two regularizers to improve the performance.
Experimental results on real-world datasets validate the effectiveness of the learning framework.
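Bochner's theorem connects a stationary kernel to the inverse Fourier transform of its spectral density, which is the usual starting point for spectral kernels. The sketch below uses random-Fourier-type features with learnable frequencies as a generic stand-in; it is not the paper's CSKN architecture.

```python
import torch
import torch.nn as nn

class SpectralKernelFeatures(nn.Module):
    """Spectral kernel via Bochner's theorem: k(x, y) ~= phi(x)^T phi(y),
    with random-Fourier-type features whose frequencies are learnable."""
    def __init__(self, in_dim, n_features=256):
        super().__init__()
        self.freq = nn.Parameter(torch.randn(in_dim, n_features))
        self.phase = nn.Parameter(2 * torch.pi * torch.rand(n_features))

    def forward(self, x):                       # x: (batch, in_dim)
        proj = x @ self.freq + self.phase
        return torch.cos(proj) * (2.0 / self.freq.shape[1]) ** 0.5

feat = SpectralKernelFeatures(in_dim=8)
x, y = torch.randn(4, 8), torch.randn(4, 8)
k = feat(x) @ feat(y).T                         # approximate kernel matrix
print(k.shape)                                  # torch.Size([4, 4])
```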
arXiv Detail & Related papers (2020-02-28T14:35:54Z)