Physics-Informed Neural Networks with Fourier Features and Attention-Driven Decoding
- URL: http://arxiv.org/abs/2510.05385v1
- Date: Mon, 06 Oct 2025 21:23:09 GMT
- Title: Physics-Informed Neural Networks with Fourier Features and Attention-Driven Decoding
- Authors: Rohan Arni, Carlos Blanco
- Abstract summary: We present the Spectral PINNsformer (S-Pformer), a refinement of encoder-decoder PINNsformers that addresses two key issues. We find that the encoder is unnecessary for capturing spatiotemporal correlations when relying solely on self-attention, thereby reducing parameter count. Our model outperforms encoder-decoder PINNsformer architectures across all benchmarks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-Informed Neural Networks (PINNs) are a useful framework for approximating partial differential equation solutions using deep learning methods. In this paper, we propose a principled redesign of the PINNsformer, a Transformer-based PINN architecture. We present the Spectral PINNsformer (S-Pformer), a refinement of encoder-decoder PINNsformers that addresses two key issues: (1) the redundancy (i.e., increased parameter count) of the encoder, and (2) spectral bias. We find that the encoder is unnecessary for capturing spatiotemporal correlations when relying solely on self-attention, thereby reducing parameter count. Further, we integrate Fourier feature embeddings to explicitly mitigate spectral bias, enabling adaptive encoding of multiscale behaviors in the frequency domain. Our model outperforms encoder-decoder PINNsformer architectures across all benchmarks, matching or outperforming MLP performance while significantly reducing parameter count.
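The two design choices above (Fourier feature embeddings and a decoder-only self-attention stack) are straightforward to prototype. Below is a minimal, hypothetical PyTorch sketch; the frequency scale, layer sizes, and attention configuration are illustrative assumptions, not the paper's exact settings.

```python
import math
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Map raw (x, t) coordinates to [sin(2*pi*v@B), cos(2*pi*v@B)].

    B is a fixed random Gaussian frequency matrix (as in Tancik et al.,
    2020); the scale sigma is an assumed hyperparameter, not a value
    taken from the paper.
    """
    def __init__(self, in_dim=2, n_freqs=64, sigma=5.0):
        super().__init__()
        self.register_buffer("B", torch.randn(in_dim, n_freqs) * sigma)

    def forward(self, v):                 # v: (batch, seq, in_dim)
        proj = 2 * math.pi * (v @ self.B)
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

class SPformerSketch(nn.Module):
    """Decoder-only PINN sketch: Fourier features -> stacked
    self-attention blocks -> scalar field u(x, t). No separate encoder;
    self-attention over the embedded coordinate sequence captures
    spatiotemporal correlations directly.
    """
    def __init__(self, n_freqs=64, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = FourierFeatures(n_freqs=n_freqs)
        self.proj = nn.Linear(2 * n_freqs, d_model)
        block = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=256,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(block, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, xt):                # xt: (batch, seq, 2)
        h = self.proj(self.embed(xt))
        return self.head(self.blocks(h))  # (batch, seq, 1)
```

A PDE residual loss would then be built by differentiating the predicted field with respect to the input coordinates via torch.autograd.grad, as in any PINN.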
Related papers
- FUTON: Fourier Tensor Network for Implicit Neural Representations [56.48739018255443]
Implicit neural representations (INRs) have emerged as powerful tools for encoding signals, yet dominant designs often suffer from slow convergence, overfitting to noise, and poor extrapolation. We introduce FUTON, which models signals as generalized Fourier series whose coefficients are parameterized by a low-rank tensor decomposition.
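As a rough, hypothetical illustration of that idea (the basis choice, rank, and initialization below are assumptions, not FUTON's actual design), a 2D signal can be modeled as a Fourier series whose coefficient matrix is kept in low-rank factored form:

```python
import torch
import torch.nn as nn

class LowRankFourier2D(nn.Module):
    """f(x, y) ~= sum_{m,n} C[m, n] * phi_m(x) * phi_n(y), where the
    coefficient matrix C is held in rank-R factored form C = A @ B.T.
    A generic sketch of 'Fourier series + low-rank coefficients'.
    """
    def __init__(self, n_modes=32, rank=8):
        super().__init__()
        self.A = nn.Parameter(torch.randn(2 * n_modes, rank) * 0.01)
        self.B = nn.Parameter(torch.randn(2 * n_modes, rank) * 0.01)
        self.register_buffer("freqs", torch.arange(1, n_modes + 1).float())

    def basis(self, t):                      # t: (N,) 1D coordinates
        arg = torch.outer(t, self.freqs)     # (N, n_modes)
        return torch.cat([torch.sin(arg), torch.cos(arg)], dim=-1)

    def forward(self, x, y):                 # x, y: (N,) query points
        C = self.A @ self.B.T                # (2M, 2M) low-rank coefficients
        return torch.einsum("ni,ij,nj->n", self.basis(x), C, self.basis(y))
```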
arXiv Detail & Related papers (2026-02-13T19:31:44Z) - Iterative Training of Physics-Informed Neural Networks with Fourier-enhanced Features [7.1865646765394215]
Spectral bias, the tendency of neural networks to learn low-frequency features first, is a well-known issue. We propose IFeF-PINN, an algorithm for iterative training of PINNs with Fourier-enhanced features.
arXiv Detail & Related papers (2025-10-22T09:17:37Z) - Neural Decoders for Universal Quantum Algorithms [0.43553942673960666]
We introduce a modular attention-based neural decoder that learns gate-induced correlations. Our decoders achieve fast inference and logical error rates comparable to most-likely-error decoders. These results establish neural decoders as practical, versatile, and high-performance tools for quantum computing.
arXiv Detail & Related papers (2025-09-14T17:51:46Z) - A Lightweight U-like Network Utilizing Neural Memory Ordinary Differential Equations for Slimming the Decoder [13.123714410130912]
We propose three plug-and-play decoders by employing different discretization methods of the neural memory Ordinary Differential Equations (nmODEs). These decoders integrate features at various levels of abstraction by processing information from skip connections and performing numerical operations on the upward path. In summary, the proposed discretized nmODE decoders are capable of reducing the number of parameters by about 20%-50% and FLOPs by up to 74%, while possessing the potential to adapt to all U-like networks.
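A minimal sketch of what one such decoder stage might look like, assuming a forward-Euler discretization and the sin^2 nonlinearity of the original nmODE formulation; the paper's three decoders use other discretization schemes as well, and all details here are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EulerNmODEDecoderBlock(nn.Module):
    """One parameter-light decoder stage: fuse skip + upsampled features,
    then iterate a forward-Euler step of an nmODE-style update,
        y_{k+1} = y_k + h * (-y_k + sin^2(y_k + gamma(x))),
    where gamma is the only learned map. Step size and step count are
    assumed hyperparameters.
    """
    def __init__(self, channels, steps=4, h=0.5):
        super().__init__()
        self.gamma = nn.Conv2d(channels, channels, 3, padding=1)
        self.steps, self.h = steps, h

    def forward(self, skip, up):
        # Upsample the coarser feature map and fuse it with the skip path.
        x = skip + F.interpolate(up, size=skip.shape[-2:],
                                 mode="bilinear", align_corners=False)
        g = self.gamma(x)
        y = torch.zeros_like(g)              # memory state starts at zero
        for _ in range(self.steps):          # forward-Euler iterations
            y = y + self.h * (-y + torch.sin(y + g) ** 2)
        return y
```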
arXiv Detail & Related papers (2024-12-09T07:21:27Z) - On the Design and Performance of Machine Learning Based Error Correcting Decoders [3.8289109929360245]
We first consider the so-called single-label neural network (SLNN) and the multi-label neural network (MLNN) decoders which have been reported to achieve near maximum likelihood (ML) performance.
We then turn our attention to two transformer-based decoders: the error correction code transformer (ECCT) and the cross-attention message passing transformer (CrossMPT).
arXiv Detail & Related papers (2024-10-21T11:23:23Z) - Spectral Informed Neural Network: An Efficient and Low-Memory PINN [3.8534287291074354]
We propose a spectral-based neural network that substitutes the differential operator with a multiplication in the frequency domain.
Compared to PINNs, our approach requires less memory and shorter training time.
We provide two strategies for training networks using their spectral information.
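The substitution rests on a standard identity: on a periodic grid, differentiation in physical space becomes multiplication by i*k in Fourier space. A quick NumPy check of that identity (the paper's training strategies themselves are not reproduced here):

```python
import numpy as np

# Differentiation as multiplication in Fourier space: if u_hat = FFT(u),
# then FFT(du/dx) = i * k * u_hat on a periodic grid.
N = 256
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
u = np.sin(3 * x)                      # test signal with known derivative

k = np.fft.fftfreq(N, d=x[1] - x[0]) * 2 * np.pi  # angular wavenumbers
du_spectral = np.fft.ifft(1j * k * np.fft.fft(u)).real

# Exact (to machine precision) for this band-limited signal.
assert np.allclose(du_spectral, 3 * np.cos(3 * x), atol=1e-10)
```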
arXiv Detail & Related papers (2024-08-29T10:21:00Z) - Parametric Encoding with Attention and Convolution Mitigate Spectral Bias of Neural Partial Differential Equation Solvers [0.0]
Parametric Grid Convolutional Attention Networks (PGCANs) are used to solve partial differential equations (PDEs).
PGCANs parameterize the input space with a grid-based encoder whose parameters are connected to the output via a DNN decoder.
Our encoder provides a localized learning ability and uses convolution layers to avoid overfitting and improve the rate of information propagation from the boundaries to the interior of the domain.
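A stripped-down, hypothetical sketch of the grid-encoder-plus-MLP-decoder pattern described above; grid resolution, channel counts, and the omission of the attention components are assumptions made for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridEncoderMLP(nn.Module):
    """Trainable feature grid -> conv smoothing -> bilinear sampling at
    query points -> MLP decoder. A generic parametric-grid encoder in
    the spirit of PGCAN, not the actual architecture.
    """
    def __init__(self, grid=32, feat=16, hidden=64):
        super().__init__()
        self.grid = nn.Parameter(torch.randn(1, feat, grid, grid) * 0.1)
        self.conv = nn.Conv2d(feat, feat, 3, padding=1)  # spreads boundary info
        self.mlp = nn.Sequential(nn.Linear(feat, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, xy):                   # xy in [-1, 1]^2, shape (N, 2)
        g = self.conv(self.grid)
        pts = xy.reshape(1, -1, 1, 2)        # grid_sample sample locations
        f = F.grid_sample(g, pts, align_corners=True)  # (1, feat, N, 1)
        return self.mlp(f.squeeze(0).squeeze(-1).T)    # (N, 1)
```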
arXiv Detail & Related papers (2024-03-22T23:56:40Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
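One plausible instantiation of such a penalty, shown below purely for illustration, decorrelates bottleneck units by penalizing off-diagonal entries of their correlation matrix; the paper's exact objective may differ:

```python
import torch

def redundancy_penalty(z, eps=1e-8):
    """Penalize pairwise correlation between bottleneck units.

    z: (batch, d) bottleneck activations. Returns the mean squared
    off-diagonal entry of the feature correlation matrix.
    """
    z = z - z.mean(dim=0, keepdim=True)           # center each unit
    z = z / (z.std(dim=0, keepdim=True) + eps)    # unit variance
    corr = (z.T @ z) / z.shape[0]                 # (d, d) correlation matrix
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).mean()

# Usage: total_loss = reconstruction_loss + lam * redundancy_penalty(z)
```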
arXiv Detail & Related papers (2022-02-09T18:48:02Z) - Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization capability offers neural image compression (NIC) superior lossy compression performance.
However, distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We formulate the essential mathematical functions to describe the R-D behavior of NIC using deep networks and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z) - Variational Autoencoders: A Harmonic Perspective [79.49579654743341]
We study Variational Autoencoders (VAEs) from the perspective of harmonic analysis.
We show that the encoder variance of a VAE controls the frequency content of the functions parameterised by the VAE encoder and decoder neural networks.
arXiv Detail & Related papers (2021-05-31T10:39:25Z) - Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs.
Experiments on several benchmark datasets and neural architectures illustrate that the binary network learned with our method achieves state-of-the-art accuracy.
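The idea follows from the Fourier series of the square wave, s(x) = (4/pi) * sum_k sin((2k+1)x)/(2k+1). A hypothetical straight-through-style sketch that uses the derivative of a truncated series as the backward gradient; the number of terms and input scaling are assumptions:

```python
import math
import torch

class FourierSign(torch.autograd.Function):
    """Forward: exact sign. Backward: gradient of a truncated sine-series
    approximation of the square wave,
        sign(x) ~= (4/pi) * sum_{k=0}^{K-1} sin((2k+1) x) / (2k+1),
    so the surrogate gradient is (4/pi) * sum_{k=0}^{K-1} cos((2k+1) x).
    """
    K = 4  # number of series terms (assumed)

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        grad = torch.zeros_like(x)
        for k in range(FourierSign.K):
            grad = grad + torch.cos((2 * k + 1) * x)
        return grad_out * (4 / math.pi) * grad

# Usage inside a binary layer: w_bin = FourierSign.apply(w_real)
```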
arXiv Detail & Related papers (2021-03-01T08:25:26Z)