Content-Aware Frequency Encoding for Implicit Neural Representations with Fourier-Chebyshev Features
- URL: http://arxiv.org/abs/2603.01028v1
- Date: Sun, 01 Mar 2026 10:14:19 GMT
- Title: Content-Aware Frequency Encoding for Implicit Neural Representations with Fourier-Chebyshev Features
- Authors: Junbo Ke, Yangyang Xu, You-Wei Wen, Chao Wang
- Abstract summary: Implicit Neural Representations (INRs) have emerged as a powerful paradigm for various signal processing tasks. However, their inherent spectral bias limits the ability to capture high-frequency details. Existing methods partially mitigate this issue by using Fourier-based features, which usually rely on fixed frequency bases.
- Score: 14.632447227551864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit Neural Representations (INRs) have emerged as a powerful paradigm for various signal processing tasks, but their inherent spectral bias limits the ability to capture high-frequency details. Existing methods partially mitigate this issue by using Fourier-based features, which usually rely on fixed frequency bases. This forces multi-layer perceptrons (MLPs) to inefficiently compose the required frequencies, thereby constraining their representational capacity. To address this limitation, we propose Content-Aware Frequency Encoding (CAFE), which builds upon Fourier features through multiple parallel linear layers combined via a Hadamard product. CAFE can explicitly and efficiently synthesize a broader range of frequency bases, while the learned weights enable the selection of task-relevant frequencies. Furthermore, we extend this framework to CAFE+, which incorporates Chebyshev features as a complementary component to Fourier bases. This combination provides a stronger and more stable frequency representation. Extensive experiments across multiple benchmarks validate the effectiveness and efficiency of our approach, consistently achieving superior performance over existing methods. Our code is available at https://github.com/JunboKe0619/CAFE.
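The core idea of the abstract, combining parallel linear maps of Fourier features via a Hadamard product so that products of sinusoids synthesize new sum/difference frequencies, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see their repository for that); the shapes, branch count, and frequency scale here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(x, freqs):
    # x: (N, d) coordinates; freqs: (d, m) frequency matrix.
    proj = x @ freqs                                              # (N, m)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)  # (N, 2m)

def cafe_encoding(x, freqs, weight_list):
    # Hadamard product of several parallel linear maps of the same
    # Fourier features; multiplying sinusoids produces sum/difference
    # frequencies, broadening the effective frequency basis.
    gamma = fourier_features(x, freqs)                            # (N, 2m)
    out = np.ones((x.shape[0], weight_list[0].shape[1]))
    for W in weight_list:
        out *= gamma @ W      # elementwise (Hadamard) combination
    return out

x = rng.uniform(-1.0, 1.0, size=(4, 2))       # 4 sample coordinates in 2-D
freqs = rng.normal(scale=10.0, size=(2, 8))   # 8 base frequencies
weights = [rng.normal(size=(16, 32)) for _ in range(3)]  # 3 parallel branches
phi = cafe_encoding(x, freqs, weights)
print(phi.shape)  # (4, 32)
```

In the full method the weights of each branch are learned end-to-end, which is what lets the network select task-relevant frequencies rather than composing them inefficiently inside the MLP.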
Related papers
- Fourier Basis Mapping: A Time-Frequency Learning Framework for Time Series Forecasting [25.304812011127257]
We introduce a novel method for integrating time-frequency features through Fourier basis expansion and mapping in the time-frequency space. Our approach extracts explicit frequency features while preserving temporal characteristics. The results are validated on diverse real-world datasets for both long-term and short-term forecasting tasks.
arXiv Detail & Related papers (2025-07-13T01:45:27Z) - SPECTRE: An FFT-Based Efficient Drop-In Replacement to Self-Attention for Long Contexts [2.200751835496112]
Long-context transformers face significant efficiency challenges due to the quadratic cost of self-attention. We introduce SPECTRE, a method that replaces each attention head with a fast real FFT. We extend this efficiency to autoregressive generation through our Prefix-FFT cache and enhance local feature representation with an optional wavelet module.
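The appeal of FFT-based token mixing is that a real FFT along the sequence axis costs O(n log n) instead of attention's O(n^2). A minimal NumPy sketch of the generic pattern (spectral transform, learned filter, inverse transform) is shown below; it is a simplified illustration, not SPECTRE itself, and the identity filter is a placeholder for learned spectral weights.

```python
import numpy as np

def fft_token_mix(x, filt=None):
    # x: (seq_len, d_model). Mix information across tokens with a real
    # FFT along the sequence axis, apply a spectral filter, and invert.
    X = np.fft.rfft(x, axis=0)            # (seq_len // 2 + 1, d_model)
    if filt is None:
        filt = np.ones_like(X.real)       # placeholder for a learned filter
    return np.fft.irfft(X * filt, n=x.shape[0], axis=0)

x = np.random.default_rng(1).normal(size=(8, 4))
y = fft_token_mix(x)
print(np.allclose(y, x))  # True: the identity filter recovers the input
```

With a learned (non-identity) filter, each output token becomes a global, frequency-weighted mixture of all input tokens at O(n log n) cost.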
arXiv Detail & Related papers (2025-02-25T17:43:43Z) - Robustifying Fourier Features Embeddings for Implicit Neural Representations [25.725097757343367]
Implicit Neural Representations (INRs) employ neural networks to represent continuous functions by mapping coordinates to the corresponding values of the target function. INRs face a challenge known as spectral bias when dealing with scenes containing varying frequencies. We propose the use of multi-layer perceptrons (MLPs) without additive.
arXiv Detail & Related papers (2025-02-08T07:43:37Z) - FourierMamba: Fourier Learning Integration with State Space Models for Image Deraining [71.46369218331215]
Image deraining aims to remove rain streaks from rainy images and restore clear backgrounds.
We propose a new framework termed FourierMamba, which performs image deraining with Mamba in the Fourier space.
arXiv Detail & Related papers (2024-05-29T18:58:59Z) - Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
arXiv Detail & Related papers (2024-03-12T01:28:00Z) - FFPN: Fourier Feature Pyramid Network for Ultrasound Image Segmentation [15.011573950064424]
Ultrasound (US) image segmentation is an active research area that requires real-time and highly accurate analysis in many scenarios.
Existing approaches may suffer from inadequate contour encoding or fail to effectively leverage the encoded results.
In this paper, we introduce a novel Fourier-anchor-based DTS framework called Fourier Feature Pyramid Network (FFPN) to address the aforementioned issues.
arXiv Detail & Related papers (2023-08-26T07:28:09Z) - Adaptive Frequency Filters As Efficient Global Token Mixers [100.27957692579892]
We show that adaptive frequency filters can serve as efficient global token mixers.
We take AFF token mixers as primary neural operators to build a lightweight neural network, dubbed AFFNet.
arXiv Detail & Related papers (2023-07-26T07:42:28Z) - Neural Fourier Filter Bank [18.52741992605852]
We present a novel method to provide efficient and highly detailed reconstructions.
Inspired by wavelets, we learn a neural field that decomposes the signal both spatially and frequency-wise.
arXiv Detail & Related papers (2022-12-04T03:45:08Z) - QFF: Quantized Fourier Features for Neural Field Representations [28.82293263445964]
We show that using Quantized Fourier Features (QFF) can result in smaller model size, faster training, and better quality outputs for several applications.
QFF are easy to code, fast to compute, and serve as a simple drop-in addition to many neural field representations.
arXiv Detail & Related papers (2022-12-02T00:11:22Z) - Incremental Spatial and Spectral Learning of Neural Operators for Solving Large-Scale PDEs [86.35471039808023]
We introduce the Incremental Fourier Neural Operator (iFNO), which progressively increases the number of frequency modes used by the model.
We show that iFNO reduces total training time while maintaining or improving generalization performance across various datasets.
Our method achieves a 10% lower testing error using 20% fewer frequency modes than the existing Fourier Neural Operator, while also training 30% faster.
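The mechanism iFNO builds on, a spectral convolution that keeps only the lowest frequency modes (with the mode count grown progressively during training), can be sketched in 1-D with NumPy. This is an illustrative simplification under assumed shapes, not the iFNO codebase.

```python
import numpy as np

def spectral_conv_1d(x, weights, n_modes):
    # Multiply the signal's lowest n_modes Fourier modes by per-mode
    # weights and zero the rest; iFNO increases n_modes as training
    # progresses instead of fixing it up front.
    X = np.fft.rfft(x)                    # (len(x) // 2 + 1,) complex
    out = np.zeros_like(X)
    out[:n_modes] = X[:n_modes] * weights[:n_modes]
    return np.fft.irfft(out, n=x.size)

x = np.sin(2 * np.pi * np.arange(64) / 64)   # single low-frequency mode
w = np.ones(33, dtype=complex)               # identity per-mode weights
y = spectral_conv_1d(x, w, n_modes=4)
print(np.allclose(y, x, atol=1e-8))  # True: the sine's one mode survives
```

Truncating to few modes discards high-frequency content, which is why starting small and incrementally adding modes can cut training cost without hurting final accuracy.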
arXiv Detail & Related papers (2022-11-28T09:57:15Z) - Deep Frequency Filtering for Domain Generalization [55.66498461438285]
Deep Neural Networks (DNNs) have preferences for some frequency components in the learning process.
We propose Deep Frequency Filtering (DFF) for learning domain-generalizable features.
We show that applying our proposed DFF on a plain baseline outperforms the state-of-the-art methods on different domain generalization tasks.
arXiv Detail & Related papers (2022-03-23T05:19:06Z) - Functional Regularization for Reinforcement Learning via Learned Fourier Features [98.90474131452588]
We propose a simple architecture for deep reinforcement learning by embedding inputs into a learned Fourier basis.
We show that it improves the sample efficiency of both state-based and image-based RL.
arXiv Detail & Related papers (2021-12-06T18:59:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.