Transform Once: Efficient Operator Learning in Frequency Domain
- URL: http://arxiv.org/abs/2211.14453v1
- Date: Sat, 26 Nov 2022 01:56:05 GMT
- Title: Transform Once: Efficient Operator Learning in Frequency Domain
- Authors: Michael Poli, Stefano Massaroli, Federico Berto, Jinkyoo Park, Tri
Dao, Christopher Ré, Stefano Ermon
- Abstract summary: We study deep neural networks designed to harness the structure in the frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
- Score: 69.74509540521397
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spectral analysis provides one of the most effective paradigms for
information-preserving dimensionality reduction, as simple descriptions of
naturally occurring signals are often obtained via few terms of periodic basis
functions. In this work, we study deep neural networks designed to harness the
structure in the frequency domain for efficient learning of long-range correlations
in space or time: frequency-domain models (FDMs). Existing FDMs are based on
complex-valued transforms, i.e., Fourier transforms (FT), and layers that
compute on the spectrum and the input data separately. This design introduces
considerable computational overhead: each layer requires a forward and an
inverse FT.
Instead, this work introduces a blueprint for frequency domain learning through
a single transform: transform once (T1). To enable efficient, direct learning
in the frequency domain we derive a variance-preserving weight initialization
scheme and investigate methods for frequency selection in reduced-order FDMs.
Our results noticeably streamline the design process of FDMs, pruning
redundant transforms and yielding speedups of 3x to 10x that increase with
data resolution and model size. We perform extensive experiments on learning the
solution operator of spatio-temporal dynamics, including incompressible
Navier-Stokes, turbulent flows around airfoils and high-resolution video of
smoke. T1 models improve on the test performance of FDMs while requiring
significantly less computation (5 hours instead of 32 for our large-scale
experiment), with over 20% reduction in average predictive error across tasks.
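To make "transform once" concrete, below is a minimal NumPy sketch of an FDM that applies the forward FFT once at the input and the inverse FFT once at the output, learning directly on a reduced set of frequency modes. It assumes a 1D real signal, a plain ReLU MLP acting on the real and imaginary parts of the retained modes, and a generic 1/sqrt(fan_in) Gaussian initialization as a stand-in for the paper's variance-preserving scheme; the names vp_init, t1_forward, and n_modes are illustrative, not the authors' API.

```python
import numpy as np

rng = np.random.default_rng(0)

def vp_init(fan_in, fan_out):
    # Gaussian init with std 1/sqrt(fan_in) keeps activation variance roughly
    # constant through a linear layer; a generic stand-in for the paper's
    # variance-preserving scheme, which is derived for spectral inputs.
    return rng.normal(0.0, fan_in ** -0.5, size=(fan_in, fan_out))

def t1_forward(x, layers, n_modes):
    # x: (batch, n) real signal; n_modes: lowest frequencies kept
    # (the reduced-order FDM's frequency selection).
    n = x.shape[-1]
    X = np.fft.rfft(x, axis=-1)[..., :n_modes]      # forward transform, once
    h = np.concatenate([X.real, X.imag], axis=-1)   # 2*n_modes real features
    for W in layers[:-1]:
        h = np.maximum(h @ W, 0.0)                  # frequency-domain ReLU MLP
    h = h @ layers[-1]
    Y = h[..., :n_modes] + 1j * h[..., n_modes:]    # reassemble complex modes
    full = np.zeros(x.shape[:-1] + (n // 2 + 1,), dtype=complex)
    full[..., :n_modes] = Y                         # zero-pad discarded modes
    return np.fft.irfft(full, n=n, axis=-1)         # inverse transform, once

# Usage: a batch of 4 signals of length 128, keeping the 16 lowest modes.
n_modes, width = 16, 64
layers = [vp_init(2 * n_modes, width),
          vp_init(width, width),
          vp_init(width, 2 * n_modes)]
x = rng.standard_normal((4, 128))
print(t1_forward(x, layers, n_modes).shape)  # (4, 128)
```

By contrast, existing FDMs wrap each layer in a forward/inverse FT pair, so the transform count grows with depth; pruning those per-layer transforms is the source of the reported 3x to 10x speedups.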
Related papers
- Adaptive Random Fourier Features Training Stabilized By Resampling With Applications in Image Regression [0.8947831206263182]
We present an enhanced adaptive random Fourier features (ARFF) training algorithm for shallow neural networks.
This method uses a particle-filter-style resampling technique to stabilize the training process and reduce sensitivity to parameter choices.
arXiv Detail & Related papers (2024-10-08T22:08:03Z)
- Neural Fourier Modelling: A Highly Compact Approach to Time-Series Analysis [9.969451740838418]
We introduce Neural Fourier Modelling (NFM), a compact yet powerful solution for time-series analysis.
NFM is grounded in two key properties of the Fourier transform (FT): (i) the ability to model finite-length time series as functions in the Fourier domain, and (ii) the capacity for data manipulation within the Fourier domain.
NFM achieves state-of-the-art performance on a wide range of tasks, including challenging time-series scenarios with previously unseen sampling rates at test time.
arXiv Detail & Related papers (2024-10-07T02:39:55Z)
- Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms rely on frequency-level artifacts introduced during up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
arXiv Detail & Related papers (2024-03-12T01:28:00Z)
- Incremental Spatial and Spectral Learning of Neural Operators for Solving Large-Scale PDEs [86.35471039808023]
We introduce the Incremental Fourier Neural Operator (iFNO), which progressively increases the number of frequency modes used by the model.
We show that iFNO reduces total training time while maintaining or improving generalization performance across various datasets.
Our method achieves 10% lower testing error using 20% fewer frequency modes than the existing Fourier Neural Operator, while also training 30% faster.
arXiv Detail & Related papers (2022-11-28T09:57:15Z)
- Solving Seismic Wave Equations on Variable Velocity Models with Fourier Neural Operator [3.2307366446033945]
We propose a new framework, the paralleled Fourier neural operator (PFNO), for efficiently training the FNO-based solver.
Numerical experiments demonstrate the high accuracy of both FNO and PFNO with complicated velocity models.
PFNO admits higher computational efficiency on large-scale testing datasets, compared with the traditional finite-difference method.
arXiv Detail & Related papers (2022-09-25T22:25:57Z)
- FAMLP: A Frequency-Aware MLP-Like Architecture For Domain Generalization [73.41395947275473]
We propose a novel frequency-aware architecture, in which the domain-specific features are filtered out in the transformed frequency domain.
Experiments on three benchmarks demonstrate significant performance gains, outperforming state-of-the-art methods by margins of 3%, 4%, and 9%, respectively.
arXiv Detail & Related papers (2022-03-24T07:26:29Z)
- Deep Frequency Filtering for Domain Generalization [55.66498461438285]
Deep Neural Networks (DNNs) preferentially learn some frequency components over others during training.
We propose Deep Frequency Filtering (DFF) for learning domain-generalizable features.
We show that applying our proposed DFF on a plain baseline outperforms the state-of-the-art methods on different domain generalization tasks.
arXiv Detail & Related papers (2022-03-23T05:19:06Z)
- Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs (a minimal sketch follows this list).
Experiments on several benchmark datasets and neural architectures show that binary networks learned with this method achieve state-of-the-art accuracy.
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
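As a small illustration of the last entry above, the sketch below differentiates a truncated Fourier sine series of the square wave to obtain a smooth surrogate gradient for sign(x). The series expansion itself is standard; n_terms, period, and the name sign_grad_fourier are illustrative choices, not the paper's exact estimator.

```python
import numpy as np

def sign_grad_fourier(x, n_terms=8, period=2 * np.pi):
    # On (-period/2, period/2), sign(x) expands as the square-wave series
    #   (4/pi) * sum_k sin((2k+1) w x) / (2k+1),   w = 2*pi/period.
    # Differentiating the truncated series term by term yields a smooth
    # surrogate for the true gradient (which is zero almost everywhere):
    #   (4 w / pi) * sum_k cos((2k+1) w x)
    w = 2 * np.pi / period
    harmonics = 2 * np.arange(n_terms) + 1          # odd harmonics 1, 3, 5, ...
    return (4 * w / np.pi) * np.cos(np.outer(x, harmonics) * w).sum(axis=-1)

# The surrogate peaks near x = 0, where sign(x) changes, and decays away from it.
x = np.array([-0.5, -0.1, 0.0, 0.1, 0.5])
print(sign_grad_fourier(x))
```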