Fast Partial Fourier Transform
- URL: http://arxiv.org/abs/2008.12559v1
- Date: Fri, 28 Aug 2020 10:01:49 GMT
- Title: Fast Partial Fourier Transform
- Authors: Yong-chan Park, Jun-Gi Jang, U Kang
- Abstract summary: Fast Fourier transform (FFT) is a widely used algorithm that computes the discrete Fourier transform in many machine learning applications.
Despite its pervasive use, none of the known FFT algorithms lets the user fine-tune which part of the output is computed.
In this paper, we propose a fast Partial Fourier Transform (PFT), a careful modification of the Cooley-Tukey algorithm that enables one to specify an arbitrary consecutive range where the coefficients should be computed.
- Score: 28.36925669222461
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a time series vector, how can we efficiently compute a specified part
of Fourier coefficients? Fast Fourier transform (FFT) is a widely used
algorithm that computes the discrete Fourier transform in many machine learning
applications. Despite its pervasive use, none of the known FFT algorithms lets
the user fine-tune the output to their demand; that is, the
output size (the number of Fourier coefficients to be computed) is
algorithmically determined by the input size. This matters because not every
application using FFT requires the whole spectrum of the frequency domain,
resulting in an inefficiency due to extra computation. In this paper, we
propose a fast Partial Fourier Transform (PFT), a careful modification of the
Cooley-Tukey algorithm that enables one to specify an arbitrary consecutive
range where the coefficients should be computed. We derive the asymptotic time
complexity of PFT with respect to input and output sizes, as well as its
numerical accuracy. Experimental results show that our algorithm outperforms
the state-of-the-art FFT algorithms, with an order-of-magnitude speedup for
sufficiently small output sizes without sacrificing accuracy.
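To make the requested output concrete, the following minimal NumPy sketch computes only a consecutive range of DFT coefficients by direct summation; the function name partial_dft and its parameters k0 (first coefficient index) and m (number of coefficients) are illustrative choices, not the paper's interface, and PFT reaches the same output far faster through its modified Cooley-Tukey recursion.

```python
import numpy as np

def partial_dft(x, k0, m):
    """Naive reference: DFT coefficients X[k0], ..., X[k0 + m - 1] only.

    This O(n*m) direct sum only illustrates the *output* that PFT targets;
    PFT itself computes the same consecutive range through a modified
    Cooley-Tukey recursion (see the paper for the actual algorithm).
    """
    n = len(x)
    k = np.arange(k0, k0 + m)          # requested consecutive range of indices
    t = np.arange(n)
    # X[k] = sum_t x[t] * exp(-2j*pi*k*t/n), evaluated only for the m requested k's
    return np.exp(-2j * np.pi * np.outer(k, t) / n) @ x

# The partial result matches slicing a full FFT of the same input.
x = np.random.randn(1024)
k0, m = 100, 16
assert np.allclose(partial_dft(x, k0, m), np.fft.fft(x)[k0:k0 + m])
```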
Related papers
- Quantum Inverse Fast Fourier Transform [0.0]
An algorithm for Quantum Inverse Fast Fourier Transform (QIFFT) is developed to work for quantum data.
We include the complete formulation of the QIFFT algorithm from the classical model, together with a butterfly diagram.
arXiv Detail & Related papers (2024-09-12T12:27:47Z)
- Fourier expansion in variational quantum algorithms [1.4732811715354455]
We focus on the class of variational circuits, where constant gates are Clifford gates and parameterized gates are generated by Pauli operators.
We give a classical algorithm that computes the coefficients of all trigonometric monomials up to a degree $m$ in time bounded by $\mathcal{O}(N2^m)$; the general form of this expansion is sketched after this entry.
arXiv Detail & Related papers (2023-04-07T18:00:01Z)
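As generic background for the entry above (not the paper's notation), a circuit whose parameterized gates are Pauli rotations $e^{-i\theta_k P_k/2}$ yields an expectation value that is a finite trigonometric polynomial in the parameters; a degree-$m$ trigonometric monomial involves at most $m$ of the parameters:
$$f(\boldsymbol{\theta}) \;=\; \sum_{\boldsymbol{\omega}\,\in\,\{-1,0,1\}^{N}} c_{\boldsymbol{\omega}}\, e^{i\,\boldsymbol{\omega}\cdot\boldsymbol{\theta}}, \qquad \deg\!\left(e^{i\,\boldsymbol{\omega}\cdot\boldsymbol{\theta}}\right) = \lVert \boldsymbol{\omega} \rVert_{0}.$$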
- Fourier Continuation for Exact Derivative Computation in Physics-Informed Neural Operators [53.087564562565774]
PINO is a machine learning architecture that has shown promising empirical results for learning partial differential equations.
We present an architecture that leverages Fourier continuation (FC) to apply the exact gradient method to PINO for nonperiodic problems.
arXiv Detail & Related papers (2022-11-29T06:37:54Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
- Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
arXiv Detail & Related papers (2021-11-27T03:34:13Z)
- Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding [63.539333383965726]
We propose a novel way to accelerate attention calculation for Transformers with relative positional encoding (RPE).
Based upon the observation that relative positional encoding forms a Toeplitz matrix, we mathematically show that kernelized attention with RPE can be calculated efficiently using the Fast Fourier Transform (FFT); a generic sketch of the Toeplitz trick follows this entry.
arXiv Detail & Related papers (2021-06-23T17:51:26Z)
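As generic background for the entry above, a Toeplitz matrix-vector product can be computed in $\mathcal{O}(n \log n)$ by embedding the matrix in a circulant matrix and applying the FFT; the sketch below is a textbook illustration of that trick, not the authors' attention kernel, and the name toeplitz_matvec_fft is made up here.

```python
import numpy as np

def toeplitz_matvec_fft(c, r, x):
    """Toeplitz matrix-vector product in O(n log n) via the FFT.

    c: first column of the Toeplitz matrix (length n)
    r: first row of the Toeplitz matrix (length n, with r[0] == c[0])
    The matrix is embedded in a 2n x 2n circulant, whose action is
    diagonalized by the DFT (convolution theorem).
    """
    n = len(c)
    # First column of the circulant embedding: [c, 0, r[n-1], ..., r[1]].
    circ = np.concatenate([c, [0.0], r[1:][::-1]])
    x_pad = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(x_pad))
    return y[:n].real

# Check against an explicitly constructed Toeplitz matrix.
n = 8
c = np.random.randn(n)
r = np.concatenate([c[:1], np.random.randn(n - 1)])
x = np.random.randn(n)
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)] for i in range(n)])
assert np.allclose(toeplitz_matvec_fft(c, r, x), T @ x)
```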
- The Fourier Loss Function [0.0]
This paper introduces a new loss function induced by the Fourier-based Metric.
We prove that the Fourier loss function is twice differentiable, and we provide the explicit formula for both its gradient and its Hessian matrix.
We apply our loss function to a multi-class classification task using MNIST, Fashion-MNIST, and CIFAR10 datasets.
arXiv Detail & Related papers (2021-02-05T03:19:44Z)
- Learning Set Functions that are Sparse in Non-Orthogonal Fourier Bases [73.53227696624306]
We present a new family of algorithms for learning Fourier-sparse set functions.
In contrast to other work that focused on the Walsh-Hadamard transform, our novel algorithms operate with recently introduced non-orthogonal Fourier transforms.
We demonstrate effectiveness on several real-world applications.
arXiv Detail & Related papers (2020-10-01T14:31:59Z)
- Acceleration of Convolutional Neural Network Using FFT-Based Split Convolutions [11.031841470875571]
Convolutional neural networks (CNNs) have a large number of variables and hence suffer from high computational complexity in their implementation.
Recent studies on Fast Fourier Transform (FFT) based CNNs aim at simplifying the computations required for the FFT.
In this paper, a new method for CNN processing in the FFT domain is proposed, which is based on input splitting; a generic sketch of FFT-based convolution follows this entry.
arXiv Detail & Related papers (2020-03-27T20:16:57Z)
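As generic background for the entry above, the frequency-domain shortcut that FFT-based CNN methods exploit is the convolution theorem: zero-pad, multiply the spectra element-wise, and transform back. The sketch below is a plain full 2-D convolution, not the paper's input-splitting scheme, and the name conv2d_fft is made up here.

```python
import numpy as np

def conv2d_fft(image, kernel):
    """Full 2-D linear convolution computed in the frequency domain.

    Zero-padding both arrays to the full output size and multiplying
    their spectra element-wise replaces the sliding-window sum with
    FFTs: O(HW log HW) instead of O(HW k^2) for a k x k kernel.
    """
    h = image.shape[0] + kernel.shape[0] - 1
    w = image.shape[1] + kernel.shape[1] - 1
    spec = np.fft.rfft2(image, (h, w)) * np.fft.rfft2(kernel, (h, w))
    return np.fft.irfft2(spec, (h, w))

# Check against a direct scatter-add implementation of full convolution.
img = np.random.randn(6, 6)
ker = np.random.randn(3, 3)
direct = np.zeros((8, 8))
for i in range(3):
    for j in range(3):
        direct[i:i + 6, j:j + 6] += ker[i, j] * img
assert np.allclose(conv2d_fft(img, ker), direct)
```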
- DFTpy: An efficient and object-oriented platform for orbital-free DFT simulations [55.41644538483948]
In this work, we present DFTpy, an open-source software package implementing OFDFT, written entirely in Python 3.
We showcase the electronic structure of a million-atom system of aluminum metal, computed on a single CPU.
DFTpy is released under the MIT license.
arXiv Detail & Related papers (2020-02-07T19:07:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.