Efficiently Computing Sparse Fourier Transforms of $q$-ary Functions
- URL: http://arxiv.org/abs/2301.06200v1
- Date: Sun, 15 Jan 2023 22:04:53 GMT
- Title: Efficiently Computing Sparse Fourier Transforms of $q$-ary Functions
- Authors: Yigit Efe Erginbas, Justin Singh Kang, Amirali Aghazadeh, Kannan
Ramchandran
- Abstract summary: We develop a sparse Fourier transform algorithm specifically for $q$-ary functions of length $n$ sequences.
We show that for fixed $q$, a robust version of $q$-SFT has a sample complexity of $O(Sn^2)$ and a computational complexity of $O(Sn^3)$ with the same guarantees.
- Score: 12.522202946750157
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fourier transformations of pseudo-Boolean functions are popular tools for
analyzing functions of binary sequences. Real-world functions often have
structures that manifest in a sparse Fourier transform, and previous works have
shown that under the assumption of sparsity the transform can be computed
efficiently. But what if we want to compute the Fourier transform of functions
defined over a $q$-ary alphabet? These types of functions arise naturally in
many areas including biology. A typical workaround is to encode the $q$-ary
sequence in binary; however, this approach is computationally inefficient and
fundamentally incompatible with the existing sparse Fourier transform
techniques. Herein, we develop a sparse Fourier transform algorithm
specifically for $q$-ary functions of length $n$ sequences, dubbed $q$-SFT,
which provably computes an $S$-sparse transform with vanishing error as $q^n
\rightarrow \infty$ in $O(Sn)$ function evaluations and $O(S n^2 \log q)$
computations, where $S = q^{n\delta}$ for some $\delta < 1$. Under certain
assumptions, we show that for fixed $q$, a robust version of $q$-SFT has a
sample complexity of $O(Sn^2)$ and a computational complexity of $O(Sn^3)$ with
the same asymptotic guarantees. We present numerical simulations on synthetic
and real-world RNA data, demonstrating the scalability of $q$-SFT to massively
high dimensional $q$-ary functions.
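For reference, the ordinary (dense) Fourier transform over $\mathbb{Z}_q^n$ that $q$-SFT exploits sparsity in can be written as an $n$-fold Kronecker product of $q$-point DFT matrices. The sketch below is a brute-force $O(q^{2n})$ baseline usable only for tiny $q$ and $n$; it is not the paper's algorithm, and the function names and unitary normalization are illustrative choices.

```python
import numpy as np

def qary_fourier(f, q, n):
    """Dense Fourier transform of a q-ary function over Z_q^n.

    Brute-force O(q^{2n}) reference for tiny q and n -- NOT the
    sparse q-SFT algorithm, which needs only O(Sn) samples.
    """
    # q-point DFT matrix (character table of Z_q), unitary scaling.
    omega = np.exp(-2j * np.pi / q)
    F1 = omega ** np.outer(np.arange(q), np.arange(q)) / np.sqrt(q)
    # The transform over Z_q^n is the n-fold Kronecker product.
    F = F1
    for _ in range(n - 1):
        F = np.kron(F, F1)
    return F @ f

# Example: a delta function has a flat (maximally dense) spectrum.
q, n = 3, 2
f = np.zeros(q ** n, dtype=complex)
f[0] = 1.0
fhat = qary_fourier(f, q, n)
```

With the unitary scaling, a delta at the origin transforms to the constant $q^{-n/2}$ spectrum, which is the worst case for sparsity; $q$-SFT targets the opposite regime where only $S = q^{n\delta}$ coefficients are nonzero.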
Related papers
- Approximation Rates for Shallow ReLU$^k$ Neural Networks on Sobolev Spaces via the Radon Transform [4.096453902709292]
We consider the problem of how efficiently shallow neural networks with the ReLU$^k$ activation function can approximate functions from Sobolev spaces.
We provide a simple proof of nearly optimal approximation rates in a variety of cases, including when $q \leq p$, $p \geq 2$, and $s \leq k + (d+1)/2$.
arXiv Detail & Related papers (2024-08-20T16:43:45Z) - Fast Fourier transforms and fast Wigner and Weyl functions in large quantum systems [0.0]
Two methods for fast Fourier transforms are used in a quantum context.
The first method is for systems with dimension of the Hilbert space $D=d^n$ with $d$ an odd integer.
The second method is also used for the fast calculation of Wigner and Weyl functions, in quantum systems with large finite dimension of the Hilbert space.
arXiv Detail & Related papers (2024-05-08T15:54:35Z) - Learning to Understand: Identifying Interactions via the Möbius Transform [18.987216240237483]
We use the Möbius transform to find interpretable representations of learned functions.
A robust version of this algorithm withstands noise while maintaining the same complexity.
In several examples, we observe that representations generated via the Möbius transform are up to twice as faithful to the original function.
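For context, the classical (noiseless) Möbius transform over the subset lattice can be computed in $O(n 2^n)$ by a butterfly-style recursion. The sketch below is that textbook version under a bitmask encoding of subsets; it is not the paper's noise-robust algorithm.

```python
import numpy as np

def mobius_transform(f):
    """Fast Moebius transform of a set function in O(n * 2^n).

    f has length 2^n; f[S] is the value on the subset encoded by
    bitmask S. Returns coefficients a satisfying
        f[S] = sum over T subseteq S of a[T],
    so a[T] measures the interaction contributed by subset T.
    Textbook sketch; the paper's robust variant also handles noise.
    """
    a = np.array(f, dtype=float)
    n = a.size.bit_length() - 1
    for i in range(n):
        bit = 1 << i
        for s in range(a.size):
            if s & bit:
                a[s] -= a[s ^ bit]
    return a

# Example: f(S) = |S| decomposes into first-order terms only.
f = [bin(s).count("1") for s in range(8)]
coeffs = mobius_transform(f)
```

On this example the transform recovers a coefficient of 1 on each singleton and 0 everywhere else, i.e. it identifies the function as purely first-order with no interactions.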
arXiv Detail & Related papers (2024-02-04T22:47:34Z) - Fast $(1+\varepsilon)$-Approximation Algorithms for Binary Matrix
Factorization [54.29685789885059]
We introduce efficient $(1+\varepsilon)$-approximation algorithms for the binary matrix factorization (BMF) problem.
The goal is to approximate $\mathbf{A}$ as a product of low-rank factors.
Our techniques generalize to other common variants of the BMF problem.
arXiv Detail & Related papers (2023-06-02T18:55:27Z) - Approximate Function Evaluation via Multi-Armed Bandits [51.146684847667125]
We study the problem of estimating the value of a known smooth function $f$ at an unknown point $\boldsymbol{\mu} \in \mathbb{R}^n$, where each component $\mu_i$ can be sampled via a noisy oracle.
We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate, and with probability at least $1-\delta$ returns an $\epsilon$-accurate estimate of $f(\boldsymbol{\mu})$.
arXiv Detail & Related papers (2022-03-18T18:50:52Z) - Random matrices in service of ML footprint: ternary random features with
no performance loss [55.30329197651178]
We show that the eigenspectrum of $\mathbf{K}$ is independent of the distribution of the i.i.d. entries of $\mathbf{w}$.
We propose a novel random technique, called Ternary Random Feature (TRF).
The computation of the proposed random features requires no multiplication and a factor of $b$ less bits for storage compared to classical random features.
arXiv Detail & Related papers (2021-10-05T09:33:49Z) - Learning Set Functions that are Sparse in Non-Orthogonal Fourier Bases [73.53227696624306]
We present a new family of algorithms for learning Fourier-sparse set functions.
In contrast to other work that focused on the Walsh-Hadamard transform, our novel algorithms operate with recently introduced non-orthogonal Fourier transforms.
We demonstrate effectiveness on several real-world applications.
arXiv Detail & Related papers (2020-10-01T14:31:59Z) - Convergence of Sparse Variational Inference in Gaussian Processes
Regression [29.636483122130027]
We show that a method with an overall computational cost of $\mathcal{O}((\log N)^{2D}(\log\log N)^2)$ can be used to perform inference.
arXiv Detail & Related papers (2020-08-01T19:23:34Z) - Sample Efficient Reinforcement Learning via Low-Rank Matrix Estimation [30.137884459159107]
We consider the question of learning $Q$-function in a sample efficient manner for reinforcement learning with continuous state and action spaces.
We develop a simple, iterative learning algorithm that finds an $\epsilon$-optimal $Q$-function with sample complexity of $\widetilde{O}\left(\frac{1}{\epsilon^{\max(d_1, d_2)+2}}\right)$ when the optimal $Q$-function has low rank $r$ and the discount factor is below a certain threshold.
arXiv Detail & Related papers (2020-06-11T00:55:35Z) - A Randomized Algorithm to Reduce the Support of Discrete Measures [79.55586575988292]
Given a discrete probability measure supported on $N$ atoms and a set of $n$ real-valued functions, there exists a probability measure supported on a subset of at most $n+1$ of the original $N$ atoms with the same expectations for all $n$ functions.
We give a simple geometric characterization of barycenters via negative cones and derive a randomized algorithm that computes this new measure by "greedy geometric sampling".
We then study its properties, and benchmark it on synthetic and real-world data to show that it can be very beneficial in the $N \gg n$ regime.
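The existence claim above is a Carathéodory-type recombination result. The sketch below is a generic null-space recombination that eliminates one atom per step while preserving the $n$ expectations and the total mass; it is a hypothetical illustration, not the paper's "greedy geometric sampling" algorithm.

```python
import numpy as np

def reduce_support(weights, Phi):
    """Reduce a discrete measure to at most n+1 atoms while
    preserving the expectations of n functions.

    Phi is an (N, n) matrix of function values at the N atoms.
    Each iteration moves the weights along a null direction of
    the moment constraints until one weight hits zero.
    Generic Caratheodory-type sketch, not the paper's algorithm.
    """
    w = np.array(weights, dtype=float)
    n = Phi.shape[1]
    active = np.flatnonzero(w > 0)
    while active.size > n + 1:
        # Constraint matrix: n function expectations + total mass.
        A = np.vstack([Phi[active].T, np.ones(active.size)])
        # With more atoms than constraints, the last right singular
        # vector is a null direction: A @ d = 0.
        _, _, Vt = np.linalg.svd(A)
        d = Vt[-1]
        # Largest step keeping all weights nonnegative; it zeroes
        # the weight attaining the minimum ratio.
        pos = d > 1e-12
        t = np.min(w[active][pos] / d[pos])
        w[active] -= t * d
        w[w < 1e-12] = 0.0
        active = np.flatnonzero(w > 0)
    return w

# Example: 30 atoms, 3 test functions -> at most 4 atoms survive.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 3))
w0 = rng.random(30)
w0 /= w0.sum()
w = reduce_support(w0, Phi)
```

Because each null direction sums to zero (the all-ones constraint row), total mass is conserved, and because it is orthogonal to the columns of `Phi`, the $n$ expectations are unchanged at every step.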
arXiv Detail & Related papers (2020-06-02T16:38:36Z) - On the Modularity of Hypernetworks [103.1147622394852]
We show that for a structured target function, the overall number of trainable parameters in a hypernetwork is smaller by orders of magnitude than the number of trainable parameters of a standard neural network and an embedding method.
arXiv Detail & Related papers (2020-02-23T22:51:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.