Quantization Algorithms for Random Fourier Features
- URL: http://arxiv.org/abs/2102.13079v1
- Date: Thu, 25 Feb 2021 18:51:39 GMT
- Title: Quantization Algorithms for Random Fourier Features
- Authors: Xiaoyun Li and Ping Li
- Abstract summary: The method of random Fourier features (RFF) has also become popular for approximating the Gaussian kernel.
RFF applies a specific nonlinear transformation on the projected data from random projections.
In this paper, we focus on developing quantization algorithms for RFF.
- Score: 25.356048456005023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The method of random projection (RP) is the standard technique in machine
learning and many other areas, for dimensionality reduction, approximate near
neighbor search, compressed sensing, etc. Basically, RP provides a simple and
effective scheme for approximating pairwise inner products and Euclidean
distances in massive data. Closely related to RP, the method of random Fourier
features (RFF) has also become popular for approximating the Gaussian kernel.
RFF applies a specific nonlinear transformation on the projected data from
random projections. In practice, using the (nonlinear) Gaussian kernel often
leads to better performance than the linear kernel (inner product), partly due
to the tuning parameter $(\gamma)$ introduced in the Gaussian kernel. Recently,
there has been a surge of interest in studying properties of RFF.
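As a concrete illustration of the RFF construction just described, the sketch below implements the standard random Fourier feature map for the Gaussian kernel $\exp(-\gamma \|x-y\|^2)$. It is a minimal, generic sketch (the feature count, seed, and variable names are assumptions made for illustration), not the paper's specific implementation.

```python
import numpy as np

def rff_features(X, gamma, m, seed=0):
    """Standard random Fourier feature map approximating the Gaussian kernel
    k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Bochner's theorem: frequencies are drawn from the kernel's spectral
    # density, which for this parameterization is N(0, 2 * gamma * I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, m))
    # A uniform phase turns each feature into a single cosine.
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    return np.sqrt(2.0 / m) * np.cos(X @ W + b)

# Tiny sanity check: feature inner products roughly match the exact kernel.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 10))
gamma = 0.5
Z = rff_features(X, gamma, m=8192)
approx = Z @ Z.T
exact = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(approx - exact).max())  # small Monte Carlo error, shrinks with m
```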
After random projections, quantization is an important step for efficient
data storage, computation, and transmission. Quantization for RP has also been
extensively studied in the literature. In this paper, we focus on developing
quantization algorithms for RFF. The task is in a sense challenging due to the
tuning parameter $\gamma$ in the Gaussian kernel. For example, the quantizer
and the quantized data might be tied to each specific tuning parameter
$\gamma$. Our contribution begins with an interesting discovery, that the
marginal distribution of RFF is actually free of the Gaussian kernel parameter
$\gamma$. This small finding significantly simplifies the design of the
Lloyd-Max (LM) quantization scheme for RFF in that there would be only one LM
quantizer for RFF (regardless of $\gamma$). We also develop a variant named
LM$^2$-RFF quantizer, which in certain cases is more accurate. Experiments
confirm that the proposed quantization schemes perform well.
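The $\gamma$-free marginal and the resulting single Lloyd-Max codebook can be illustrated empirically. The sketch below fits a scalar Lloyd-Max quantizer (equivalently, one-dimensional k-means) to samples of the RFF marginal and checks that the marginal statistics do not change with $\gamma$. This is a minimal sketch under the standard RFF parameterization used above, not the paper's LM-RFF or LM$^2$-RFF algorithm; the bit budget and sample sizes are arbitrary assumptions.

```python
import numpy as np

def lloyd_max_1d(samples, bits=3, iters=50):
    """Fit a scalar Lloyd-Max quantizer (1-D k-means) to empirical samples.
    Returns the codebook levels and the decision thresholds between them."""
    k = 2 ** bits
    # Initialize levels at evenly spaced quantiles of the empirical distribution.
    levels = np.quantile(samples, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        thresholds = (levels[:-1] + levels[1:]) / 2.0
        cells = np.digitize(samples, thresholds)
        # Centroid step: each level moves to the conditional mean of its cell.
        levels = np.array([samples[cells == j].mean() if np.any(cells == j)
                           else levels[j] for j in range(k)])
    return levels, (levels[:-1] + levels[1:]) / 2.0

rng = np.random.default_rng(0)

# The marginal of a single RFF coordinate, sqrt(2) * cos(w.x + b) with a
# uniform phase b, shows the same summary statistics for very different gammas.
X = rng.normal(size=(100_000, 10))
for gamma in (0.1, 10.0):
    w = rng.normal(scale=np.sqrt(2.0 * gamma), size=10)
    b = rng.uniform(0.0, 2.0 * np.pi, size=100_000)
    z = np.sqrt(2.0) * np.cos(X @ w + b)
    print(f"gamma={gamma}: mean={z.mean():+.3f}, std={z.std():.3f}")

# Fit one Lloyd-Max codebook on gamma-free samples (uniform phase alone)
# and reuse it for features generated with any gamma.
ref = np.sqrt(2.0) * np.cos(rng.uniform(0.0, 2.0 * np.pi, size=100_000))
levels, thresholds = lloyd_max_1d(ref, bits=3)
quantized = levels[np.digitize(z, thresholds)]
print("codebook:", np.round(levels, 3))
```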
Related papers
- Gaussian Processes Sampling with Sparse Grids under Additive Schwarz Preconditioner [6.408773096179187]
We propose a scalable algorithm for sampling random realizations of the prior and posterior of GP models.
The proposed algorithm leverages inducing points approximation with sparse grids, as well as additive Schwarz preconditioners.
arXiv Detail & Related papers (2024-08-01T00:19:36Z) - Solving High Frequency and Multi-Scale PDEs with Gaussian Processes [18.190228010565367]
PINNs often struggle to solve high-frequency and multi-scale PDEs.
We resort to the Gaussian process (GP) framework to solve this problem.
We use Kronecker product properties and multilinear algebra to promote computational efficiency and scalability.
arXiv Detail & Related papers (2023-11-08T05:26:58Z) - Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z) - Simplex Random Features [53.97976744884616]
We present Simplex Random Features (SimRFs), a new random feature (RF) mechanism for unbiased approximation of the softmax and Gaussian kernels.
We prove that SimRFs provide the smallest possible mean square error (MSE) on unbiased estimates of these kernels.
We show consistent gains provided by SimRFs in settings including pointwise kernel estimation, nonparametric classification and scalable Transformers.
arXiv Detail & Related papers (2023-01-31T18:53:39Z) - Neural Inference of Gaussian Processes for Time Series Data of Quasars [72.79083473275742]
We introduce a new model that enables a complete description of quasar spectra.
We also introduce a new method for inferring Gaussian process parameters, which we call $\textit{Neural Inference}$.
The combination of the CDRW model and Neural Inference significantly outperforms the baseline DRW and MLE.
arXiv Detail & Related papers (2022-11-17T13:01:26Z) - Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
arXiv Detail & Related papers (2022-10-21T15:56:13Z) - Gaussian Process Bandit Optimization with Few Batches [49.896920704012395]
We introduce a batch algorithm inspired by finite-arm bandit algorithms.
We show that it achieves the cumulative regret upper bound $O^{\ast}(\sqrt{T\gamma_T})$ using $O(\log\log T)$ batches within time horizon $T$.
In addition, we propose a modified version of our algorithm, and characterize how the regret is impacted by the number of batches.
arXiv Detail & Related papers (2021-10-15T00:54:04Z) - Sigma-Delta and Distributed Noise-Shaping Quantization Methods for Random Fourier Features [73.25551965751603]
We prove that our quantized RFFs allow a high accuracy approximation of the underlying kernels.
We show that the quantized RFFs can be further compressed, yielding an excellent trade-off between memory use and accuracy.
By testing our methods on several machine learning tasks, we empirically show that they compare favorably to other state-of-the-art quantization methods in this context; a generic first-order Sigma-Delta sketch appears after this list.
arXiv Detail & Related papers (2021-06-04T17:24:47Z) - Gauss-Legendre Features for Gaussian Process Regression [7.37712470421917]
We present a Gauss-Legendre quadrature based approach for scaling up Gaussian process regression via a low rank approximation of the kernel matrix.
Our method is very much inspired by the well-known random Fourier features approach, which also builds low-rank approximations via numerical integration.
arXiv Detail & Related papers (2021-01-04T18:09:25Z) - Fourier Sparse Leverage Scores and Approximate Kernel Learning [29.104055676527913]
We prove new explicit upper bounds on the leverage scores of Fourier functions under both the Gaussian and Laplace measures.
Our bounds are motivated by two important applications in machine learning.
arXiv Detail & Related papers (2020-06-12T17:25:39Z) - Scaling up Kernel Ridge Regression via Locality Sensitive Hashing [6.704115928005158]
We introduce a weighted version of random binning features and show that the corresponding kernel function generates smooth Gaussian processes.
We show that our weighted random binning features provide a spectral approximation to the corresponding kernel matrix, leading to efficient algorithms for kernel ridge regression.
arXiv Detail & Related papers (2020-03-21T21:41:16Z)
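For the Sigma-Delta entry above, the following is a minimal, generic sketch of first-order Sigma-Delta (noise-shaping) quantization of a bounded feature sequence onto a one-bit alphabet. It is the textbook recursion, not the cited paper's algorithm, and the sequence length and alphabet are illustrative assumptions.

```python
import numpy as np

def sigma_delta_1bit(y):
    """First-order Sigma-Delta quantization of a sequence y with values in
    [-1, 1] onto the one-bit alphabet {-1, +1}, carrying the quantization
    error forward in a state variable u."""
    q = np.empty_like(y)
    u = 0.0
    for i, yi in enumerate(y):
        q[i] = 1.0 if u + yi >= 0.0 else -1.0
        u = u + yi - q[i]   # noise-shaping recursion: the state absorbs the error
    return q

# Example on a cosine feature vector (values in [-1, 1]).
rng = np.random.default_rng(0)
y = np.cos(rng.uniform(0.0, 2.0 * np.pi, size=64))
q = sigma_delta_1bit(y)
print(y.mean(), q.mean())  # running averages stay close because the error is shaped
```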