Photonic co-processors in HPC: using LightOn OPUs for Randomized
Numerical Linear Algebra
- URL: http://arxiv.org/abs/2104.14429v1
- Date: Thu, 29 Apr 2021 15:48:52 GMT
- Title: Photonic co-processors in HPC: using LightOn OPUs for Randomized
Numerical Linear Algebra
- Authors: Daniel Hesslow, Alessandro Cappelli, Igor Carron, Laurent Daudet,
Raphaël Lafargue, Kilian Müller, Ruben Ohana, Gustave Pariente, and
Iacopo Poli
- Abstract summary: We show that the randomization step for dimensionality reduction may itself become the computational bottleneck on traditional hardware, and that randomization can be significantly accelerated, at negligible precision loss, in a wide range of important RandNLA algorithms.
- Score: 53.13961454500934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Randomized Numerical Linear Algebra (RandNLA) is a powerful class of methods,
widely used in High Performance Computing (HPC). RandNLA provides approximate
solutions to linear algebra functions applied to large signals, at reduced
computational costs. However, the randomization step for dimensionality
reduction may itself become the computational bottleneck on traditional
hardware. Leveraging near constant-time linear random projections delivered by
LightOn Optical Processing Units we show that randomization can be
significantly accelerated, at negligible precision loss, in a wide range of
important RandNLA algorithms, such as RandSVD or trace estimators.
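The RandSVD algorithm mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration (function and variable names are mine, not from the paper): the product of the input matrix with the random Gaussian matrix `Omega` is the randomization step that a photonic co-processor such as a LightOn OPU would accelerate.

```python
import numpy as np

def rand_svd(A, k, oversample=10, rng=None):
    """Minimal randomized SVD sketch (Halko-Martinsson-Tropp style)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Random projection: the dimensionality-reduction step that can
    # become the bottleneck on traditional hardware.
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega                       # sketch of the range of A
    Q, _ = np.linalg.qr(Y)              # orthonormal basis for that range
    B = Q.T @ A                         # small (k + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

# Exactly rank-5 test matrix: the rank-5 approximation should be near-exact.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = rand_svd(A, k=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

On a CPU the sketch `A @ Omega` costs O(mn(k + p)) operations; the paper's point is that an optical processor performs this random projection in near constant time.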
Related papers
- Recent and Upcoming Developments in Randomized Numerical Linear Algebra for Machine Learning [49.0767291348921]
Randomized Numerical Linear Algebra (RandNLA) is an area which uses randomness to develop improved algorithms for ubiquitous matrix problems.
This article provides a self-contained overview of RandNLA, in light of these developments.
arXiv Detail & Related papers (2024-06-17T02:30:55Z)
- Random Fourier Signature Features [8.766411351797885]
Tensor algebras give rise to one of the most powerful measures of similarity for sequences of arbitrary length, called the signature kernel.
Previous algorithms to compute the signature kernel scale quadratically in terms of the length and the number of the sequences.
We develop a random Fourier feature-based acceleration of the signature kernel acting on the inherently non-Euclidean domain of sequences.
arXiv Detail & Related papers (2023-11-20T22:08:17Z)
- Randomized Polar Codes for Anytime Distributed Machine Learning [66.46612460837147]
We present a novel distributed computing framework that is robust to slow compute nodes, and is capable of both approximate and exact computation of linear operations.
We propose a sequential decoding algorithm designed to handle real valued data while maintaining low computational complexity for recovery.
We demonstrate the potential applications of this framework in various contexts, such as large-scale matrix multiplication and black-box optimization.
arXiv Detail & Related papers (2023-09-01T18:02:04Z)
- Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Quadratic speedup for simulating Gaussian boson sampling [0.9236074230806577]
We introduce an algorithm for the classical simulation of Gaussian boson sampling that is quadratically faster than previously known methods.
The complexity of the algorithm is exponential in the number of photon pairs detected, not the number of photons.
We show that an improved loop hafnian algorithm can be used to compute pure-state probabilities without the need of a supercomputer.
arXiv Detail & Related papers (2020-10-29T13:53:30Z)
- Determinantal Point Processes in Randomized Numerical Linear Algebra [80.27102478796613]
Randomized Numerical Linear Algebra (RandNLA) uses randomness to develop improved algorithms for matrix problems that arise in scientific computing, data science, machine learning, etc.
Recent work has uncovered deep and fruitful connections between DPPs and RandNLA which lead to new guarantees and improved algorithms.
arXiv Detail & Related papers (2020-05-07T00:39:52Z)
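The trace estimators named in the abstract, and the diagonal estimation coupled with conjugate gradients in the sparse Bayesian learning paper above, both reduce to averaging products of random probe vectors with matrix-vector products. A minimal NumPy sketch of the classic Hutchinson-style estimators (names here are illustrative, not taken from any of the cited papers):

```python
import numpy as np

def hutchinson_estimates(matvec, n, n_samples=5000, rng=None):
    """Stochastic estimators of trace(A) and diag(A) from matvecs alone.

    Because only matrix-vector products are needed, `matvec` may be an
    implicit operator (e.g. an inverse applied via conjugate gradients),
    so A never has to be formed explicitly.
    """
    rng = np.random.default_rng(rng)
    diag_est = np.zeros(n)
    trace_est = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        Av = matvec(v)
        diag_est += v * Av                    # E[v * (A v)] = diag(A)
        trace_est += v @ Av                   # E[v^T A v]  = trace(A)
    return trace_est / n_samples, diag_est / n_samples

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30))
tr, d = hutchinson_estimates(lambda v: A @ v, 30, n_samples=5000, rng=1)
```

Generating and applying the random probes is exactly the kind of random projection that the paper proposes to offload to an optical processor.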
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.