A Learning-Based Approach to Approximate Coded Computation
- URL: http://arxiv.org/abs/2205.09818v1
- Date: Thu, 19 May 2022 19:43:53 GMT
- Title: A Learning-Based Approach to Approximate Coded Computation
- Authors: Navneet Agrawal, Yuqin Qiu, Matthias Frey, Igor Bjelakovic, Setareh
Maghsudi, Slawomir Stanczak, Jingge Zhu
- Abstract summary: We propose AICC, an AI-aided learning approach that is inspired by LCC but also uses deep neural networks (DNNs).
It is appropriate for coded computation of more general functions.
- Score: 22.886991943938703
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lagrange coded computation (LCC) is essential for solving problems
involving matrix polynomials in a coded distributed fashion; however, it can
only solve problems that are representable as matrix polynomials. In this
paper, we propose AICC, an AI-aided learning approach that is inspired by LCC
but also uses deep neural networks (DNNs), making it suitable for the coded
computation of more general functions. Numerical simulations demonstrate the
suitability of the proposed approach for the coded computation of several
matrix functions that are often used in digital signal processing.
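For readers unfamiliar with the LCC baseline that AICC builds on, here is a minimal NumPy sketch of plain Lagrange coded computation for a degree-2 matrix polynomial. The choice of f, the interpolation points, and the single tolerated straggler are illustrative assumptions, not the paper's setup; the sketch covers only the exact-polynomial case that AICC generalizes.

```python
import numpy as np

def lagrange_weights(nodes, x):
    """Lagrange basis values l_j(x) for the given interpolation nodes."""
    nodes = np.asarray(nodes, dtype=float)
    w = np.ones(len(nodes))
    for j in range(len(nodes)):
        for k in range(len(nodes)):
            if k != j:
                w[j] *= (x - nodes[k]) / (nodes[j] - nodes[k])
    return w

rng = np.random.default_rng(0)
K, n = 3, 4                                  # K input matrices, each n x n
X = [rng.standard_normal((n, n)) for _ in range(K)]
f = lambda A: A @ A                          # target matrix polynomial (degree 2)
deg_f = 2

beta = np.arange(K, dtype=float)             # encoding points (one per input)
recovery = deg_f * (K - 1) + 1               # worker results needed to decode
N = recovery + 1                             # total workers: one straggler tolerated
alpha = np.arange(K, K + N, dtype=float)     # evaluation points (one per worker)

# Encode: u(z) is the matrix polynomial with u(beta_j) = X_j;
# worker i receives the coded share u(alpha_i).
shares = [sum(w * Xj for w, Xj in zip(lagrange_weights(beta, a), X))
          for a in alpha]

# Workers apply f to their shares; pretend the last worker straggles.
results = [f(s) for s in shares[:recovery]]
used = alpha[:recovery]

# Decode: f(u(z)) has degree deg_f * (K - 1), so `recovery` evaluations
# determine it; interpolate back to each beta_j to recover f(X_j).
for j in range(K):
    w = lagrange_weights(used, beta[j])
    fXj = sum(wi * Ri for wi, Ri in zip(w, results))
    assert np.allclose(fXj, f(X[j]))
print(f"recovered all f(X_j) from {recovery} of {N} workers")
```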
Related papers
- Block encoding of sparse matrices with a periodic diagonal structure [67.45502291821956]
We provide an explicit quantum circuit for block encoding a sparse matrix with a periodic diagonal structure.
Various applications for the presented methodology are discussed in the context of solving differential problems.
arXiv Detail & Related papers (2026-02-11T07:24:33Z)
- Practical block encodings of matrix polynomials that can also be trivially controlled [0.01918316416632161]
We present practical and explicit block-encoding circuits implementing polynomial transformations of a target matrix.
With standard approaches, block-encoding a degree-$d$ matrix polynomial requires a circuit depth scaling as $d$ times the depth of block-encoding the original matrix alone.
We show that this additional overhead can be reduced to scale linearly in $d$, with no dependence on system size or the cost of block-encoding the original matrix.
arXiv Detail & Related papers (2026-01-26T18:37:44Z)
- MatRL: Provably Generalizable Iterative Algorithm Discovery via Monte-Carlo Tree Search [37.24058519921229]
MatRL is a reinforcement learning framework that automatically discovers iterative algorithms for computing matrix functions.
We show that MatRL produces algorithms that outperform various baselines in the literature.
arXiv Detail & Related papers (2025-07-04T22:57:33Z)
- Efficient Quantum Access Model for Sparse Structured Matrices using Linear Combination of Things [0.6138671548064355]
We present a novel framework for Linear Combination of Unitaries (LCU)-style decomposition tailored to structured sparse matrices.
LCU is a foundational primitive in both variational and fault-tolerant quantum algorithms.
We introduce the Sigma basis, a compact set of simple, non-unitary operators that can better capture sparsity and structure.
arXiv Detail & Related papers (2025-07-04T17:05:07Z)
- Efficient Variational Quantum Linear Solver for Structured Sparse Matrices [0.6138671548064355]
We show that by using an alternate basis one can better exploit the sparsity and underlying structure of the matrix.
We employ the concept of unitary completion to design efficient quantum circuits for computing the global/local VQLS cost functions.
arXiv Detail & Related papers (2024-04-25T19:22:05Z)
- Polynomial Precision Dependence Solutions to Alignment Research Center Matrix Completion Problems [3.796436257221663]
The motivation for these problems is to enable efficient dependence on the precision parameter $\varepsilon$.
Our solutions involve reframing the matrix completion problems as a semidefinite program (SDP).
arXiv Detail & Related papers (2024-01-08T16:25:45Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the underlying covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra [62.37017125812101]
We propose a simple but general framework for large-scale linear algebra problems in machine learning, named CoLA.
By combining a linear operator abstraction with compositional dispatch rules, CoLA automatically constructs memory- and runtime-efficient numerical algorithms (a minimal sketch of this pattern follows the list below).
We showcase its efficacy across a broad range of applications, including partial differential equations, Gaussian processes, equivariant model construction, and unsupervised learning.
arXiv Detail & Related papers (2023-09-06T14:59:38Z)
- Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy-gradient algorithm for topological Markov decision processes (TMDPs), obtained as a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate it on a real-world multi-objective navigation problem with an arbitrary ordering of objectives, both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm (a sketch of this combination appears after the list below).
In several simulations, our method scales better than existing approaches in both computation time and memory.
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- An induction proof of the backpropagation algorithm in matrix notation [0.0]
Backpropagation (BP) is an algorithm that exploits the computational architecture of neural networks.
This work provides a full induction proof of the BP algorithm in matrix notation.
arXiv Detail & Related papers (2021-07-20T10:02:17Z)
- Matrix Encoding Networks for Neural Combinatorial Optimization [5.524431376262751]
A popular approach is to use a neural net to compute on the parameters of a given combinatorial optimization (CO) problem.
There is currently no neural net model that takes in such matrix-style relationship data as an input.
In this paper, we propose such a model and show how conveniently it takes in and processes the parameters of complex CO problems.
arXiv Detail & Related papers (2021-06-21T13:50:23Z)
- Photonic co-processors in HPC: using LightOn OPUs for Randomized Numerical Linear Algebra [53.13961454500934]
We show that the randomization step for dimensionality reduction may itself become the computational bottleneck on traditional hardware.
We show that randomization can be significantly accelerated, at negligible precision loss, in a wide range of important RandNLA algorithms.
arXiv Detail & Related papers (2021-04-29T15:48:52Z)
- Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, in which a general form of iterative-algorithm-induced deep-unfolding neural network (IAIDNN) is developed (a generic illustration of the unfolding idea appears below this list).
An efficient IAIDNN is then built on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
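The CoLA entry above rests on two ideas: a linear-operator abstraction that exposes only matrix-vector products, and dispatch rules that select algorithms based on operator structure. The sketch below illustrates that pattern in plain NumPy; the class names and the single dispatch rule are illustrative assumptions, not CoLA's actual API.

```python
import numpy as np

class LinearOperator:
    """Abstract square linear map exposing only matrix-vector products."""
    def __init__(self, n):
        self.n = n
    def matvec(self, v):
        raise NotImplementedError

class Diagonal(LinearOperator):
    def __init__(self, d):
        super().__init__(len(d))
        self.d = np.asarray(d, dtype=float)
    def matvec(self, v):
        return self.d * v                      # O(n) product

class Dense(LinearOperator):
    def __init__(self, A):
        super().__init__(A.shape[0])
        self.A = np.asarray(A, dtype=float)
    def matvec(self, v):
        return self.A @ v                      # O(n^2) product

class Sum(LinearOperator):
    """Lazy sum: (A + B) v = A v + B v, with no materialized matrix."""
    def __init__(self, *ops):
        super().__init__(ops[0].n)
        self.ops = ops
    def matvec(self, v):
        return sum(op.matvec(v) for op in self.ops)

def solve(op, b, tol=1e-10, max_iters=1000):
    """Structure-aware dispatch: exact O(n) solve for Diagonal operators,
    conjugate gradients (assumes symmetric positive definite) otherwise."""
    if isinstance(op, Diagonal):
        return b / op.d
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    for _ in range(max_iters):
        Ap = op.matvec(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Example: solve (5 I + M M^T / n) x = b using only fast matvecs.
rng = np.random.default_rng(0)
n = 100
M = rng.standard_normal((n, n))
op = Sum(Diagonal(np.full(n, 5.0)), Dense(M @ M.T / n))
b = rng.standard_normal(n)
x = solve(op, b)
print(np.linalg.norm(op.matvec(x) - b))        # small residual
```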
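The sparse-Bayesian-learning entry above couples a diagonal estimation result with conjugate gradients. A standard instance of that combination estimates diag(A^{-1}) entrywise as (sum_k v_k * x_k) / (sum_k v_k * v_k) over Rademacher probes v_k, where each x_k = A^{-1} v_k is obtained by CG. The sketch below is a generic illustration of that technique, not the cited paper's exact inference scheme.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def estimate_inverse_diagonal(matvec, n, num_probes=400, seed=0):
    """Estimate diag(A^{-1}) for a symmetric positive definite A, given only
    the matrix-vector product v -> A v. Each probe is solved with CG, so the
    inverse (and the matrix itself) is never formed explicitly."""
    A = LinearOperator((n, n), matvec=matvec, dtype=float)
    rng = np.random.default_rng(seed)
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)    # Rademacher probe
        x, info = cg(A, v)                     # x ~= A^{-1} v
        assert info == 0, "CG did not converge"
        num += v * x
        den += v * v
    return num / den

# Sanity check against an explicit inverse on a small SPD matrix.
rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n))
A_dense = M @ M.T + n * np.eye(n)              # well-conditioned SPD matrix
d_est = estimate_inverse_diagonal(lambda v: A_dense @ v, n)
d_true = np.diag(np.linalg.inv(A_dense))
print("max relative error:", np.max(np.abs(d_est - d_true) / d_true))
```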
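Finally, the deep-unfolding entry above turns a fixed number of iterations of a known algorithm into the layers of a trainable network. The toy sketch below unfolds plain gradient descent on a least-squares problem with learnable per-layer step sizes; the cited paper instead unfolds the WMMSE precoding algorithm, so everything here is an illustrative stand-in.

```python
import torch
import torch.nn as nn

class UnfoldedGD(nn.Module):
    """Unfold T iterations of gradient descent on ||A x - b||^2 into a
    network whose per-iteration step sizes are learned end to end."""
    def __init__(self, T=10):
        super().__init__()
        self.steps = nn.Parameter(0.01 * torch.ones(T))  # one step size per layer

    def forward(self, A, b):
        x = torch.zeros(A.shape[-1])
        for s in self.steps:                   # each loop iteration is one "layer"
            x = x - s * (A.T @ (A @ x - b))    # one gradient-descent step
        return x

torch.manual_seed(0)
A = torch.randn(20, 5)
b = A @ torch.randn(5)

model = UnfoldedGD(T=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):                           # tune the step sizes for this problem
    opt.zero_grad()
    loss = torch.sum((A @ model(A, b) - b) ** 2)
    loss.backward()
    opt.step()
print("residual after 10 unfolded iterations:", loss.item())
```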