A parameter study for LLL and BKZ with application to shortest vector problems
- URL: http://arxiv.org/abs/2502.05160v1
- Date: Fri, 07 Feb 2025 18:41:44 GMT
- Title: A parameter study for LLL and BKZ with application to shortest vector problems
- Authors: Tobias Köppl, René Zander, Louis Henkel, Nikolay Tcholtchev,
- Abstract summary: Two well-known algorithms that can be used to simplify a given SVP are the Lenstra-Lenstra-Lovász (LLL) algorithm and the Block Korkine-Zolotarev (BKZ) algorithm.
We study the performance of both algorithms for SVPs with different sizes and modular rings.
- Score: 0.20971479389679337
- License:
- Abstract: In this work, we study the solution of shortest vector problems (SVPs) arising from learning with errors problems (LWEs). LWEs are linear systems of equations over a modular ring, where a perturbation vector is added to the right-hand side. This type of problem is of great interest, since LWEs have to be solved in order to break lattice-based cryptosystems such as the Module-Lattice-Based Key-Encapsulation Mechanism published by NIST in 2024. Due to this fact, several classical and quantum algorithms have been studied for solving SVPs. Two well-known algorithms that can be used to simplify a given SVP are the Lenstra-Lenstra-Lovász (LLL) algorithm and the Block Korkine-Zolotarev (BKZ) algorithm. LLL and BKZ construct bases that can be used to compute or approximate solutions of the SVP. We study the performance of both algorithms for SVPs of different sizes and over different modular rings. Here, an application of LLL or BKZ to a given SVP is considered successful if it produces a basis containing a solution vector of the SVP.
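The abstract describes LLL as a basis-reduction routine whose output basis may directly contain a short solution vector. As a rough illustration of the mechanics (size reduction followed by the Lovász swap condition), here is a minimal textbook LLL sketch in Python. The 3x3 example basis and the delta = 3/4 parameter are standard textbook defaults, not values taken from the paper; a real experiment would use a tuned library such as fplll.

```python
# Minimal textbook LLL lattice basis reduction (delta = 3/4), using exact
# rational arithmetic for clarity. Illustrative only.
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(B):
    # Orthogonalize the rows of B; returns B* and the mu coefficients.
    n = len(B)
    Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in B[i]]
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            v = [vi - mu[i][j] * bj for vi, bj in zip(v, Bs[j])]
        Bs.append(v)
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    # B: list of linearly independent integer rows (a lattice basis).
    B = [list(map(int, row)) for row in B]
    n, k = len(B), 1
    while k < n:
        for j in range(k - 1, -1, -1):           # size reduction
            _, mu = gram_schmidt(B)
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
        Bs, mu = gram_schmidt(B)
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                               # Lovasz condition holds
        else:
            B[k], B[k - 1] = B[k - 1], B[k]      # swap and step back
            k = max(k - 1, 1)
    return B

# Small worked example (a standard demonstration basis, not from the paper).
reduced = lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
```

Since every operation is an integer row operation or a row swap, the reduced basis spans the same lattice as the input (the determinant is preserved up to sign), which is exactly why a short SVP solution found in the reduced basis is valid for the original problem.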
Related papers
- Simple and Provable Scaling Laws for the Test-Time Compute of Large Language Models [70.07661254213181]
We propose two principled algorithms for the test-time compute of large language models.
We prove theoretically that the failure probability of one algorithm decays to zero exponentially as its test-time compute grows.
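The exponential-decay claim can be illustrated generically: if independent samples are each correct with probability p > 1/2, the failure probability of a majority vote over N samples decays exponentially in N (a Chernoff-type bound). The toy simulation below demonstrates that general principle only; it is not the paper's algorithm, and p = 0.6 is an arbitrary illustrative value.

```python
import random

def majority_failure_rate(p, n_samples, trials=2000, seed=0):
    # Estimate P(majority vote over n_samples i.i.d. Bernoulli(p) draws is wrong).
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        correct = sum(rng.random() < p for _ in range(n_samples))
        if correct * 2 <= n_samples:  # tie or minority correct => failure
            failures += 1
    return failures / trials

# Failure rate shrinks rapidly as the sample budget grows.
rates = [majority_failure_rate(0.6, n) for n in (1, 9, 25, 81)]
```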
arXiv Detail & Related papers (2024-11-29T05:29:47Z)
- A quantum-classical hybrid algorithm with Ising model for the learning with errors problem [13.06030390635216]
We propose a quantum-classical hybrid algorithm with Ising model (HAWI) to address the Learning-With-Errors (LWE) problem.
We identify the low-energy levels of the Hamiltonian to extract the solution, making it suitable for implementation on current noisy intermediate-scale quantum (NISQ) devices.
Our algorithm is iterative, and its time complexity depends on the specific quantum algorithm employed to find the Hamiltonian's low-energy levels.
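The core idea described here, recasting LWE as finding low-energy states of a Hamiltonian, can be illustrated classically at toy scale: score a candidate secret s by the energy ||A s - b mod q||^2 with centered residuals, so the true secret (with small noise) sits near the energy minimum. The sketch below is a simplified stand-in with a binary secret and exhaustive search in place of a quantum sampler; the instance values are made up and this is not the paper's Hamiltonian construction.

```python
from itertools import product

def lwe_energy(A, b, s, q):
    # Squared residual of A s - b over Z_q, using centered representatives.
    e = 0
    for row, bi in zip(A, b):
        r = (sum(a * si for a, si in zip(row, s)) - bi) % q
        r = r - q if r > q // 2 else r  # lift to (-q/2, q/2]
        e += r * r
    return e

def lowest_energy_secret(A, b, q, n):
    # Exhaustive search over binary secrets; in the hybrid scheme a quantum
    # routine would sample low-energy states instead of this loop.
    return min(product((0, 1), repeat=n), key=lambda s: lwe_energy(A, b, s, q))

# Toy instance: secret (1, 0, 1), small noise, q = 17 (all illustrative).
q, secret = 17, (1, 0, 1)
A = [[3, 5, 7], [2, 11, 4], [9, 1, 6], [5, 8, 2]]
b = [(sum(a * s for a, s in zip(row, secret)) + e) % q
     for row, e in zip(A, [1, 0, -1, 1])]
found = lowest_energy_secret(A, b, q, 3)
```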
arXiv Detail & Related papers (2024-08-15T05:11:35Z)
- An Efficient Quantum Algorithm for Linear System Problem in Tensor Format [4.264200809234798]
We propose a quantum algorithm based on recent advances in adiabatic-inspired quantum linear system algorithms (QLSA).
We rigorously show that the total complexity of our implementation is polylogarithmic in the dimension.
arXiv Detail & Related papers (2024-03-28T20:37:32Z)
- Deep Learning Assisted Multiuser MIMO Load Modulated Systems for Enhanced Downlink mmWave Communications [68.96633803796003]
This paper is focused on multiuser load modulation arrays (MU-LMAs) which are attractive due to their low system complexity and reduced cost for millimeter wave (mmWave) multi-input multi-output (MIMO) systems.
The existing precoding algorithm for downlink MU-LMA relies on a sub-array structured (SAS) transmitter which may suffer from decreased degrees of freedom and complex system configuration.
In this paper, we conceive an MU-LMA system employing a full-array structured (FAS) transmitter and propose two algorithms accordingly.
arXiv Detail & Related papers (2023-11-08T08:54:56Z)
- Online Learning Quantum States with the Logarithmic Loss via VB-FTRL [1.8856444568755568]
Online learning of quantum states with the logarithmic loss (LL-OLQS) has been a classic open problem in online learning for over three decades.
In this paper, we generalize VB-FTRL for LL-OLQS with moderate computational complexity.
Each iteration of the algorithm consists of a semidefinite program that can be solved in polynomial time by, for example, cutting-plane methods.
arXiv Detail & Related papers (2023-11-06T15:45:33Z)
- AMS-Net: Adaptive Multiscale Sparse Neural Network with Interpretable Basis Expansion for Multiphase Flow Problems [8.991619150027267]
We propose an adaptive sparse learning algorithm that can be applied to learn the physical processes and obtain a sparse representation of the solution given a large snapshot space.
The information of the basis functions is incorporated in the loss function, which minimizes the differences between the downscaled reduced-order solutions and reference solutions at multiple time steps.
More numerical tests are performed on two-phase multiscale flow problems to show the capability and interpretability of the proposed method on complicated applications.
arXiv Detail & Related papers (2022-07-24T13:12:43Z)
- Optimization-based Block Coordinate Gradient Coding for Mitigating Partial Stragglers in Distributed Learning [58.91954425047425]
This paper aims to design a new gradient coding scheme for mitigating partial stragglers in distributed learning.
We propose a gradient coordinate coding scheme with L coding parameters representing L possibly different diversities for the L coordinates, which generalizes most existing gradient coding schemes.
arXiv Detail & Related papers (2022-06-06T09:25:40Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
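The coupling described here can be sketched generically: Rademacher probe vectors v give the stochastic diagonal estimate diag(M) ~ sum(v * (M v)) / sum(v * v), and when M = A^-1 each product M v is obtained by solving A x = v with conjugate gradients rather than forming the inverse. The self-contained toy version below uses pure Python on a small SPD matrix; the matrix, probe count, and tolerances are illustrative, and this is not the paper's code.

```python
import random

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def cg(A, b, tol=1e-12, maxiter=100):
    # Conjugate gradient for symmetric positive definite A: returns x with A x ~ b.
    n = len(b)
    x, r = [0.0] * n, list(b)
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        Ap = matvec(A, p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def diag_inverse_estimate(A, probes=2000, seed=0):
    # Rademacher probes: diag(A^-1)_i ~ sum_k v_k[i] * (A^-1 v_k)[i] / probes,
    # with each solve A x = v_k done by CG -- no inverse or covariance formed.
    rng = random.Random(seed)
    n = len(A)
    num, den = [0.0] * n, [0.0] * n
    for _ in range(probes):
        v = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        x = cg(A, v)
        for i in range(n):
            num[i] += v[i] * x[i]
            den[i] += v[i] * v[i]
    return [ni / di for ni, di in zip(num, den)]

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 2.0]]
d = diag_inverse_estimate(A)  # true diag(A^-1) is [3/11, 4/11, 1/2]
```

The estimator's variance for each entry depends only on the off-diagonal mass of A^-1, which is why memory stays O(n): only probe vectors and CG work vectors are ever held.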
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Hybrid algorithms to solve linear systems of equations with limited qubit resources [7.111403318486868]
The complexity of classical methods increases linearly with the size of the system of equations.
The HHL algorithm proposed by Harrow et al. achieves exponential acceleration compared with the best classical algorithm.
In this paper, three hybrid iterative phase estimation algorithms (HIPEA) are designed based on the iterative phase estimation algorithm.
arXiv Detail & Related papers (2021-06-29T15:10:55Z)
- Covariance-Free Sparse Bayesian Learning [62.24008859844098]
We introduce a new SBL inference algorithm that avoids explicit inversions of the covariance matrix.
Our method can be up to thousands of times faster than existing baselines.
We showcase how our new algorithm enables SBL to tractably tackle high-dimensional signal recovery problems.
arXiv Detail & Related papers (2021-05-21T16:20:07Z)
- Sublinear Least-Squares Value Iteration via Locality Sensitive Hashing [49.73889315176884]
We present the first provable Least-Squares Value Iteration (LSVI) algorithms that have runtime complexity sublinear in the number of actions.
We build the connections between the theory of approximate maximum inner product search and the regret analysis of reinforcement learning.
arXiv Detail & Related papers (2021-05-18T05:23:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.