Word Linear Complexity of sequences and Local Inversion of maps over finite fields
- URL: http://arxiv.org/abs/2311.06574v1
- Date: Sat, 11 Nov 2023 14:06:23 GMT
- Title: Word Linear Complexity of sequences and Local Inversion of maps over finite fields
- Authors: Virendra Sule
- Abstract summary: This paper develops the notion of Word Linear Complexity ($WLC$) of vector valued sequences over finite fields $\mathbb{F}$ as an extension of the Linear Complexity ($LC$) of sequences and their ensembles.
It shows that the matrix minimal polynomial can be used with vector valued sequences generated iteratively by maps $F:\mathbb{F}^n\rightarrow\mathbb{F}^n$ at a given $y$ in $\mathbb{F}^n$ to solve for the unique local inverse $x$ of $y=F(x)$ when the sequence is periodic.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper develops the notion of \emph{Word Linear Complexity} ($WLC$) of vector valued sequences over finite fields $\mathbb{F}$ as an extension of the Linear Complexity ($LC$) of sequences and their ensembles. This notion extends the concept of the minimal polynomial of an ensemble (vector valued) sequence to that of a matrix minimal polynomial, and shows that the matrix minimal polynomial can be used with vector valued sequences generated iteratively by maps $F:\mathbb{F}^n\rightarrow\mathbb{F}^n$ at a given $y$ in $\mathbb{F}^n$ to solve for the unique local inverse $x$ of the equation $y=F(x)$ when the sequence is periodic. The idea of solving for a local inverse of a map over finite fields when the iterative sequence is periodic, and its application to various problems of cryptanalysis, was developed in previous papers \cite{sule322, sule521, sule722, suleCAM22} using the well known notion of the $LC$ of sequences. $LC$ is the degree of the minimal polynomial associated with the sequence. The generalization of $LC$ to $WLC$ considers vector valued (or word oriented) sequences, in which the word oriented recurrence relation is obtained by matrix-vector multiplication instead of the scalar multiplication used in the definition of $LC$. Hence the associated minimal polynomial is matrix valued, and its degree is called the $WLC$. A condition is derived for a nontrivial matrix polynomial associated with the word oriented recurrence relation to exist when the sequence is periodic. It is shown that when the matrix minimal polynomial exists, $n \cdot WLC = LC$. Finally, it is shown that the local inversion problem can be solved using the matrix minimal polynomial when such a polynomial exists, which leads to a word oriented approach to local inversion.
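As a minimal illustration of the $LC$-based local inversion that the abstract builds on (the scalar case from the cited earlier papers, not the word oriented $WLC$ version), the sketch below iterates a toy invertible map $F$ on $GF(2)^4$, finds the minimal linear recurrence of the periodic sequence $y, F(y), F^2(y), \ldots$, and runs that recurrence one step backwards to recover the local inverse $x$ with $F(x)=y$. The particular map $F$ (multiplication by $x$ in $GF(2)[x]/(x^4+x+1)$), the point $y$, and the brute-force recurrence search are illustrative choices, not the paper's construction.

```python
import itertools

WIDTH = 4  # the state is a vector in GF(2)^4, packed into a 4-bit integer

def F(s):
    """Toy invertible map on GF(2)^4: multiplication by x in the field
    GF(2)[x]/(x^4 + x + 1).  Iterating F from any nonzero point gives a
    purely periodic sequence, as the local-inversion method requires."""
    s <<= 1
    if s & 0b10000:          # reduce modulo x^4 + x + 1
        s ^= 0b10011
    return s

def xor_combo(coeffs, window):
    """XOR of the window entries selected by the GF(2) coefficients."""
    acc = 0
    for ci, si in zip(coeffs, window):
        if ci:
            acc ^= si
    return acc

def minimal_recurrence(seq):
    """Smallest-degree linear recurrence over GF(2) satisfied by seq:
    returns coefficients (c_0, ..., c_{d-1}) such that
    s_{k+d} = c_0 s_k ^ c_1 s_{k+1} ^ ... ^ c_{d-1} s_{k+d-1} for all k.
    Brute force over the 2^d candidate coefficient vectors -- fine for a
    toy degree; a real implementation would use Berlekamp-Massey."""
    for d in range(1, len(seq)):
        for c in itertools.product((0, 1), repeat=d):
            if all(seq[k + d] == xor_combo(c, seq[k:k + d])
                   for k in range(len(seq) - d)):
                return c
    return None

# Iterate F at the point y where we want the local inverse.
y = 0b0110
seq = [y]
for _ in range(20):
    seq.append(F(seq[-1]))

c = minimal_recurrence(seq)
d = len(c)
assert c[0] == 1  # nonzero constant term <=> purely periodic sequence

# The recurrence also holds one step *before* the start (k = -1), which
# solves for x = s_{-1}, the unique local inverse of y = F(x):
#   s_{d-1} = c_0 x ^ c_1 s_0 ^ ... ^ c_{d-1} s_{d-2}
x = seq[d - 1]
for i in range(1, d):
    if c[i]:
        x ^= seq[i - 1]

assert F(x) == y
print(f"local inverse of y={y:04b} is x={x:04b}")
```

Because the chosen $F$ is linear, the recurrence found here (degree 4, from the modulus $x^4+x+1$) is exactly the minimal polynomial of the map restricted to the orbit of $y$; for a general nonlinear $F$ the same backward step applies once the minimal polynomial of the iterated sequence is known.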
Related papers
- The Communication Complexity of Approximating Matrix Rank [50.6867896228563]
We show that this problem has randomized communication complexity $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$.
As an application, we obtain an $\Omega(\frac{1}{k}\cdot n^2\log|\mathbb{F}|)$ space lower bound for any streaming algorithm with $k$ passes.
arXiv Detail & Related papers (2024-10-26T06:21:42Z) - Faster Sampling from Log-Concave Densities over Polytopes via Efficient Linear Solvers [29.212403229351253]
We present a nearly-optimal implementation of this Markov chain with per-step complexity which is roughly the number of non-zero entries of $A$ while the number of Markov chain steps remains the same.
Key technical ingredients are 1) to show that the matrices that arise in this Dikin walk change slowly, 2) to deploy efficient linear solvers that can leverage this slow change to speed up matrix inversion by using information computed in previous steps, and 3) to speed up the computation of the determinantal term in the Metropolis filter step via a randomized Taylor series-based estimator.
arXiv Detail & Related papers (2024-09-06T14:49:43Z) - Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z) - Polynomial Complexity of Inversion of sequences and Local Inversion of Maps [0.0]
The paper defines and explores the solution to the problem of \emph{Inversion} of a finite sequence over the binary field.
The minimum number of variables (order) of recurrence relations (RRs) of a fixed degree defining the sequence is termed the \emph{Polynomial Complexity} of the sequence at that degree.
arXiv Detail & Related papers (2024-06-28T02:34:29Z) - Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit [75.4661041626338]
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \sigma_*(\langle\boldsymbol{x},\boldsymbol{\theta}\rangle)$ under isotropic Gaussian data.
We prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ of arbitrary link function with a sample and runtime complexity of $n \asymp T \asymp C(q)\cdot d$
arXiv Detail & Related papers (2024-06-03T17:56:58Z) - Hardness of Low Rank Approximation of Entrywise Transformed Matrix
Products [9.661328973620531]
Inspired by fast algorithms in natural language processing, we study low rank approximation in the entrywise transformed setting.
We give novel reductions from the Strong Exponential Time Hypothesis (SETH) that rely on lower bounding the leverage scores of flat sparse vectors.
Since our low rank algorithms rely on matrix-vector products, our lower bounds extend to show that computing $f(UV)W$, for even a small matrix $W$, requires $\Omega(n^{2-o(1)})$ time.
arXiv Detail & Related papers (2023-11-03T14:56:24Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Average-Case Complexity of Tensor Decomposition for Low-Degree
Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 decomposition where one component is slightly larger in norm than the rest.
We show that tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z) - Spectrum of inner-product kernel matrices in the polynomial regime and
multiple descent phenomenon in kernel ridge regression [3.997680012976965]
The kernel matrix is well approximated by its degree-$\ell$ polynomial approximation.
We show that the spectrum of the matrix converges in distribution to a Marchenko-Pastur law.
arXiv Detail & Related papers (2022-04-21T22:20:52Z) - Linear-Sample Learning of Low-Rank Distributions [56.59844655107251]
We show that learning $k\times k$, rank-$r$ matrices to normalized $L_1$ distance requires $\Omega(\frac{kr}{\epsilon^2})$ samples.
We propose an algorithm that uses $\mathcal{O}(\frac{kr}{\epsilon^2}\log^2\frac{k}{\epsilon})$ samples, a number linear in the high dimension $k$ and nearly linear in the, typically low, rank $r$.
arXiv Detail & Related papers (2020-09-30T19:10:32Z) - Learning sums of powers of low-degree polynomials in the non-degenerate
case [2.6109033135086777]
We give a learning algorithm for an arithmetic circuit model from a lower bound for the same model, provided certain non-degeneracy conditions hold.
Our algorithm follows a general scheme for converting a lower bound for a circuit model into a learning algorithm for that model.
arXiv Detail & Related papers (2020-04-15T06:18:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.