Machine learning for moduli space of genus two curves and an application to post-quantum cryptography
- URL: http://arxiv.org/abs/2403.17250v1
- Date: Mon, 25 Mar 2024 22:52:50 GMT
- Title: Machine learning for moduli space of genus two curves and an application to post-quantum cryptography
- Authors: Elira Shaska, Tony Shaska
- Abstract summary: We use machine learning to study the locus $\mathcal{L}_n$ of genus two curves with $(n, n)$-split Jacobian.
Such curves are important in isogeny-based cryptography.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We use machine learning to study the locus ${\mathcal L}_n$ of genus two curves with $(n, n)$-split Jacobian. More precisely, we design a transformer model which, given values of the Igusa invariants, determines whether the corresponding genus two curve lies in the locus ${\mathcal L}_n$, for $n = 2, 3, 5, 7$. Such curves are important in isogeny-based cryptography. During this study we discover that there are no rational points ${\mathfrak p} \in {\mathcal L}_n$ with weighted moduli height $\leq 2$ in any of ${\mathcal L}_2$, ${\mathcal L}_3$, and ${\mathcal L}_5$. This extends the authors' previous work using machine learning methods to study the moduli space of genus 2 algebraic curves.
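The weighted moduli height used in the abstract can be illustrated with a short sketch. Assuming a point in the weighted projective space of Igusa invariants $(J_2, J_4, J_6, J_{10})$, whose coordinates carry weights $(2, 4, 6, 10)$ and are already normalized to integers with weighted gcd 1, the weighted height is $\max_i |x_i|^{1/w_i}$. The function below is an illustrative computation under those assumptions, not code from the paper.

```python
# Illustrative sketch: weighted moduli height of a point with integer
# coordinates in a weighted projective space.  The Igusa invariants
# (J_2, J_4, J_6, J_10) of a genus two curve carry weights (2, 4, 6, 10);
# assuming the coordinates are already normalized (weighted gcd 1), the
# weighted height is max_i |x_i|^(1/w_i).
def weighted_moduli_height(coords, weights=(2, 4, 6, 10)):
    if len(coords) != len(weights):
        raise ValueError("need one coordinate per weight")
    return max(abs(x) ** (1.0 / w) for x, w in zip(coords, weights))

# Example: the point (4, 16, 64, 1024) has height 2, since
# 4^(1/2) = 16^(1/4) = 64^(1/6) = 1024^(1/10) = 2.
```

With this convention, the abstract's statement says that no such normalized rational point of ${\mathcal L}_2$, ${\mathcal L}_3$, or ${\mathcal L}_5$ has all coordinates this small.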
Related papers
- Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit [75.4661041626338]
We study the problem of gradient-descent learning of a single-index target function $f_*(\boldsymbol{x}) = \sigma_*(\langle \boldsymbol{x}, \boldsymbol{\theta} \rangle)$ under isotropic Gaussian data.
We prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ with arbitrary link function, with a sample and runtime complexity of $n \asymp T \asymp C(q) \cdot d$ [...]
arXiv Detail & Related papers (2024-06-03T17:56:58Z) - Transformer In-Context Learning for Categorical Data [51.23121284812406]
We extend research on understanding Transformers through the lens of in-context learning with functional data by considering categorical outcomes, nonlinear underlying models, and nonlinear attention.
We present what is believed to be the first real-world demonstration of this few-shot-learning methodology, using the ImageNet dataset.
arXiv Detail & Related papers (2024-05-27T15:03:21Z) - Provably learning a multi-head attention layer [55.2904547651831]
The multi-head attention layer is one of the key components of the transformer architecture, setting it apart from traditional feed-forward models.
In this work, we initiate the study of provably learning a multi-head attention layer from random examples.
We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable.
arXiv Detail & Related papers (2024-02-06T15:39:09Z) - Families of costs with zero and nonnegative MTW tensor in optimal transport [0.0]
We compute explicitly the MTW tensor for the optimal transport problem on $\mathbb{R}^n$ with a cost function of the form $\mathsf{c}$.
We analyze the $\sinh$-type hyperbolic cost, providing examples of $\mathsf{c}$-type functions and divergences.
arXiv Detail & Related papers (2024-01-01T20:33:27Z) - Learned Nonlinear Predictor for Critically Sampled 3D Point Cloud
Attribute Compression [24.001318485207207]
We study 3D point cloud compression via a decoder approach.
In this paper, we study predicting $f_{l+1}^*$ at level $l+1$ given $f_l^*$ at level $l$ and the encoding of $G_{l+1}^*$ for the $p=1$ case.
arXiv Detail & Related papers (2023-11-22T17:26:54Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - Learning Elastic Costs to Shape Monge Displacements [39.381326738705255]
The Monge problem asks for the most efficient way to map one distribution to another.
Elastic costs shape the *displacements* of Monge maps $T$.
We propose a numerical method to compute Monge maps that are provably optimal.
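For reference on what a Monge map computes (this is the classical quadratic-cost baseline in one dimension, not the paper's elastic-cost method): between two empirical measures with equally many atoms, the optimal map for the squared cost is the monotone rearrangement, pairing the $i$-th smallest source point with the $i$-th smallest target point.

```python
import numpy as np

# Classical 1-D quadratic-cost Monge map between two empirical measures
# with the same number of atoms: sort both samples and pair them in
# order (monotone rearrangement).  A textbook baseline, not the
# elastic-cost method of the paper above.
def monge_map_1d(source, target):
    source = np.asarray(source, dtype=float)
    target = np.asarray(target, dtype=float)
    order = np.argsort(source)
    mapped = np.empty_like(source)
    mapped[order] = np.sort(target)  # i-th smallest source -> i-th smallest target
    return mapped

src = np.array([3.0, 1.0, 2.0])
dst = np.array([10.0, 30.0, 20.0])
mapped = monge_map_1d(src, dst)  # pairs 1->10, 2->20, 3->30
```

Elastic costs generalize this picture by adding a regularizer to the quadratic cost, which changes the structure of the displacements $T(x) - x$.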
arXiv Detail & Related papers (2023-06-20T21:17:32Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient Descent [50.659479930171585]
We study a function of the form $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C \cdot \mathrm{OPT} + \epsilon$ with high probability.
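The function class in this entry can be made concrete with a small sketch: a plain gradient-descent fit of a single ReLU neuron on clean synthetic data. This illustrates only the setting $\mathbf{x} \mapsto \sigma(\mathbf{w} \cdot \mathbf{x})$; it is not the paper's algorithm for adversarial label noise, and the initialization and step size below are illustrative choices.

```python
import numpy as np

# Sketch: learn a single neuron x -> sigma(w . x) with sigma = ReLU by
# plain gradient descent on the squared loss.  Labels are clean
# (noiseless) synthetic data -- this shows the setting, not the paper's
# noise-robust algorithm.
rng = np.random.default_rng(0)
d, n = 5, 4000
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.maximum(X @ w_true, 0.0)          # labels from the target neuron

# Moment-based initialization: for Gaussian x, E[x * relu(w_true . x)]
# equals w_true / 2, so this start is positively correlated with w_true.
w = X.T @ y / n

lr = 0.1
for _ in range(1000):
    z = X @ w
    residual = np.maximum(z, 0.0) - y
    grad = X.T @ (residual * (z > 0)) / n  # chain rule; relu'(z) = 1[z > 0]
    w -= lr * grad

final_loss = np.mean((np.maximum(X @ w, 0.0) - y) ** 2)
```

In the realizable case the squared loss is driven close to zero; the paper's contribution concerns what is achievable when the labels are adversarially corrupted, where guarantees of the form $F(\mathbf{w}) = C \cdot \mathrm{OPT} + \epsilon$ are the natural target.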
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Uncertainties in Quantum Measurements: A Quantum Tomography [52.77024349608834]
The observables associated with a quantum system $S$ form a non-commutative algebra $\mathcal{A}_S$.
It is assumed that a density matrix $\rho$ can be determined from the expectation values of observables.
Abelian algebras do not have inner automorphisms, so the measurement apparatus can determine mean values of observables.
arXiv Detail & Related papers (2021-12-14T16:29:53Z) - Model Selection with Near Optimal Rates for Reinforcement Learning with General Model Classes [27.361399036211694]
We address the problem of model selection for the finite horizon episodic Reinforcement Learning (RL) problem.
In the model selection framework, instead of $\mathcal{P}^*$, we are given $M$ nested families of transition kernels.
We show that ARL-GEN obtains a regret of $\widetilde{\mathcal{O}}(d_{\mathcal{E}}^* H^2 + \sqrt{d_{\mathcal{E}}^* \mathbb{M}^* H^2 T})$.
arXiv Detail & Related papers (2021-07-13T05:00:38Z) - Learners' languages [0.0]
The authors show that the fundamental elements of deep learning -- gradient descent and backpropagation -- can be conceptualized as a strong monoidal functor.
We show that a map $A \to B$ in $\mathbf{Para}(\mathbf{SLens})$ has a natural interpretation in terms of dynamical systems.
arXiv Detail & Related papers (2021-03-01T18:34:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.