Constructing and Machine Learning Calabi-Yau Five-folds
- URL: http://arxiv.org/abs/2310.15966v2
- Date: Tue, 9 Jan 2024 19:00:00 GMT
- Title: Constructing and Machine Learning Calabi-Yau Five-folds
- Authors: R. Alawadhi, D. Angella, A. Leonardo and T. Schettini Gherardini
- Abstract summary: Supervised machine learning is performed on the cohomological data.
We construct all possible complete intersection Calabi-Yau five-folds in a product of four or fewer complex projective spaces.
We find that $h^{1,1}$ can be learnt very efficiently, with a very high $R^2$ score and an accuracy of $96\%$, i.e. $96\%$ of the predictions exactly match the correct values.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We construct all possible complete intersection Calabi-Yau five-folds in a
product of four or fewer complex projective spaces, with up to four constraints.
We obtain $27068$ spaces, which are not related by permutations of rows and
columns of the configuration matrix, and determine the Euler number for all of
them. Excluding the $3909$ product manifolds among those, we calculate the
cohomological data for $12433$ cases, i.e. $53.7 \%$ of the non-product spaces,
obtaining $2375$ different Hodge diamonds. The dataset containing all the above
information is available at
https://www.dropbox.com/scl/fo/z7ii5idt6qxu36e0b8azq/h?rlkey=0qfhx3tykytduobpld510gsfy&dl=0
. The distributions of the invariants are presented, and a comparison with the
lower-dimensional analogues is discussed. Supervised machine learning is
performed on the cohomological data, via classifier and regressor (both fully
connected and convolutional) neural networks. We find that $h^{1,1}$ can be
learnt very efficiently, with very high $R^2$ score and an accuracy of $96\%$,
i.e. $96 \%$ of the predictions exactly match the correct values. For
$h^{1,4},h^{2,3}, \eta$, we also find very high $R^2$ scores, but the accuracy
is lower, due to the large ranges of possible values.
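For concreteness, the two defining checks behind the construction can be written in a few lines. The sketch below is not the authors' code: `is_cy_fivefold` and `canonical_form` are hypothetical helper names, and it assumes the standard CICY conventions, in which a configuration matrix lists the multi-degrees $q_i^a$ of the $K$ defining polynomials over the ambient factors $\mathbb{P}^{n_i}$, the Calabi-Yau condition is $\sum_a q_i^a = n_i + 1$ for every row, and the dimension condition is $\sum_i n_i - K = 5$.

```python
import numpy as np
from itertools import permutations

def is_cy_fivefold(n, Q):
    """Check that a configuration matrix defines a Calabi-Yau five-fold.

    n : list of ambient projective-space dimensions, one per factor P^{n_i}
    Q : list of rows of multi-degrees, Q[i][a] = q_i^a (len(n) rows, K columns)
    """
    n, Q = np.asarray(n), np.asarray(Q)
    # complete-intersection dimension: sum(n_i) - K must equal 5
    if n.sum() - Q.shape[1] != 5:
        return False
    # vanishing first Chern class: each row must sum to n_i + 1
    return bool(np.all(Q.sum(axis=1) == n + 1))

def canonical_form(n, Q):
    """Brute-force canonical representative under row and column
    permutations, used to discard duplicate configurations; fine for
    the small matrices considered here (at most 4 rows and 4 columns)."""
    best = None
    for rp in permutations(range(len(n))):
        rows = [tuple(Q[i]) for i in rp]
        cols = tuple(sorted(zip(*rows)))     # sort columns for this row order
        key = (tuple(n[i] for i in rp), cols)
        best = key if best is None or key < best else best
    return best
```

For example, the five-fold analogue of the quintic, a degree-7 hypersurface in $\mathbb{P}^6$, passes the check: `is_cy_fivefold([6], [[7]])` returns `True`.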
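The supervised-learning step can likewise be sketched. This is a hedged illustration rather than the paper's architecture: the padded input shape, layer widths, epoch count, and the random tensors standing in for the actual dataset are all assumptions.

```python
import torch
import torch.nn as nn

# Placeholder data: configuration matrices zero-padded to 4 rows x 4 columns
# and flattened to 16 features; in practice X and y would be read from the
# dataset linked in the abstract.
X = torch.randint(0, 8, (1024, 16)).float()   # fake configuration entries
y = torch.randint(1, 5, (1024, 1)).float()    # fake h^{1,1} labels

model = nn.Sequential(                        # small fully connected regressor
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Accuracy in the abstract's sense: a prediction counts as correct only if
# it matches the integer label exactly after rounding.
with torch.no_grad():
    acc = (model(X).round() == y).float().mean().item()
print(f"exact-match accuracy on the placeholder data: {acc:.2f}")
```

The exact-match metric is what the abstract's $96\%$ figure for $h^{1,1}$ refers to: rounded network outputs compared against the true Hodge numbers.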
Related papers
- A Theory of Interpretable Approximations [61.90216959710842]
We study the idea of approximating a target concept $c$ by a small aggregation of concepts from some base class $\mathcal{H}$.
For any given pair of $\mathcal{H}$ and $c$, exactly one of these cases holds: (i) $c$ cannot be approximated by $\mathcal{H}$ with arbitrary accuracy.
We show that, in the case of interpretable approximations, even a slightly nontrivial a-priori guarantee on the complexity of approximations implies approximations with constant (distribution-free and accuracy-free) complexity.
arXiv Detail & Related papers (2024-06-15T06:43:45Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Near-Optimal Bounds for Learning Gaussian Halfspaces with Random
Classification Noise [50.64137465792738]
We show that any efficient SQ algorithm for the problem requires sample complexity at least $\Omega(d^{1/2}/(\max\{p, \epsilon\})^2)$.
Our lower bound suggests that this quadratic dependence on $1/epsilon$ is inherent for efficient algorithms.
arXiv Detail & Related papers (2023-07-13T18:59:28Z) - Deep multi-task mining Calabi-Yau four-folds [6.805575417034372]
We consider the dataset of all Calabi-Yau four-folds constructed as complete intersections in products of projective spaces.
With 30% (80%) training ratio, we reach an accuracy of 100% for $h^{1,1}$ and 97% for $h^{2,1}$.
arXiv Detail & Related papers (2021-08-04T18:00:15Z) - Learning elliptic partial differential equations with randomized linear
algebra [2.538209532048867]
We show that one can construct an approximant to the Green's function $G$ that converges almost surely.
The quantity $0 < \Gamma_\epsilon \leq 1$ characterizes the quality of the training dataset.
arXiv Detail & Related papers (2021-01-31T16:57:59Z) - Hardness of Learning Halfspaces with Massart Noise [56.98280399449707]
We study the complexity of PAC learning halfspaces in the presence of Massart (bounded) noise.
We show that there is an exponential gap between the information-theoretically optimal error and the best error that can be achieved by an SQ algorithm.
arXiv Detail & Related papers (2020-12-17T16:43:11Z) - Small Covers for Near-Zero Sets of Polynomials and Learning Latent
Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z) - An Algorithm for Learning Smaller Representations of Models With Scarce
Data [0.0]
We present a greedy algorithm for solving binary classification problems in situations where the dataset is too small or not fully representative.
It relies on a trained model with loose accuracy constraints, an iterative hyperparameter pruning procedure, and a function used to generate new data.
arXiv Detail & Related papers (2020-10-15T19:17:51Z) - Machine Learning Calabi-Yau Four-folds [0.0]
Hodge numbers depend non-trivially on the underlying manifold data.
We study supervised learning of the Hodge numbers $h^{1,1}$ and $h^{3,1}$ for Calabi-Yau four-folds.
arXiv Detail & Related papers (2020-09-05T14:54:25Z) - Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with an entropic regularization requires computing an $n \times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y) = -\log\langle\varphi(x), \varphi(y)\rangle$, where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r \ll n$ (a minimal sketch of the resulting linear-time iteration follows this list).
arXiv Detail & Related papers (2020-06-12T10:21:40Z)
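As flagged in the Sinkhorn entry above, the point of such ground costs is that the entropic kernel factorizes: with $c(x,y) = -\log\langle\varphi(x), \varphi(y)\rangle$ and unit regularization, $K = \Phi_x \Phi_y^\top$ has positive entries and rank at most $r$, so each Sinkhorn matrix-vector product costs $O(nr)$ instead of $O(n^2)$. The sketch below is a minimal illustration under those assumptions, with a random nonnegative feature map standing in for the positive features constructed in the paper; `sinkhorn_positive_features` is a hypothetical name.

```python
import numpy as np

def sinkhorn_positive_features(phi_x, phi_y, a, b, n_iters=200):
    """Sinkhorn scaling with the n x n kernel K replaced by the
    factorization K = phi_x @ phi_y.T, so K is never formed explicitly.

    phi_x, phi_y : (n, r) nonnegative feature evaluations of the two samples
    a, b         : (n,) source and target marginals (probability vectors)
    """
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):
        u = a / (phi_x @ (phi_y.T @ v))   # K v in O(n r)
        v = b / (phi_y @ (phi_x.T @ u))   # K^T u in O(n r)
    return u, v   # scalings of the implicit coupling diag(u) K diag(v)

# Toy usage with r << n.
rng = np.random.default_rng(0)
n, r = 1000, 20
phi_x, phi_y = rng.random((n, r)), rng.random((n, r))
a = b = np.full(n, 1.0 / n)
u, v = sinkhorn_positive_features(phi_x, phi_y, a, b)
```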