Machine Learning Calabi-Yau Four-folds
- URL: http://arxiv.org/abs/2009.02544v2
- Date: Mon, 14 Sep 2020 11:11:55 GMT
- Title: Machine Learning Calabi-Yau Four-folds
- Authors: Yang-Hui He and Andre Lukas
- Abstract summary: Hodge numbers depend non-trivially on the underlying manifold data.
We study supervised learning of the Hodge numbers $h^{1,1}$ and $h^{3,1}$ for Calabi-Yau four-folds.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hodge numbers of Calabi-Yau manifolds depend non-trivially on the underlying
manifold data and they present an interesting challenge for machine learning.
In this letter we consider the data set of complete intersection Calabi-Yau
four-folds, a set of about 900,000 topological types, and study supervised
learning of the Hodge numbers $h^{1,1}$ and $h^{3,1}$ for these manifolds. We find that
$h^{1,1}$ can be successfully learned (to 96% precision) by fully connected
classifier and regressor networks. While both types of networks fail for $h^{3,1}$,
we show that a more complicated two-branch network, combined with feature
enhancement, can act as an efficient regressor (to 98% precision) for $h^{3,1}$, at
least for a subset of the data. This hints at the existence of an, as yet
unknown, formula for Hodge numbers.
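The supervised setup the abstract describes, regressing $h^{1,1}$ from configuration-matrix data with a fully connected network, can be sketched as follows. This is a minimal illustration on synthetic data: the matrix shapes, layer sizes, and target are assumptions, not the authors' architecture or the CICY dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for CICY configuration matrices: each "manifold" is a
# padded integer matrix, flattened to a feature vector, with a synthetic
# scalar target playing the role of h^{1,1}.
N, D = 256, 320
X = rng.integers(0, 5, size=(N, D)).astype(float)
y = X.mean(axis=1)  # placeholder target, NOT a real Hodge number

# One-hidden-layer fully connected regressor, trained by plain
# gradient descent on the mean-squared error.
H = 64
W1 = rng.normal(0.0, 0.05, size=(D, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.05, size=(H, 1)); b2 = np.zeros(1)

losses = []
lr = 1e-3
for _ in range(300):
    h = np.maximum(X @ W1 + b1, 0.0)      # ReLU hidden layer
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    dh = (err[:, None] @ W2.T) * (h > 0)  # backprop through the ReLU
    W2 -= lr * (h.T @ err[:, None]) / N
    b2 -= lr * err.mean()
    W1 -= lr * (X.T @ dh) / N
    b1 -= lr * dh.mean(axis=0)

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The training loss drops steadily on this linear toy target; the paper's point is that the same plain architecture works for $h^{1,1}$ but not for $h^{3,1}$, which needs the two-branch network with feature enhancement.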
Related papers
- Bayesian Inference with Deep Weakly Nonlinear Networks [57.95116787699412]
We show at a physics level of rigor that Bayesian inference with a fully connected neural network is solvable.
We provide techniques to compute the model evidence and posterior to arbitrary order in $1/N$ and at arbitrary temperature.
arXiv Detail & Related papers (2024-05-26T17:08:04Z)
- Deep learning complete intersection Calabi-Yau manifolds [0.0]
We review advancements in deep learning techniques for complete intersection Calabi-Yau (CICY) 3- and 4-folds.
We first discuss methodological aspects and data analysis, before describing neural networks architectures.
We include new results on extrapolating predictions from low to high Hodge numbers, and conversely.
arXiv Detail & Related papers (2023-11-20T15:37:39Z)
- Constructing and Machine Learning Calabi-Yau Five-folds [0.0]
We construct all possible complete intersection Calabi-Yau five-folds in a product of four or fewer complex projective spaces.
Supervised machine learning is performed on the cohomological data.
We find that $h^{1,1}$ can be learnt very efficiently, with a very high $R^2$ score and an accuracy of $96\%$, i.e. $96\%$ of the predictions exactly match the correct values.
arXiv Detail & Related papers (2023-10-24T16:07:08Z)
- Deep multi-task mining Calabi-Yau four-folds [6.805575417034372]
We consider the dataset of all Calabi-Yau four-folds constructed as complete intersections in products of projective spaces.
With a 30% (80%) training ratio, we reach an accuracy of 100% for $h^{(1,1)}$ and 97% for $h^{(2,1)}$.
arXiv Detail & Related papers (2021-08-04T18:00:15Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can solve this separation problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
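The random-features setting in this entry can be illustrated concretely: fix a two-layer network with standard Gaussian weights and uniform biases, train only a linear readout, and data that is not linearly separable in input space becomes separable in feature space. The concentric-circles data and the widths below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two classes on concentric circles in R^2: not linearly separable
# in the input space.
n = 200
theta = rng.uniform(0.0, 2.0 * np.pi, n)
radius = np.where(np.arange(n) < n // 2, 0.5, 1.5)
X = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
y = np.where(np.arange(n) < n // 2, -1.0, 1.0)

# Random (untrained) ReLU feature map: standard Gaussian weights,
# uniformly distributed biases.
m = 500
W = rng.standard_normal((2, m))
b = rng.uniform(-2.0, 2.0, m)
Phi = np.maximum(X @ W + b, 0.0)

# Only a linear readout is fit, via least squares on the features.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
acc = float(np.mean(np.sign(Phi @ w) == y))
print(f"training accuracy on random ReLU features: {acc:.2f}")
```

With 500 random features for 200 points the linear readout separates the two circles perfectly, which is the qualitative phenomenon the paper quantifies.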
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z)
- On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the learning problem.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z)
- Machine learning for complete intersection Calabi-Yau manifolds: a methodological study [0.0]
We revisit the question of predicting Hodge numbers $h^{1,1}$ and $h^{2,1}$ of complete intersection Calabi-Yau manifolds using machine learning (ML).
We obtain 97% (resp. 99%) accuracy for $h^{1,1}$ on the old dataset using a neural network inspired by the Inception model, with only 30% (resp. 70%) of the data for training.
For the new dataset, a simple linear regression leads to almost 100% accuracy with 30% of the data for training.
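The near-perfect linear-regression result quoted here is easy to reproduce in a toy setting: if a Hodge number is an (almost) linear function of simple input features, least squares plus rounding recovers it exactly. The features and the integer linear target below are synthetic assumptions, not the CICY data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: integer feature matrices and an integer target
# that is exactly linear in the features (playing the role of h^{1,1}).
N, D = 1000, 12
X = rng.integers(0, 4, size=(N, D)).astype(float)
true_w = rng.integers(1, 3, size=D).astype(float)
h11 = (X @ true_w).astype(int)

# Train on a 30% split, as in the quoted result.
n_train = int(0.3 * N)
Xtr, ytr = X[:n_train], h11[:n_train].astype(float)

# Plain least squares, then round predictions to the nearest integer,
# since Hodge numbers are integers.
w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
pred = np.rint(X @ w).astype(int)
accuracy = float(np.mean(pred == h11))
print(f"exact-match accuracy: {accuracy:.3f}")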
arXiv Detail & Related papers (2020-07-30T19:43:49Z)
- Inception Neural Network for Complete Intersection Calabi-Yau 3-folds [0.0]
We introduce a neural network inspired by Google's Inception model to compute the Hodge number $h^{1,1}$ of complete intersection Calabi-Yau (CICY) 3-folds.
This architecture substantially improves the accuracy of predictions over existing results, already reaching 97% accuracy with just 30% of the data used for training.
arXiv Detail & Related papers (2020-07-27T08:56:19Z)
- Deep Polynomial Neural Networks [77.70761658507507]
$\Pi$Nets are a new class of function approximators based on polynomial expansions.
$\Pi$Nets produce state-of-the-art results in three challenging tasks, i.e. image generation, face verification and 3D mesh representation learning.
arXiv Detail & Related papers (2020-06-20T16:23:32Z)
- Backward Feature Correction: How Deep Learning Performs Deep (Hierarchical) Learning [66.05472746340142]
This paper analyzes how multi-layer neural networks can perform hierarchical learning _efficiently_ and _automatically_ by SGD on the training objective.
We establish a new principle called "backward feature correction", where the errors in the lower-level features can be automatically corrected when training together with the higher-level layers.
arXiv Detail & Related papers (2020-01-13T17:28:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.