Deep multi-task mining Calabi-Yau four-folds
- URL: http://arxiv.org/abs/2108.02221v1
- Date: Wed, 4 Aug 2021 18:00:15 GMT
- Title: Deep multi-task mining Calabi-Yau four-folds
- Authors: Harold Erbin, Riccardo Finotello, Robin Schneider and Mohamed
Tamaazousti
- Abstract summary: We consider the dataset of all Calabi-Yau four-folds constructed as complete intersections in products of projective spaces.
With 30% (80%) training ratio, we reach an accuracy of 100% for $h^{(1,1)}$ and 97% for $h^{(2,1)}$.
- Score: 6.805575417034372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We continue earlier efforts in computing the dimensions of tangent space
cohomologies of Calabi-Yau manifolds using deep learning. In this paper, we
consider the dataset of all Calabi-Yau four-folds constructed as complete
intersections in products of projective spaces. Employing neural networks
inspired by state-of-the-art computer vision architectures, we improve earlier
benchmarks and demonstrate that all four non-trivial Hodge numbers can be
learned at the same time using a multi-task architecture. With 30% (80%)
training ratio, we reach an accuracy of 100% for $h^{(1,1)}$ and 97% for
$h^{(2,1)}$ (100% for both), 81% (96%) for $h^{(3,1)}$, and 49% (83%) for
$h^{(2,2)}$. Assuming that the Euler number is known, as it is easy to compute,
and taking into account the linear constraint arising from index computations,
we get 100% total accuracy.
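For reference, the linear constraints invoked here are presumably the standard index relations for Calabi-Yau four-folds (a sketch of the argument, not spelled out in the abstract): $\chi = 6\,(8 + h^{(1,1)} - h^{(2,1)} + h^{(3,1)})$ and $h^{(2,2)} = 2\,(22 + 2h^{(1,1)} - h^{(2,1)} + 2h^{(3,1)})$. Given the Euler number $\chi$ and exact predictions for $h^{(1,1)}$ and $h^{(2,1)}$, the first relation fixes $h^{(3,1)}$ and the second then fixes $h^{(2,2)}$, which is how the residual errors on the harder Hodge numbers can be eliminated.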
Related papers
- Fast computation of 2-isogenies in dimension 4 and cryptographic applications [0.0]
We present algorithms to compute chains of $2$-isogenies between abelian varieties of dimension $g \geq 1$ with theta-coordinates of level $n=2$.
We are able to run a complete key recovery attack on SIDH within a few seconds on a laptop, even when the endomorphism ring of the starting curve is unknown.
arXiv Detail & Related papers (2024-07-22T09:19:20Z) - Constructing and Machine Learning Calabi-Yau Five-folds [0.0]
Supervised machine learning is performed on the cohomological data.
We construct all possible complete intersection Calabi-Yau five-folds in a product of four or fewer complex projective spaces.
We find that $h^{1,1}$ can be learnt very efficiently, with a very high $R^2$ score and an accuracy of 96%, i.e. 96% of the predictions exactly match the correct values.
arXiv Detail & Related papers (2023-10-24T16:07:08Z) - Understanding Deep Neural Function Approximation in Reinforcement
Learning via $\epsilon$-Greedy Exploration [53.90873926758026]
This paper provides a theoretical study of deep neural function approximation in reinforcement learning (RL)
We focus on the value-based algorithm with $\epsilon$-greedy exploration via deep (and two-layer) neural networks endowed by Besov (and Barron) function spaces.
Our analysis reformulates the temporal difference error in an $L^2(\mathrm{d}\mu)$-integrable space over a certain averaged measure $\mu$, and transforms it to a generalization problem under the non-i.i.d. setting.
arXiv Detail & Related papers (2022-09-15T15:42:47Z) - Identifying good directions to escape the NTK regime and efficiently
learn low-degree plus sparse polynomials [52.11466135206223]
We show that a wide two-layer neural network can jointly use the Neural Tangent Kernel (NTK) and the QuadNTK to fit target functions.
This yields an end-to-end convergence guarantee with provable sample-complexity improvement over both the NTK and QuadNTK on their own.
arXiv Detail & Related papers (2022-06-08T06:06:51Z) - Efficient and Generic 1D Dilated Convolution Layer for Deep Learning [52.899995651639436]
We introduce our efficient implementation of a generic 1D convolution layer covering a wide range of parameters.
It is optimized for x86 CPU architectures, in particular, for architectures containing Intel AVX-512 and AVX-512 BFloat16 instructions.
We demonstrate the performance of our optimized 1D convolution layer by utilizing it in the end-to-end neural network training with real genomics datasets.
arXiv Detail & Related papers (2021-04-16T09:54:30Z) - Involution: Inverting the Inherence of Convolution for Visual
Recognition [72.88582255910835]
We present a novel atomic operation for deep neural networks by inverting the principles of convolution, coined as involution.
The proposed involution operator could be leveraged as fundamental bricks to build the new generation of neural networks for visual recognition.
Our involution-based models improve the performance of convolutional baselines using ResNet-50 by up to 1.6% top-1 accuracy, 2.5% and 2.4% bounding box AP, and 4.7% mean IoU absolutely.
arXiv Detail & Related papers (2021-03-10T18:40:46Z) - Small Covers for Near-Zero Sets of Polynomials and Learning Latent
Variable Models [56.98280399449707]
We show that there exists an $\epsilon$-cover for $S$ of cardinality $M = (k/\epsilon)^{O_d(k^{1/d})}$.
Building on our structural result, we obtain significantly improved learning algorithms for several fundamental high-dimensional probabilistic models with hidden variables.
arXiv Detail & Related papers (2020-12-14T18:14:08Z) - Machine Learning Calabi-Yau Four-folds [0.0]
Hodge numbers depend non-trivially on the underlying manifold data.
We study supervised learning of the Hodge numbers $h^{1,1}$ and $h^{3,1}$ for Calabi-Yau four-folds.
arXiv Detail & Related papers (2020-09-05T14:54:25Z) - Machine learning for complete intersection Calabi-Yau manifolds: a
methodological study [0.0]
We revisit the question of predicting the Hodge numbers $h^{1,1}$ and $h^{2,1}$ of complete intersection Calabi-Yau manifolds using machine learning (ML).
For the old dataset, we obtain 97% (resp. 99%) accuracy for $h^{1,1}$ using a neural network inspired by the Inception model, with only 30% (resp. 70%) of the data used for training.
For the new one, a simple linear regression leads to almost 100% accuracy with 30% of the data for training.
arXiv Detail & Related papers (2020-07-30T19:43:49Z) - Inception Neural Network for Complete Intersection Calabi-Yau 3-folds [0.0]
We introduce a neural network inspired by Google's Inception model to compute the Hodge number $h^{1,1}$ of complete intersection Calabi-Yau (CICY) 3-folds.
This architecture largely improves the accuracy of the predictions over existing results, already reaching 97% accuracy with just 30% of the data for training.
arXiv Detail & Related papers (2020-07-27T08:56:19Z) - Backward Feature Correction: How Deep Learning Performs Deep
(Hierarchical) Learning [66.05472746340142]
This paper analyzes how multi-layer neural networks can perform hierarchical learning _efficiently_ and _automatically_ by SGD on the training objective.
We establish a new principle called "backward feature correction", where the errors in the lower-level features can be automatically corrected when training together with the higher-level layers.
arXiv Detail & Related papers (2020-01-13T17:28:29Z)