Inception Neural Network for Complete Intersection Calabi-Yau 3-folds
- URL: http://arxiv.org/abs/2007.13379v2
- Date: Tue, 16 Feb 2021 08:03:43 GMT
- Title: Inception Neural Network for Complete Intersection Calabi-Yau 3-folds
- Authors: Harold Erbin, Riccardo Finotello
- Abstract summary: We introduce a neural network inspired by Google's Inception model to compute the Hodge number $h^{1,1}$ of complete intersection Calabi-Yau (CICY) 3-folds.
This architecture greatly improves the accuracy of the predictions over existing results, already reaching 97% accuracy with just 30% of the data used for training.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a neural network inspired by Google's Inception model to compute
the Hodge number $h^{1,1}$ of complete intersection Calabi-Yau (CICY) 3-folds.
This architecture greatly improves the accuracy of the predictions over
existing results, already reaching 97% accuracy with just 30% of the data used
for training. Moreover, accuracy climbs to 99% when 80% of the data is used for
training. This demonstrates that neural networks are a valuable resource for
studying geometric aspects in both pure mathematics and string theory.
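As an illustration of the idea, here is a minimal PyTorch sketch of an Inception-style block applied to CICY configuration matrices: parallel convolutions scan the rows and columns of the (zero-padded) 12x15 matrix and their outputs are concatenated along the channel axis. The layer sizes and the regression head are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel row- and column-spanning convolutions over the
    configuration matrix, concatenated along the channel axis."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Kernels spanning whole rows/columns process the projective-space
        # and equation directions of the matrix in parallel.
        self.rows = nn.Conv2d(in_ch, out_ch, kernel_size=(12, 1), padding="same")
        self.cols = nn.Conv2d(in_ch, out_ch, kernel_size=(1, 15), padding="same")

    def forward(self, x):
        return torch.relu(torch.cat([self.rows(x), self.cols(x)], dim=1))

class CICYInception(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            InceptionBlock(1, 16),   # -> 32 channels
            InceptionBlock(32, 32),  # -> 64 channels
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 15, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # predicted h^{1,1}
        )

    def forward(self, x):
        return self.head(self.features(x))

model = CICYInception()
batch = torch.randn(8, 1, 12, 15)  # eight zero-padded configuration matrices
print(model(batch).shape)          # torch.Size([8, 1])
```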
Related papers
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- FR-NAS: Forward-and-Reverse Graph Predictor for Efficient Neural Architecture Search [10.699485270006601]
We introduce a novel Graph Neural Network (GNN) predictor for Neural Architecture Search (NAS).
This predictor encodes neural architectures as vector representations by combining both the conventional and inverse graph views.
The experimental results show a significant improvement in prediction accuracy, with a 3%-16% increase in Kendall-tau correlation.
arXiv Detail & Related papers (2024-04-24T03:22:49Z)
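For context, NAS performance predictors such as this one are commonly scored with the Kendall-tau rank correlation between predicted and measured accuracies. A minimal sketch using scipy, with placeholder numbers rather than data from the paper:

```python
from scipy.stats import kendalltau

# Placeholder numbers, not results from the paper.
true_acc = [0.71, 0.74, 0.68, 0.80, 0.77]  # measured accuracies
pred_acc = [0.70, 0.73, 0.69, 0.79, 0.78]  # predictor scores

# tau close to 1 means the predictor ranks architectures almost perfectly.
tau, p_value = kendalltau(true_acc, pred_acc)
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f})")
```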
- Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z)
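To make the setting concrete, here is a minimal sketch of magnitude pruning, the kind of pruning such analyses typically consider; the helper below is illustrative and not the authors' code:

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the `sparsity` fraction of smallest-magnitude entries."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight.clone()
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

w = torch.randn(64, 64)
w_pruned = magnitude_prune(w, sparsity=0.9)
print(f"nonzero fraction: {w_pruned.count_nonzero().item() / w.numel():.2f}")
```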
- Dive into Layers: Neural Network Capacity Bounding using Algebraic Geometry [55.57953219617467]
We show that the learnability of a neural network is directly related to its size.
We use Betti numbers to measure the topological complexity of the input data and the neural network.
We perform experiments on the real-world MNIST dataset, and the results verify our analysis and conclusions.
arXiv Detail & Related papers (2021-09-03T11:45:51Z)
- Deep multi-task mining Calabi-Yau four-folds [6.805575417034372]
We consider the dataset of all Calabi-Yau four-folds constructed as complete intersections in products of projective spaces.
With a 30% (80%) training ratio, we reach an accuracy of 100% for $h^{1,1}$ and 97% for $h^{2,1}$.
arXiv Detail & Related papers (2021-08-04T18:00:15Z)
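A minimal sketch of the multi-task setup in the spirit of this paper: a shared trunk reads the configuration matrix and two heads predict $h^{1,1}$ and $h^{2,1}$ jointly. The input shape and layer sizes are assumptions for illustration, not the authors' exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskCICY4(nn.Module):
    """Shared trunk with one regression head per Hodge number."""
    def __init__(self, in_features: int = 16 * 20):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_features, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.h11_head = nn.Linear(128, 1)
        self.h21_head = nn.Linear(128, 1)

    def forward(self, x):
        z = self.trunk(x)
        return self.h11_head(z), self.h21_head(z)

model = MultiTaskCICY4()
x = torch.randn(4, 1, 16, 20)  # four padded CICY4 configuration matrices
h11, h21 = model(x)
# Multi-task loss: sum of the per-task regression losses.
loss = F.mse_loss(h11, torch.randn(4, 1)) + F.mse_loss(h21, torch.randn(4, 1))
print(loss.item())
```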
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z)
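A minimal sketch of the underlying "line of networks" idea: the weights of two trained models are linearly interpolated, and each point on the segment is itself a network. This shows only the interpolation step, not the paper's joint training procedure.

```python
import copy
import torch
import torch.nn as nn

def interpolate_models(model_a: nn.Module, model_b: nn.Module, t: float) -> nn.Module:
    """Return a model whose weights are (1 - t) * A + t * B."""
    model_t = copy.deepcopy(model_a)
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    model_t.load_state_dict({k: (1 - t) * state_a[k] + t * state_b[k]
                             for k in state_a})
    return model_t

net_a, net_b = nn.Linear(10, 2), nn.Linear(10, 2)  # stand-ins for trained nets
midpoint = interpolate_models(net_a, net_b, t=0.5)
print(midpoint(torch.randn(1, 10)))
```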
- ResPerfNet: Deep Residual Learning for Regressional Performance Modeling of Deep Neural Networks [0.16311150636417257]
We propose a deep learning-based method, ResPerfNet, which trains a residual neural network on representative datasets obtained on the target platform to predict the performance of a deep neural network.
Our experimental results show that ResPerfNet can accurately predict the execution time of individual neural network layers and full network models on a variety of platforms.
arXiv Detail & Related papers (2020-12-03T03:02:42Z)
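A minimal sketch of a residual regression network in the spirit of ResPerfNet: fully connected blocks with skip connections regress a scalar execution time from per-layer descriptors. The feature layout and sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, dim))

    def forward(self, x):
        return torch.relu(x + self.body(x))  # skip connection

class ResPerfNetSketch(nn.Module):
    def __init__(self, n_features: int = 12, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, width), nn.ReLU(),
            ResidualBlock(width),
            ResidualBlock(width),
            nn.Linear(width, 1),  # predicted execution time
        )

    def forward(self, x):
        return self.net(x)

model = ResPerfNetSketch()
layer_descriptors = torch.randn(8, 12)  # e.g. kernel size, channels, stride...
print(model(layer_descriptors).shape)   # torch.Size([8, 1])
```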
- Machine learning for complete intersection Calabi-Yau manifolds: a methodological study [0.0]
We revisit the question of predicting the Hodge numbers $h^{1,1}$ and $h^{2,1}$ of complete intersection Calabi-Yau manifolds using machine learning (ML).
For the old dataset, we obtain 97% (resp. 99%) accuracy for $h^{1,1}$ with a neural network inspired by the Inception model, using only 30% (resp. 70%) of the data for training.
For the new dataset, a simple linear regression leads to almost 100% accuracy with 30% of the data for training.
arXiv Detail & Related papers (2020-07-30T19:43:49Z)
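A minimal sketch of the linear-regression baseline mentioned for the new dataset, using scikit-learn; random placeholder data stands in for the actual CICY configuration matrices and Hodge numbers.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for flattened configuration matrices and h^{1,1} labels.
X = rng.integers(0, 6, size=(500, 12 * 15)).astype(float)
y = X @ rng.normal(size=12 * 15) + rng.normal(scale=0.1, size=500)

# 30% of the data for training, mirroring the paper's low-data regime.
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.3,
                                                    random_state=0)
reg = LinearRegression().fit(X_train, y_train)
# "Accuracy" as exact match after rounding to the nearest integer.
acc = np.mean(np.round(reg.predict(X_test)) == np.round(y_test))
print(f"exact-match accuracy: {acc:.2f}")
```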
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architecture and training recipes jointly, guiding both sample selection and ranking.
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 yields a family of state-of-the-art compact neural networks that outperform both automatically and manually designed competitors.
arXiv Detail & Related papers (2020-06-03T05:20:21Z)
- Predicting Neural Network Accuracy from Weights [25.73213712719546]
We show experimentally that the accuracy of a trained neural network can be predicted surprisingly well by looking only at its weights.
We release a collection of 120k convolutional neural networks trained on four different datasets to encourage further research in this area.
arXiv Detail & Related papers (2020-02-26T13:06:14Z)
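A minimal sketch of this setup: summary statistics of a network's flattened weights serve as features for a regressor that predicts test accuracy. The "trained networks" below are synthetic placeholders, not the released 120k-model collection.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def weight_features(weights: np.ndarray) -> np.ndarray:
    """Simple summary statistics of a flattened weight vector."""
    return np.array([weights.mean(), weights.std(), np.abs(weights).mean(),
                     np.percentile(weights, 25), np.percentile(weights, 75)])

rng = np.random.default_rng(0)
scales = rng.uniform(0.5, 2.0, 200)
# 200 synthetic "trained networks": weight vectors whose accuracy is
# (artificially) tied to the weight scale so the regressor has signal.
W = [rng.normal(scale=s, size=1000) for s in scales]
X = np.stack([weight_features(w) for w in W])
y = 0.95 - 0.10 * scales + rng.normal(scale=0.01, size=200)

reg = GradientBoostingRegressor().fit(X[:150], y[:150])
print("predicted accuracies:", reg.predict(X[150:]).round(3)[:5])
```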