Deep learning complete intersection Calabi-Yau manifolds
- URL: http://arxiv.org/abs/2311.11847v1
- Date: Mon, 20 Nov 2023 15:37:39 GMT
- Title: Deep learning complete intersection Calabi-Yau manifolds
- Authors: Harold Erbin, Riccardo Finotello
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We review advancements in deep learning techniques for complete intersection
Calabi-Yau (CICY) 3- and 4-folds, with the aim of better understanding how to
handle algebraic topological data with machine learning. We first discuss
methodological aspects and data analysis, before describing neural networks
architectures. Then, we describe the state-of-the-art accuracy in predicting
Hodge numbers. We include new results on extrapolating predictions from low to
high Hodge numbers, and conversely.
Related papers
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
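As a rough illustration of the data re-uploading idea, here is a minimal NumPy sketch under assumed conventions; it is not the paper's Qiskit implementation, and the function names and layer structure are hypothetical:

```python
import numpy as np

# Hypothetical sketch of single-qubit data re-uploading: the classical
# input x is encoded repeatedly as a rotation angle, interleaved with
# trainable rotations; the 2x2 unitaries are simulated directly.
def ry(angle):
    """Rotation about the Y axis as a real 2x2 unitary."""
    c, s = np.cos(angle / 2.0), np.sin(angle / 2.0)
    return np.array([[c, -s], [s, c]])

def reupload_circuit(x, thetas):
    """Apply R_y(theta_l) R_y(x) for each layer to |0>, return P(|0>)."""
    state = np.array([1.0, 0.0])
    for theta in thetas:
        state = ry(x) @ state      # re-upload the data point
        state = ry(theta) @ state  # trainable rotation
    return float(abs(state[0]) ** 2)

# With all angles zero the circuit is the identity, so P(|0>) stays 1;
# training would adjust thetas so P(|0>) encodes the class label.
p0 = reupload_circuit(0.0, thetas=[0.0, 0.0, 0.0])
```

Repeating the data encoding between trainable layers is what gives a single qubit enough expressivity to act as a classifier.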
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Look beyond labels: Incorporating functional summary information in Bayesian neural networks [11.874130244353253]
We present a simple approach to incorporate summary information about the predicted probability.
The available summary information is incorporated as augmented data and modeled with a Dirichlet process.
We show how the method can inform the model about task difficulty or class imbalance.
arXiv Detail & Related papers (2022-07-04T07:06:45Z)
- Rethinking Bayesian Learning for Data Analysis: The Art of Prior and Inference in Sparsity-Aware Modeling [20.296566563098057]
Sparse modeling for signal processing and machine learning has been a focus of scientific research for over two decades.
This article reviews some recent advances in incorporating sparsity-promoting priors into three popular data modeling tools.
arXiv Detail & Related papers (2022-05-28T00:43:52Z)
- Machine Learning Calabi-Yau Hypersurfaces [0.0]
We revisit the classic database of weighted $\mathbb{P}^4$s which admit Calabi-Yau 3-fold hypersurfaces.
Unsupervised techniques identify an unanticipated almost linear dependence of the topological data on the weights.
Supervised techniques are successful in predicting the topological parameters of the hypersurface from its weights with an accuracy of $R^2 > 95\%$.
arXiv Detail & Related papers (2021-12-12T23:17:31Z)
- Machine learning for complete intersection Calabi-Yau manifolds: a methodological study [0.0]
We revisit the question of predicting Hodge numbers $h^{1,1}$ and $h^{2,1}$ of complete intersection Calabi-Yau manifolds using machine learning (ML).
We obtain 97% (resp. 99%) accuracy for $h^{1,1}$ using a neural network inspired by the Inception model for the old dataset, using only 30% (resp. 70%) of the data for training.
For the new one, a simple linear regression leads to almost 100% accuracy with 30% of the data for training.
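As a hypothetical illustration of how such a near-perfect linear fit can arise, the following NumPy sketch trains ordinary least squares on synthetic integer-valued data standing in for CICY configuration matrices; the target here is exactly linear by construction, which is an assumption for the illustration, not a property of the real dataset:

```python
import numpy as np

# Hypothetical sketch, not the paper's code: mimic the "simple linear
# regression on 30% of the data" result with synthetic stand-ins for
# CICY configuration matrices and integer "Hodge numbers".
rng = np.random.default_rng(0)
n_samples, n_features = 500, 12

X = rng.integers(0, 5, size=(n_samples, n_features)).astype(float)
true_w = rng.integers(1, 4, size=n_features).astype(float)
y = X @ true_w  # integer-valued target, exactly linear by construction

# 30% of the samples for training, matching the paper's split
n_train = int(0.3 * n_samples)
X_train, y_train = X[:n_train], y[:n_train]
X_test, y_test = X[n_train:], y[n_train:]

# Ordinary least squares, then round predictions to the nearest integer
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
pred = np.rint(X_test @ w)
accuracy = float(np.mean(pred == y_test))
```

On this synthetic target the rounded least-squares predictions match exactly; on the real dataset the reported figure is "almost 100%" rather than exact.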
arXiv Detail & Related papers (2020-07-30T19:43:49Z)
- Inception Neural Network for Complete Intersection Calabi-Yau 3-folds [0.0]
We introduce a neural network inspired by Google's Inception model to compute the Hodge number $h^{1,1}$ of complete intersection Calabi-Yau (CICY) 3-folds.
This architecture substantially improves the accuracy of predictions over existing results, already reaching 97% accuracy with just 30% of the data for training.
arXiv Detail & Related papers (2020-07-27T08:56:19Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- Improved Code Summarization via a Graph Neural Network [96.03715569092523]
In general, source code summarization techniques take source code as input and output a natural language description.
We present an approach that uses a graph-based neural architecture that better matches the default structure of the AST to generate these summaries.
arXiv Detail & Related papers (2020-04-06T17:36:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.