Deep learning complete intersection Calabi-Yau manifolds
- URL: http://arxiv.org/abs/2311.11847v1
- Date: Mon, 20 Nov 2023 15:37:39 GMT
- Title: Deep learning complete intersection Calabi-Yau manifolds
- Authors: Harold Erbin, Riccardo Finotello
- Abstract summary: We review advancements in deep learning techniques for complete intersection Calabi-Yau (CICY) 3- and 4-folds.
We first discuss methodological aspects and data analysis, before describing neural network architectures.
We include new results on extrapolating predictions from low to high Hodge numbers, and conversely.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We review advancements in deep learning techniques for complete intersection
Calabi-Yau (CICY) 3- and 4-folds, with the aim of understanding better how to
handle algebraic topological data with machine learning. We first discuss
methodological aspects and data analysis, before describing neural network
architectures. Then, we describe the state-of-the-art accuracy in predicting
Hodge numbers. We include new results on extrapolating predictions from low to
high Hodge numbers, and conversely.
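As a toy illustration of the task described in the abstract, the sketch below fits the simplest baseline the literature cites for the favourable CICY dataset: a least-squares linear regression from a flattened configuration matrix to a Hodge number. The data here is synthetic (random integer matrices with a linear toy target standing in for $h^{1,1}$), not the real CICY dataset, and the fixed 12x15 padding is an illustrative assumption.

```python
import numpy as np

# Hypothetical illustration: CICY 3-folds are encoded as integer
# configuration matrices, padded here to a fixed 12x15 shape. A model
# maps the flattened matrix to a Hodge number such as h^{1,1}.
rng = np.random.default_rng(0)

n_samples, rows, cols = 200, 12, 15
X = rng.integers(0, 6, size=(n_samples, rows, cols)).astype(float)
X_flat = X.reshape(n_samples, -1)

# Toy target standing in for h^{1,1}: a linear functional of the matrix
# entries plus small noise (the real labels come from algebraic geometry).
w_true = rng.normal(size=X_flat.shape[1])
y = X_flat @ w_true + 0.1 * rng.normal(size=n_samples)

# Least-squares linear regression: the baseline reported to reach almost
# 100% accuracy on the favourable dataset with 30% of the data.
A = np.hstack([X_flat, np.ones((n_samples, 1))])  # append bias column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

# Coefficient of determination as a quick fit diagnostic.
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

In practice the reviewed papers replace this linear map with convolutional or Inception-style networks to capture interactions between matrix entries; the point of the sketch is only the input representation and the regression framing.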
Related papers
- Information plane and compression-gnostic feedback in quantum machine learning [0.0]
The information plane has been proposed as an analytical tool for studying the learning dynamics of neural networks.
We study how insight into how much the model compresses the input data can be used to improve a learning algorithm.
We benchmark the proposed learning algorithms on several classification and regression tasks using variational quantum circuits.
arXiv Detail & Related papers (2024-11-04T17:38:46Z) - Persistent de Rham-Hodge Laplacians in Eulerian representation for manifold topological learning [7.0103981121698355]
We introduce persistent de Rham-Hodge Laplacian, or persistent Hodge Laplacian, for manifold topological learning.
Our PHLs are constructed in the Eulerian representation via structure-preserving Cartesian grids.
As a proof-of-principle application, we consider the prediction of protein-ligand binding affinities with two benchmark datasets.
arXiv Detail & Related papers (2024-08-01T01:15:52Z) - Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z) - Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z) - Look beyond labels: Incorporating functional summary information in Bayesian neural networks [11.874130244353253]
We present a simple approach to incorporate summary information about the predicted probability.
The available summary information is incorporated as augmented data and modeled with a Dirichlet process.
We show how the method can inform the model about task difficulty or class imbalance.
arXiv Detail & Related papers (2022-07-04T07:06:45Z) - Machine learning for complete intersection Calabi-Yau manifolds: a methodological study [0.0]
We revisit the question of predicting Hodge numbers $h^{1,1}$ and $h^{2,1}$ of complete intersection Calabi-Yau manifolds using machine learning (ML).
We obtain 97% (resp. 99%) accuracy for $h^{1,1}$ using a neural network inspired by the Inception model for the old dataset, using only 30% (resp. 70%) of the data for training.
For the new dataset, a simple linear regression leads to almost 100% accuracy with 30% of the data for training.
arXiv Detail & Related papers (2020-07-30T19:43:49Z) - Inception Neural Network for Complete Intersection Calabi-Yau 3-folds [0.0]
We introduce a neural network inspired by Google's Inception model to compute the Hodge number $h^{1,1}$ of complete intersection Calabi-Yau (CICY) 3-folds.
This architecture largely improves prediction accuracy over existing results, already reaching 97% accuracy with just 30% of the data for training.
arXiv Detail & Related papers (2020-07-27T08:56:19Z) - Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z) - Improved Code Summarization via a Graph Neural Network [96.03715569092523]
In general, source code summarization techniques take source code as input and output a natural language description.
We present an approach that uses a graph-based neural architecture that better matches the default structure of the AST to generate these summaries.
arXiv Detail & Related papers (2020-04-06T17:36:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.