Predicting First Passage Percolation Shapes Using Neural Networks
- URL: http://arxiv.org/abs/2006.14004v1
- Date: Wed, 24 Jun 2020 19:10:21 GMT
- Title: Predicting First Passage Percolation Shapes Using Neural Networks
- Authors: Sebastian Rosengren
- Abstract summary: We construct and fit a neural network able to adequately predict the shape of the set of discovered sites.
The main purpose is to give researchers a new tool for quickly getting an impression of the shape from the distribution of the passage times.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many random growth models have the property that the set of discovered sites,
scaled properly, converges to some deterministic set as time grows. Such
results are known as shape theorems. Typically, not much is known about the
shapes. For first passage percolation on $\mathbb{Z}^d$ we only know that the
shape is convex, compact, and inherits all the symmetries of $\mathbb{Z}^d$.
Using simulated data we construct and fit a neural network able to adequately
predict the shape of the set of discovered sites from the mean, standard
deviation, and percentiles of the distribution of the passage times. The
purpose of the note is two-fold. The main purpose is to give researchers a new
tool for \textit{quickly} getting an impression of the shape from the
distribution of the passage times -- instead of having to wait some time for
the simulations to run, as is the only available way today. The second purpose
of the note is simply to introduce modern machine learning methods into this
area of discrete probability, and a hope that it stimulates further research.
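As a rough illustration of the pipeline the abstract describes, the sketch below (not the authors' released code) summarises a passage-time distribution by its mean, standard deviation, and a few percentiles, and fits a small feed-forward network that maps those statistics to a parameterisation of the limit shape. The choice of percentiles, the output parameterisation (boundary radii in a fixed set of directions), the network size, and the placeholder training targets are all assumptions made here for illustration; in the paper, the targets come from simulated first passage percolation shapes.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def passage_time_features(sample, percentiles=(5, 25, 50, 75, 95)):
    """Mean, standard deviation, and selected percentiles of a passage-time sample."""
    return np.concatenate(([sample.mean(), sample.std(ddof=1)],
                           np.percentile(sample, percentiles)))

rng = np.random.default_rng(0)
n_train, n_angles = 200, 16                  # hypothetical sizes
X = np.empty((n_train, 7))                   # 2 moments + 5 percentiles
y = np.empty((n_train, n_angles))            # boundary radius in n_angles directions

for i in range(n_train):
    rate = rng.uniform(0.5, 2.0)
    times = rng.exponential(1.0 / rate, size=10_000)  # stand-in passage-time distribution
    X[i] = passage_time_features(times)
    # Placeholder target: NOT a simulated FPP shape; the paper obtains these
    # radii from simulations of the growth process.
    y[i] = rate * np.ones(n_angles)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)

# Predict the shape parameterisation for a new passage-time distribution.
new_times = rng.exponential(1.0, size=10_000)
print(model.predict(passage_time_features(new_times).reshape(1, -1)))
```

Once a network of this kind has been fitted on simulated shapes, evaluating it on new summary statistics is essentially instantaneous, which is the "quick impression" the abstract refers to, in contrast to waiting for fresh simulations to run.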
Related papers
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
arXiv Detail & Related papers (2024-06-19T01:57:21Z)
- A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
arXiv Detail & Related papers (2024-02-02T01:41:38Z)
- An unfolding method based on conditional Invertible Neural Networks (cINN) using iterative training [0.0]
Generative networks like invertible neural networks (INN) enable a probabilistic unfolding.
We introduce the iterative conditional INN (IcINN) for unfolding that adjusts for deviations between simulated training samples and data.
arXiv Detail & Related papers (2022-12-16T19:00:05Z)
- Unveiling the Sampling Density in Non-Uniform Geometric Graphs [69.93864101024639]
We consider graphs as geometric graphs: nodes are randomly sampled from an underlying metric space, and any pair of nodes is connected if their distance is less than a specified neighborhood radius.
In a social network, communities can be modeled as densely sampled areas, and hubs as nodes with a larger neighborhood radius.
We develop methods to estimate the unknown sampling density in a self-supervised fashion.
arXiv Detail & Related papers (2022-10-15T08:01:08Z)
- Machine Learning Trivializing Maps: A First Step Towards Understanding How Flow-Based Samplers Scale Up [0.6445605125467573]
We show that approximations of trivializing maps can be "machine-learned" by a class of invertible, differentiable models.
We conduct an exploratory scaling study using two-dimensional $\phi^4$ with up to $20^2$ lattice sites.
arXiv Detail & Related papers (2021-12-31T16:17:19Z)
- A singular Riemannian geometry approach to Deep Neural Networks II. Reconstruction of 1-D equivalence classes [78.120734120667]
We build the preimage of a point in the output manifold in the input space.
We focus for simplicity on the case of neural network maps from n-dimensional real spaces to (n - 1)-dimensional real spaces.
arXiv Detail & Related papers (2021-12-17T11:47:45Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- Refinement of Predicted Missing Parts Enhance Point Cloud Completion [62.997667081978825]
Point cloud completion is the task of predicting complete geometry from partial observations using a point set representation for a 3D shape.
Previous approaches propose neural networks to directly estimate the whole point cloud through encoder-decoder models fed by the incomplete point set.
This paper proposes an end-to-end neural network architecture that focuses on computing the missing geometry and merging the known input and the predicted point cloud.
arXiv Detail & Related papers (2020-10-08T22:01:23Z)
- Community detection and percolation of information in a geometric setting [5.027571997864707]
We take the first steps towards generalizing the theory of block models in the sparse regime.
We consider a geometric random graph over a homogeneous metric space where the probability of two vertices to be connected is an arbitrary function of the distance.
We define a geometric counterpart of the model of flow of information on trees, due to Mossel and Peres.
arXiv Detail & Related papers (2020-06-28T11:23:17Z)
- On the Preservation of Spatio-temporal Information in Machine Learning Applications [0.0]
In machine learning applications, each data attribute is typically assumed to be independent of the others.
Shift-invariant $k$-means is proposed in a novel framework with the help of sparse representations.
Experiments suggest that feature extraction as a simulation of shallow neural networks provides slightly better performance than Gabor dictionary learning.
arXiv Detail & Related papers (2020-06-15T12:22:36Z)