Predicting Porosity, Permeability, and Tortuosity of Porous Media from
Images by Deep Learning
- URL: http://arxiv.org/abs/2007.02820v1
- Date: Mon, 6 Jul 2020 15:27:14 GMT
- Title: Predicting Porosity, Permeability, and Tortuosity of Porous Media from
Images by Deep Learning
- Authors: Krzysztof M. Graczyk and Maciej Matyka
- Abstract summary: Convolutional neural networks (CNN) are utilized to encode the relation between initial configurations of obstacles and three fundamental quantities in porous media.
It is demonstrated that the CNNs are able to predict the porosity, permeability, and tortuosity with good accuracy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNN) are utilized to encode the relation
between initial configurations of obstacles and three fundamental quantities in
porous media: porosity ($\varphi$), permeability ($k$), and tortuosity ($T$).
Two-dimensional systems with obstacles are considered. The fluid flow through a
porous medium is simulated with the lattice Boltzmann method. It is
demonstrated that the CNNs are able to predict the porosity, permeability, and
tortuosity with good accuracy. Using the CNN models, the relation between $T$
and $\varphi$ has been reproduced and compared with the empirical estimate. The
analysis has been performed for systems with $\varphi \in (0.37, 0.99)$, which
covers a span of five orders of magnitude in permeability, $k \in (0.78,
2.1\times 10^5)$, and tortuosity $T \in (1.03, 2.74)$.
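For context, the three quantities named in the abstract are normally obtained from the simulated flow field. The definitions below are standard porous-media conventions, given here only as a hedged reminder; the exact averaging choices used by the authors may differ.

```latex
% Hedged, standard definitions (assumed; the paper may use other conventions):
%   \varphi : porosity, the void fraction of the sample
%   k       : Darcy permeability from the mean velocity and pressure gradient
%   T       : hydraulic tortuosity from the simulated velocity field
\varphi = \frac{V_{\mathrm{pore}}}{V_{\mathrm{total}}}, \qquad
k = -\,\frac{\mu \,\langle u_x \rangle}{\partial p / \partial x}, \qquad
T = \frac{\langle \lvert \mathbf{u} \rvert \rangle}{\langle u_x \rangle},
```

where $\mathbf{u}$ is the velocity field from the lattice Boltzmann simulation, $\mu$ the dynamic viscosity, $x$ the macroscopic flow direction, and $\langle\cdot\rangle$ an average over the sample.

Likewise, a minimal sketch of the kind of CNN regressor the abstract describes, written in PyTorch purely for illustration: the layer counts, the 128x128 input resolution, and the choice to regress $\log k$ (reasonable given the five-decade range of $k$) are assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' code): a CNN that maps a 2D binary obstacle
# image to three scalars (porosity, log-permeability, tortuosity).
import torch
import torch.nn as nn

class PorousMediaCNN(nn.Module):
    def __init__(self, n_outputs: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_outputs),             # [phi, log k, T]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

if __name__ == "__main__":
    model = PorousMediaCNN()
    # Batch of 4 random binary obstacle maps, 1 channel, 128x128 pixels.
    images = (torch.rand(4, 1, 128, 128) > 0.3).float()
    print(model(images).shape)  # torch.Size([4, 3])
```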
Related papers
- Beyond Closure Models: Learning Chaotic-Systems via Physics-Informed Neural Operators [78.64101336150419]
Predicting the long-term behavior of chaotic systems is crucial for various applications such as climate modeling.
An alternative to such a fully-resolved simulation is to use a coarse grid and then correct its errors through a learned model.
We propose an alternative end-to-end learning approach using a physics-informed neural operator (PINO) that overcomes this limitation.
arXiv Detail & Related papers (2024-08-09T17:05:45Z)
- Bayesian Inference with Deep Weakly Nonlinear Networks [57.95116787699412]
We show at a physics level of rigor that Bayesian inference with a fully connected neural network is solvable.
We provide techniques to compute the model evidence and posterior to arbitrary order in $1/N$ and at arbitrary temperature.
arXiv Detail & Related papers (2024-05-26T17:08:04Z)
- Matching the Statistical Query Lower Bound for k-sparse Parity Problems with Stochastic Gradient Descent [83.85536329832722]
We show that stochastic gradient descent (SGD) can efficiently solve the $k$-sparse parity problem on a $d$-dimensional hypercube.
We then demonstrate that a neural network trained with SGD can solve the $k$-sparse parity problem with small statistical error.
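For concreteness, a minimal sketch of the $k$-sparse parity target such a network is trained to fit; the choice of support coordinates and the data generation below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of the k-sparse parity problem on the d-dimensional hypercube:
# the label is the product of k fixed coordinates of x in {-1, +1}^d.
import torch

def k_sparse_parity(x: torch.Tensor, support: list) -> torch.Tensor:
    """x: (batch, d) tensor with entries in {-1, +1}; returns (batch,) labels."""
    return x[:, support].prod(dim=1)

d, k, n = 20, 3, 8
support = list(range(k))                            # illustrative choice of the k coordinates
x = torch.randint(0, 2, (n, d)).float() * 2 - 1     # uniform samples from {-1, +1}^d
y = k_sparse_parity(x, support)
print(y)                                            # entries are +1 or -1
```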
arXiv Detail & Related papers (2024-04-18T17:57:53Z)
- Combining Image- and Geometric-based Deep Learning for Shape Regression:
A Comparison to Pixel-level Methods for Segmentation in Chest X-Ray [0.07143413923310668]
We propose a novel hybrid method that combines a lightweight CNN backbone with a geometric neural network (Point Transformer) for shape regression.
We include the nnU-Net as an upper baseline, which has $3.7\times$ more trainable parameters than our proposed method.
arXiv Detail & Related papers (2024-01-15T09:03:50Z)
- Predicting the wall-shear stress and wall pressure through convolutional
neural networks [1.95992742032823]
This study aims to assess the capability of convolution-based neural networks to predict wall quantities in a turbulent open channel flow.
The predictions from a fully-convolutional network (FCN) are compared against those from a proposed R-Net architecture.
The R-Net is also able to predict the wall-shear-stress and wall-pressure fields using the velocity-fluctuation fields at $y^+ = 50$.
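As a hedged illustration of the image-to-image setup described above (layer counts, channel sizes, and the two-channel output are assumptions; this is neither the paper's FCN nor its R-Net), a fully convolutional network mapping velocity-fluctuation fields on a wall-parallel plane to wall fields could look like:

```python
# Hedged sketch: a small fully convolutional network that maps 3-channel
# velocity-fluctuation fields (u', v', w') sampled on a wall-parallel plane
# to 2 wall fields (wall-shear stress, wall pressure). Sizes are illustrative.
import torch
import torch.nn as nn

wall_fcn = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(32, 2, kernel_size=5, padding=2),   # output: [tau_w, p_w] fields
)

fluctuations = torch.randn(1, 3, 64, 96)          # batch, channels, z, x grid
print(wall_fcn(fluctuations).shape)               # torch.Size([1, 2, 64, 96])
```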
arXiv Detail & Related papers (2023-03-01T18:03:42Z)
- Improved techniques for deterministic l2 robustness [63.34032156196848]
Training convolutional neural networks (CNNs) with a strict 1-Lipschitz constraint under the $l_2$ norm is useful for adversarial robustness, interpretable gradients and stable training.
We introduce a procedure to certify robustness of 1-Lipschitz CNNs by replacing the last linear layer with a 1-hidden-layer MLP.
We significantly advance the state-of-the-art for standard and provable robust accuracies on CIFAR-10 and CIFAR-100.
arXiv Detail & Related papers (2022-11-15T19:10:12Z)
- Scalable Lipschitz Residual Networks with Convex Potential Flows [120.27516256281359]
We show that using convex potentials in a residual network gradient flow provides a built-in $1$-Lipschitz transformation.
A comprehensive set of experiments on CIFAR-10 demonstrates the scalability of our architecture and the benefit of our approach for $\ell_2$ provable defenses.
arXiv Detail & Related papers (2021-10-25T07:12:53Z)
- The Rate of Convergence of Variation-Constrained Deep Neural Networks [35.393855471751756]
We show that a class of variation-constrained neural networks can achieve the near-parametric rate $n^{-1/2+\delta}$ for an arbitrarily small constant $\delta$.
The result indicates that the neural function space needed for approximating smooth functions may not be as large as what is often perceived.
arXiv Detail & Related papers (2021-06-22T21:28:00Z)
- Fundamental tradeoffs between memorization and robustness in random
features and neural tangent regimes [15.76663241036412]
We prove for a large class of activation functions that, if the model memorizes even a fraction of the training data, then its Sobolev seminorm is lower-bounded.
Experiments reveal, for the first time, a multiple-descent phenomenon in the robustness of the min-norm interpolator.
arXiv Detail & Related papers (2021-06-04T17:52:50Z)
- Function approximation by deep neural networks with parameters $\{0,\pm
\frac{1}{2}, \pm 1, 2\}$ [91.3755431537592]
It is shown that $C_\beta$-smooth functions can be approximated by neural networks with parameters $\{0, \pm\frac{1}{2}, \pm 1, 2\}$.
The depth, width, and number of active parameters of the constructed networks have, up to a logarithmic factor, the same dependence on the approximation error as networks with parameters in $[-1,1]$.
arXiv Detail & Related papers (2021-03-15T19:10:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.