Point-Cloud Deep Learning of Porous Media for Permeability Prediction
- URL: http://arxiv.org/abs/2107.14038v1
- Date: Sun, 18 Jul 2021 22:59:21 GMT
- Title: Point-Cloud Deep Learning of Porous Media for Permeability Prediction
- Authors: Ali Kashefi and Tapan Mukerji
- Abstract summary: We propose a novel deep learning framework for predicting the permeability of porous media from their digital images.
We model the boundary between the solid matrix and the pore space as a point cloud and feed it as input to a neural network based on the PointNet architecture.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel deep learning framework for predicting the permeability of porous media from their digital images. Unlike convolutional neural networks, which take the whole image volume as input, we model the boundary between the solid matrix and the pore space as a point cloud and feed it as input to a neural network based on the PointNet architecture. This approach overcomes the memory restrictions of graphics processing units and their consequences for the choice of batch size and for convergence. Because it significantly reduces the size of the network inputs, the proposed methodology offers the freedom to select larger batch sizes than convolutional neural networks allow. Specifically, we use the classification branch of PointNet and adapt it for a regression task. As test cases, two- and three-dimensional synthetic digital rock images are considered. We investigate the effect of different components of our neural network on its performance. We compare our deep learning strategy with a convolutional neural network from various perspectives, in particular with respect to the maximum possible batch size. We inspect the generalizability of our network by predicting the permeability of real-world rock samples as well as synthetic digital rocks that are statistically different from the samples used during training. The network predicts the permeability of digital rocks a few thousand times faster than a lattice Boltzmann solver, with a high level of prediction accuracy.
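For intuition, the two ideas in the abstract can be sketched in a few lines: extract the solid/pore interface of a segmented image as a point cloud, then regress a scalar permeability with a PointNet-style network whose classification head ends in a single output. The sketch below is illustrative, not the authors' released code; the helper names (`extract_boundary_points`, `PointNetRegressor`), the layer widths, and the 1024-point cloud size are all assumptions.

```python
# Minimal sketch (assumptions noted above): boundary point cloud from a 2D
# binary image, fed to a PointNet-style regressor for a scalar permeability.
import numpy as np
import torch
import torch.nn as nn

def extract_boundary_points(binary_img: np.ndarray, n_points: int = 1024) -> np.ndarray:
    """Collect pixels on the solid/pore interface of a 2D binary image
    (1 = solid, 0 = pore) and subsample a fixed-size point cloud."""
    solid = binary_img.astype(bool)
    # A pixel is on the boundary if it is solid but has at least one pore neighbor.
    padded = np.pad(solid, 1, constant_values=True)
    all_neighbors_solid = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                           padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = solid & ~all_neighbors_solid
    pts = np.argwhere(boundary).astype(np.float32)
    pts /= max(binary_img.shape)                      # normalize coordinates to [0, 1]
    idx = np.random.choice(len(pts), n_points, replace=len(pts) < n_points)
    return pts[idx]

class PointNetRegressor(nn.Module):
    """PointNet-style classification branch with the final layer
    replaced by a single scalar output for permeability regression."""
    def __init__(self, dim: int = 2):
        super().__init__()
        # Shared per-point MLPs, implemented as 1x1 convolutions over points.
        self.features = nn.Sequential(
            nn.Conv1d(dim, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.BatchNorm1d(1024), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 1),                        # scalar permeability
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_points, dim) -> (batch, dim, n_points) for Conv1d.
        f = self.features(x.transpose(1, 2))
        g = f.max(dim=2).values                       # order-invariant max pooling
        return self.head(g).squeeze(-1)

# Usage on a toy image: a random binary "rock" just to exercise the shapes.
img = np.random.rand(128, 128) > 0.5
cloud = torch.from_numpy(extract_boundary_points(img)[None])  # (1, 1024, 2)
model = PointNetRegressor(dim=2)
print(model(cloud).shape)                             # torch.Size([1])
```

Because each sample is a fixed-size set of boundary points rather than a full voxel grid, the per-sample memory footprint is small, which is what allows the larger batch sizes the abstract claims relative to a volumetric CNN.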
Related papers
- An experimental comparative study of backpropagation and alternatives for training binary neural networks for image classification [1.0749601922718608]
Binary neural networks promise to reduce the size of deep neural network models.
They may allow the deployment of more powerful models on edge devices.
However, binary neural networks have proven difficult to train using the backpropagation-based gradient descent scheme.
arXiv Detail & Related papers (2024-08-08T13:39:09Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Unsupervised convolutional neural network fusion approach for change detection in remote sensing images [1.892026266421264]
We introduce a completely unsupervised shallow convolutional neural network (USCNN) fusion approach for change detection.
Our model has three features: the entire training process is conducted in an unsupervised manner, the network architecture is shallow, and the objective function is sparse.
Experimental results on four real remote sensing datasets indicate the feasibility and effectiveness of the proposed approach.
arXiv Detail & Related papers (2023-11-07T03:10:17Z)
- Neural Network Pruning as Spectrum Preserving Process [7.386663473785839]
We identify the close connection between matrix spectrum learning and neural network training for dense and convolutional layers.
We propose a matrix sparsification algorithm tailored for neural network pruning that yields better pruning results.
arXiv Detail & Related papers (2023-07-18T05:39:32Z)
- FuNNscope: Visual microscope for interactively exploring the loss landscape of fully connected neural networks [77.34726150561087]
We show how to explore high-dimensional landscape characteristics of neural networks.
We generalize observations on small neural networks to more complex systems.
An interactive dashboard opens up a number of possible applications.
arXiv Detail & Related papers (2022-04-09T16:41:53Z)
- Dive into Layers: Neural Network Capacity Bounding using Algebraic Geometry [55.57953219617467]
We show that the learnability of a neural network is directly related to its size.
We use Betti numbers to measure the topological geometric complexity of input data and the neural network.
We perform experiments on the real-world dataset MNIST, and the results verify our analysis and conclusion.
arXiv Detail & Related papers (2021-09-03T11:45:51Z)
- Compact representations of convolutional neural networks via weight pruning and quantization [63.417651529192014]
We propose a novel storage format for convolutional neural networks (CNNs) based on source coding and leveraging both weight pruning and quantization.
We achieve a reduction of space occupancy up to 0.6% on fully connected layers and 5.44% on the whole network, while performing at least as competitively as the baseline.
arXiv Detail & Related papers (2021-08-28T20:39:54Z)
- Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z)
- A Greedy Algorithm for Quantizing Neural Networks [4.683806391173103]
We propose a new computationally efficient method for quantizing the weights of pre-trained neural networks.
Our method deterministically quantizes layers in an iterative fashion with no complicated re-training required (a toy sketch of this greedy idea appears after this list).
arXiv Detail & Related papers (2020-10-29T22:53:10Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single-shot network pruning methods and Lottery-Ticket-type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
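Regarding the greedy quantization entry above, here is a minimal, data-free sketch of the general idea: quantize weights one at a time, diffusing the rounding error forward so that no retraining is needed. This is an illustrative simplification with an assumed ternary alphabet, not the paper's exact algorithm.

```python
# Toy greedy quantizer (assumption: ternary levels; not the paper's method).
import numpy as np

def greedy_quantize(weights: np.ndarray, levels: np.ndarray) -> np.ndarray:
    """Quantize a 1D weight vector to the given alphabet, greedily choosing
    each level to cancel the rounding error accumulated so far."""
    q = np.empty_like(weights)
    err = 0.0
    for i, w in enumerate(weights):
        target = w + err                        # compensate previous rounding error
        q[i] = levels[np.argmin(np.abs(levels - target))]
        err = target - q[i]                     # carry the residual forward
    return q

w = np.random.randn(8).astype(np.float32)
alphabet = np.array([-1.0, 0.0, 1.0], dtype=np.float32)   # ternary weights
print(greedy_quantize(w, alphabet))
```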
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.