Equivariant geometric learning for digital rock physics: estimating
formation factor and effective permeability tensors from Morse graph
- URL: http://arxiv.org/abs/2104.05608v1
- Date: Mon, 12 Apr 2021 16:28:25 GMT
- Title: Equivariant geometric learning for digital rock physics: estimating
formation factor and effective permeability tensors from Morse graph
- Authors: Chen Cai, Nikolaos Vlassis, Lucas Magee, Ran Ma, Zeyu Xiong, Bahador
Bahmani, Teng-Fong Wong, Yusu Wang, WaiChing Sun
- Abstract summary: We present an SE(3)-equivariant graph neural network (GNN) approach that directly predicts formation factor and effective permeability from micro-CT images.
FFT solvers are established to compute both the formation factor and effective permeability, while the topology and geometry of the pore space are represented by a persistence-based Morse graph.
- Score: 7.355408124040856
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an SE(3)-equivariant graph neural network (GNN) approach
that directly predicts the formation factor and effective permeability from
micro-CT images. FFT solvers are established to compute both the formation
factor and effective permeability, while the topology and geometry of the pore
space are represented by a persistence-based Morse graph. Together, they
constitute the database for training, validating, and testing the neural
networks. While the graph and Euclidean convolutional approaches both employ
neural networks to generate a low-dimensional latent space representing the
features of the microstructures for forward predictions, the SE(3)-equivariant
neural network is found to generate more accurate predictions, especially when
the training data is limited. Numerical experiments have also shown that the
new SE(3) approach leads to predictions that fulfill the material frame
indifference whereas the predictions from classical convolutional neural
networks (CNN) may suffer from spurious dependence on the coordinate system of
the training data. Comparisons among predictions from the trained CNN and from
graph neural networks trained with and without the equivariant constraint
indicate that the equivariant GNN performs better than both the CNN and the GNN
trained without the equivariant constraint.
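To make the material frame indifference requirement concrete, here is a minimal sketch of the equivariance check: rotating the input volume by R should rotate the predicted permeability tensor as K -> R K R^T. The predictor below is a hypothetical stand-in (a gradient structure tensor, which is exactly equivariant), not the paper's trained network.

```python
import numpy as np

def rotation_z_90():
    # 90-degree rotation about the z-axis, matching np.rot90 on axes (0, 1).
    return np.array([[0., -1., 0.],
                     [1.,  0., 0.],
                     [0.,  0., 1.]])

def predict_permeability(volume):
    # Hypothetical stand-in for a trained model: the gradient structure
    # tensor of the volume, a symmetric 3x3 matrix.
    g = np.stack(np.gradient(volume.astype(float))).reshape(3, -1)
    return g @ g.T / g.shape[1]

rng = np.random.default_rng(0)
volume = rng.random((16, 16, 16))             # stand-in for a micro-CT image
rotated = np.rot90(volume, k=1, axes=(0, 1))  # the same volume, rotated about z

R = rotation_z_90()
K = predict_permeability(volume)
K_rot = predict_permeability(rotated)

# Material frame indifference: K(R x) must equal R K(x) R^T.
print(np.allclose(K_rot, R @ K @ R.T))        # True for an equivariant predictor
```

A non-equivariant CNN applied to the raw voxels would generally fail this test, which is the spurious coordinate-system dependence the abstract refers to.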
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs)
Our approach develops within a recently introduced framework for learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
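The summary does not spell out the LENN construction; one standard route to lattice equivariance is group averaging, sketched below for the 8-element symmetry group D4 of a square lattice (an assumption for illustration, not necessarily what LENNs do):

```python
import numpy as np

def d4_orbit(kernel):
    # All 8 images of a 2D kernel under the square lattice's symmetry group
    # D4: rotations by 0/90/180/270 degrees, with and without a reflection.
    rots = [np.rot90(kernel, k) for k in range(4)]
    return rots + [np.fliplr(r) for r in rots]

def symmetrize(kernel):
    # Group averaging: the mean over the orbit is invariant under every D4
    # element, so a stencil built from it respects the lattice symmetries.
    return np.mean(d4_orbit(kernel), axis=0)

rng = np.random.default_rng(0)
w_sym = symmetrize(rng.standard_normal((3, 3)))

# Check: the symmetrized kernel is unchanged by a 90-degree rotation.
print(np.allclose(w_sym, np.rot90(w_sym)))  # True
```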
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
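As a minimal sketch of the parameters-as-graph idea (an illustration under assumed shapes, not the paper's actual encoding), the snippet below turns a small MLP into a graph whose nodes are neurons carrying biases as features and whose directed edges carry the weights:

```python
import numpy as np

# Hypothetical 2-layer MLP: 3 inputs -> 4 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), rng.standard_normal(4)),
          (rng.standard_normal((2, 4)), rng.standard_normal(2))]

# One node per neuron (node feature = bias, zero for input neurons),
# one directed edge per weight.
offsets = [0, 3, 7]            # index of each layer's first node
node_feats = [0.0, 0.0, 0.0]   # input neurons carry no bias
edges = []
for l, (W, b) in enumerate(layers):
    node_feats.extend(b)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            edges.append((offsets[l] + j, offsets[l + 1] + i, W[i, j]))

print(len(node_feats), len(edges))  # 9 neurons, 12 + 8 = 20 weighted edges
```

A GNN operating on such graphs can, in principle, process networks of different widths and depths with a single model, which is the architecture diversity the summary refers to.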
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Neural Tangent Kernels Motivate Graph Neural Networks with Cross-Covariance Graphs [94.44374472696272]
We investigate NTKs and alignment in the context of graph neural networks (GNNs)
Our results establish theoretical guarantees on the optimality of the alignment for a two-layer GNN.
These guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and the output data.
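A hedged sketch of the central object (hypothetical data and shapes): form the empirical cross-covariance between input and output features, symmetrize it, and use it as the graph shift operator of a graph convolution.

```python
import numpy as np

# Hypothetical dataset: n samples with d input and d output features.
rng = np.random.default_rng(0)
n, d = 500, 8
X = rng.standard_normal((n, d))
Y = X @ rng.standard_normal((d, d)) + 0.1 * rng.standard_normal((n, d))

# Empirical cross-covariance between input and output features.
Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
C = Xc.T @ Yc / (n - 1)

# Symmetrize so it can serve as the shift operator of an undirected graph.
S = 0.5 * (C + C.T)

# One graph-convolution step with this shift operator (feature diffusion).
H = X @ S
print(H.shape)  # (500, 8)
```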
arXiv Detail & Related papers (2023-10-16T19:54:21Z)
- Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- Variational Tensor Neural Networks for Deep Learning [0.0]
We propose an integration of tensor networks (TN) into deep neural networks (NNs)
This, in turn, results in a scalable tensor neural network (TNN) architecture capable of efficient training over a large parameter space.
We validate the accuracy and efficiency of our method by designing TNN models and providing benchmark results for linear and non-linear regressions, data classification and image recognition on MNIST handwritten digits.
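As a minimal sketch of the idea, the layer below replaces a dense weight matrix with a rank-r factorization, the simplest possible tensor network; the paper's TNN architecture is more elaborate, and all shapes here are illustrative assumptions.

```python
import numpy as np

d_in, d_out, rank = 784, 256, 32

rng = np.random.default_rng(0)
A = rng.standard_normal((d_in, rank)) / np.sqrt(d_in)   # first core
B = rng.standard_normal((rank, d_out)) / np.sqrt(rank)  # second core

def tn_linear(x):
    # Equivalent to x @ (A @ B), but stores d_in*rank + rank*d_out
    # parameters instead of d_in*d_out, and never forms the dense matrix.
    return (x @ A) @ B

x = rng.standard_normal((16, d_in))  # a batch of 16 inputs
print(tn_linear(x).shape)            # (16, 256)
```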
arXiv Detail & Related papers (2022-11-26T20:24:36Z)
- Parameter Convex Neural Networks [13.42851919291587]
We propose the exponential multilayer neural network (EMLP) which is convex with regard to the parameters of the neural network under some conditions.
In later experiments, we use the same architecture to build the exponential graph convolutional network (EGCN) and evaluate it on a graph classification dataset.
arXiv Detail & Related papers (2022-06-11T16:44:59Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- CCasGNN: Collaborative Cascade Prediction Based on Graph Neural Networks [0.49269463638915806]
Cascade prediction aims at modeling information diffusion in the network.
Recent efforts have been devoted to combining network structure and sequence features via graph neural networks and recurrent neural networks.
We propose a novel method CCasGNN considering the individual profile, structural features, and sequence information.
arXiv Detail & Related papers (2021-12-07T11:37:36Z)
- Persistent Homology Captures the Generalization of Neural Networks Without A Validation Set [0.0]
We suggest studying the training of neural networks with Algebraic Topology, specifically Persistent Homology.
Using simplicial complex representations of neural networks, we study the PH diagram distance evolution on the neural network learning process.
Results show that the PH diagram distance between consecutive neural network states correlates with the validation accuracy.
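A rough sketch of the kind of quantity involved, using the ripser and persim packages (assumed installed) and treating flattened parameters as a point cloud; the paper instead builds simplicial complex representations of the network itself:

```python
import numpy as np
from ripser import ripser      # pip install ripser
from persim import bottleneck  # pip install persim

# Hypothetical stand-ins for two consecutive network states during training:
# each state's parameters viewed as a point cloud in R^3.
rng = np.random.default_rng(0)
state_t = rng.standard_normal((100, 3))
state_t1 = state_t + 0.05 * rng.standard_normal((100, 3))

# Degree-1 persistence diagrams of the two point clouds.
dgm_t = ripser(state_t)['dgms'][1]
dgm_t1 = ripser(state_t1)['dgms'][1]

# Distance between consecutive diagrams; the summary reports that this kind
# of distance correlates with validation accuracy during training.
print(bottleneck(dgm_t, dgm_t1))
```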
arXiv Detail & Related papers (2021-05-31T09:17:31Z)
- How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks [80.55378250013496]
We study how neural networks trained by gradient descent extrapolate what they learn outside the support of the training distribution.
Graph Neural Networks (GNNs) have shown some success in more complex tasks.
arXiv Detail & Related papers (2020-09-24T17:48:59Z)