Deep learning for diffusion in porous media
- URL: http://arxiv.org/abs/2304.02104v2
- Date: Tue, 6 Jun 2023 07:27:32 GMT
- Title: Deep learning for diffusion in porous media
- Authors: Krzysztof M. Graczyk, Dawid Strzelczyk, Maciej Matyka
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We adopt convolutional neural networks (CNN) to predict the basic properties
of the porous media. Two different media types are considered: one mimics the
sand packings, and the other mimics the systems derived from the extracellular
space of biological tissues. The Lattice Boltzmann Method is used to obtain the
labeled data necessary for performing supervised learning. We distinguish two
tasks. In the first, networks based on the analysis of the system's geometry
predict porosity and effective diffusion coefficient. In the second, networks
reconstruct the concentration map. In the first task, we propose two types of
CNN models: the C-Net and the encoder part of the U-Net. Both networks are
modified by adding a self-normalization module [Graczyk et al., Sci
Rep 12, 10583 (2022)]. The models predict with reasonable accuracy, but only
within the data type they are trained on. For instance, the model trained on
sand packings-like samples overshoots or undershoots for biological-like
samples. In the second task, we propose using the U-Net architecture. It
accurately reconstructs the concentration fields. In contrast to the first
task, the network trained on one data type works well for the other. For
instance, the model trained on sand packings-like samples works perfectly on
biological-like samples. Finally, for both data types, we fit the exponents
of Archie's law to find the tortuosity, which describes the dependence of the
effective diffusion coefficient on porosity.
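
To make the first task concrete, here is a minimal, hypothetical sketch of a geometry-to-properties regressor: a plain convolutional encoder in PyTorch that maps a binary pore-geometry image to (porosity, effective diffusion coefficient). It is not the paper's C-Net, the self-normalization module [Graczyk et al., Sci Rep 12, 10583 (2022)] is omitted, and all layer sizes are illustrative assumptions.

    # Hypothetical sketch, not the authors' C-Net: a plain CNN encoder that
    # regresses (porosity, D_eff) from a binary geometry image.
    import torch
    import torch.nn as nn

    class GeometryRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),     # global pooling over the geometry
            )
            self.head = nn.Linear(64, 2)     # outputs: porosity, D_eff

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    model = GeometryRegressor()
    print(model(torch.rand(1, 1, 128, 128)).shape)  # torch.Size([1, 2])

The final fitting step can likewise be sketched. Assuming the common form of Archie's law for diffusion, D_eff/D_0 = phi^m, together with the convention D_eff = D_0 * phi / tau (so that tau = phi^(1 - m)), a least-squares fit of the exponent m looks as follows; the porosity and diffusion values are synthetic placeholders, not data from the paper.

    # Hedged sketch: fit the Archie's-law exponent m in D_eff/D0 = phi**m and
    # derive tortuosity tau = phi**(1 - m); conventions for tau vary.
    import numpy as np
    from scipy.optimize import curve_fit

    def archie(phi, m):
        return phi ** m

    phi = np.array([0.45, 0.55, 0.65, 0.75, 0.85])    # porosities (illustrative)
    d_rel = np.array([0.28, 0.40, 0.53, 0.66, 0.80])  # D_eff / D0 (illustrative)

    (m,), _ = curve_fit(archie, phi, d_rel, p0=[1.5])
    tau = phi ** (1.0 - m)
    print(f"fitted exponent m = {m:.3f}")
    print("tortuosity per sample:", np.round(tau, 3))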
Related papers
- DeNetDM: Debiasing by Network Depth Modulation
We present DeNetDM, a novel debiasing method that uses network depth modulation as a way of developing robustness to spurious correlations.
Our method requires no bias annotations or explicit data augmentation while performing on par with approaches that require either or both.
We demonstrate that DeNetDM outperforms existing debiasing techniques on both synthetic and real-world datasets by 5%.
arXiv Detail & Related papers (2024-03-28T22:17:19Z)
- BEND: Bagging Deep Learning Training Based on Efficient Neural Network Diffusion
We propose a Bagging deep learning training algorithm based on Efficient Neural network Diffusion (BEND).
Our approach is simple but effective, first using multiple trained models' weights and biases as inputs to train an autoencoder and a latent diffusion model.
Our proposed BEND algorithm can consistently outperform the mean and median accuracies of both the original trained model and the diffused model.
arXiv Detail & Related papers (2024-03-23T08:40:38Z)
- Sampling weights of deep neural networks
We introduce a probability distribution, combined with an efficient sampling algorithm, for weights and biases of fully-connected neural networks.
In a supervised learning context, no iterative optimization or gradient computations of internal network parameters are needed.
We prove that sampled networks are universal approximators; a minimal random-feature sketch of the idea follows below.
arXiv Detail & Related papers (2023-06-29T10:13:36Z)
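
As noted in the entry above, here is a generic random-feature sketch in the spirit of sampled networks: hidden weights and biases are drawn from a distribution and only the linear readout is solved, with no iterative optimization. The Gaussian sampling distribution and the toy regression target are assumptions, not the paper's specific algorithm.

    # Generic random-feature sketch: sample hidden weights, fit readout only.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(np.pi * X[:, 0])                   # toy regression target

    W = rng.normal(size=(1, 512))                 # sampled hidden weights
    b = rng.normal(size=512)                      # sampled hidden biases
    H = np.tanh(X @ W + b)                        # random features

    coef, *_ = np.linalg.lstsq(H, y, rcond=None)  # solve the linear readout
    print("train MSE:", np.mean((H @ coef - y) ** 2))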
- OCD: Learning to Overfit with Conditional Diffusion Models
We present a dynamic model in which the weights are conditioned on an input sample x.
We learn to match those weights that would be obtained by finetuning a base model on x and its label y.
arXiv Detail & Related papers (2022-10-02T09:42:47Z)
- An Empirical Investigation of Model-to-Model Distribution Shifts in Trained Convolutional Filters
We present the first empirical results from our ongoing investigation of distribution shifts in image data used for various computer vision tasks.
Instead of analyzing the original training and test data, we propose to study shifts in the learned weights of trained models.
arXiv Detail & Related papers (2022-01-20T21:48:12Z)
- Reasoning-Modulated Representations
We study a common setting where our task is not purely opaque.
Our approach paves the way for a new class of data-efficient representation learning.
arXiv Detail & Related papers (2021-07-19T13:57:13Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators; a simplified sketch of the activation-density idea follows below.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
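
A simplified sketch of the activation-monitoring idea summarized above: fit a density model to in-distribution activations and flag low-density inputs. A Gaussian density (Mahalanobis distance) stands in for DAAIN's normalizing flow, and the activation vectors are toy data.

    # Gaussian stand-in for the paper's normalizing-flow density estimator:
    # score activations by Mahalanobis distance; high score = likely OOD.
    import numpy as np

    rng = np.random.default_rng(1)
    acts_in = rng.normal(0.0, 1.0, size=(1000, 8))  # in-distribution activations (toy)

    mu = acts_in.mean(axis=0)
    cov = np.cov(acts_in, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(8))

    def score(a):
        d = a - mu
        return float(d @ cov_inv @ d)

    print("in-dist score:", score(rng.normal(0.0, 1.0, size=8)))
    print("shifted score:", score(rng.normal(3.0, 1.0, size=8)))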
- Classification of fNIRS Data Under Uncertainty: A Bayesian Neural Network Approach
We use a Bayesian Neural Network (BNN) to carry out a binary classification on an open-access dataset.
Our model produced an overall classification accuracy of 86.44% over 30 volunteers; a hedged stand-in sketch using Monte Carlo dropout follows below.
arXiv Detail & Related papers (2021-01-18T15:43:59Z)
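
A hedged stand-in for the Bayesian Neural Network summarized above: Monte Carlo dropout, a common approximation to BNN inference (not the paper's method), yields a predictive mean and spread for a binary classifier. The feature dimension and layer sizes are illustrative.

    # MC-dropout stand-in for a BNN: keep dropout active at inference time
    # and average many stochastic forward passes.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(64, 1),
    )

    x = torch.randn(1, 20)       # one fNIRS feature vector (illustrative)
    model.train()                # train mode keeps dropout stochastic
    probs = torch.stack([torch.sigmoid(model(x)) for _ in range(100)])
    print("mean prob:", probs.mean().item(), "+/-", probs.std().item())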
- MOCCA: Multi-Layer One-Class ClassificAtion for Anomaly Detection
We propose our deep learning approach to the anomaly detection problem, named Multi-Layer One-Class ClassificAtion (MOCCA).
We explicitly leverage the piece-wise nature of deep neural networks by exploiting information extracted at different depths to detect abnormal data instances.
We show that our method achieves superior performance compared to the state-of-the-art approaches available in the literature.
arXiv Detail & Related papers (2020-12-09T08:32:56Z)
- Category-Learning with Context-Augmented Autoencoder
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
- Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution Data
We propose two strategies for freeing a neural network from tuning with OoD data, while improving its OoD detection performance.
Specifically, we propose decomposed confidence scoring as well as a modified input pre-processing method; a minimal sketch of the decomposed head follows below.
Our further analysis on a larger scale image dataset shows that the two types of distribution shifts, specifically semantic shift and non-semantic shift, present a significant difference.
arXiv Detail & Related papers (2020-02-26T04:18:25Z)
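
A minimal sketch of the decomposed confidence idea from the entry above: Generalized ODIN models the logits as f_i(x) = h_i(x) / g(x) with a positive denominator g. The layer sizes are illustrative assumptions, and the input pre-processing part is omitted.

    # Decomposed confidence head: logits f = h / g with g in (0, 1).
    import torch
    import torch.nn as nn

    class DecomposedHead(nn.Module):
        def __init__(self, feat_dim=64, num_classes=10):
            super().__init__()
            self.h = nn.Linear(feat_dim, num_classes)  # class-dependent numerator
            self.g = nn.Sequential(nn.Linear(feat_dim, 1), nn.Sigmoid())  # denominator

        def forward(self, feat):
            return self.h(feat) / self.g(feat)         # broadcast over classes

    head = DecomposedHead()
    print(head(torch.randn(4, 64)).shape)  # torch.Size([4, 10])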
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences of its use.