Squeeze flow of micro-droplets: convolutional neural network with
trainable and tunable refinement
- URL: http://arxiv.org/abs/2211.09061v1
- Date: Wed, 16 Nov 2022 17:22:46 GMT
- Authors: Aryan Mehboudi, Shrawan Singhal, S.V. Sreenivasan
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a platform based on neural networks to solve the image-to-image
translation problem in the context of squeeze flow of micro-droplets. In the
first part of this paper, we present the governing partial differential
equations to lay out the underlying physics of the problem. We also discuss our
developed Python package, sqflow, which can potentially serve as a free,
flexible, and scalable source of standardized benchmarks in the fields of
machine learning and computer vision. In the second part of this paper, we
introduce a
residual convolutional neural network to solve the corresponding inverse
problem: to translate a high-resolution (HR) imprint image with a specific
liquid film thickness to a low-resolution (LR) droplet pattern image capable of
producing the given imprint image for an appropriate spread time of droplets.
We propose a neural network architecture that learns to systematically tune the
refinement level of its residual convolutional blocks using function
approximators trained to map a given input parameter (film thickness) to an
appropriate refinement-level indicator. We use multiple stacks of convolutional
layers whose outputs are translated according to the refinement-level
indicators provided by the directly connected function approximators. Together
with a non-linear activation function, this translation mechanism enables the
HR imprint image to be refined sequentially in multiple steps until the target
LR droplet pattern image is revealed. The
proposed platform can be potentially applied to data compression and data
encryption. The developed package and datasets are publicly available on GitHub
at https://github.com/sqflow/sqflow.
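As a rough illustration of the translation mechanism described in the abstract, the sketch below applies a thickness-conditioned shift followed by a ReLU to a 1D feature row. The linear `indicator` stands in for the trained function approximator, and all names, constants, and the 1D simplification are hypothetical; the actual model operates on images with residual convolutional blocks.

```python
def relu(xs):
    # non-linear activation applied after each translation step
    return [max(0.0, x) for x in xs]

def indicator(thickness, w=2.0, b=0.5):
    # hypothetical function approximator: maps film thickness to a
    # refinement-level indicator (a scalar shift); the paper trains a
    # small network for this mapping
    return w * thickness + b

def refine_step(features, shift):
    # translate the block output by the indicator, then apply the
    # non-linearity; small activations are zeroed, coarsening the features
    return relu([f - shift for f in features])

def refine(feature_row, thickness, n_steps=3):
    # sequential refinement: repeat the translate-and-activate step,
    # mimicking the stacked residual blocks
    x = list(feature_row)
    for _ in range(n_steps):
        x = refine_step(x, indicator(thickness))
    return x
```

With `thickness=0.1` the indicator is 0.7, so each step subtracts 0.7 and clips at zero: weak features vanish after a few steps while strong ones persist, which is the intuition behind revealing the LR droplet pattern from the HR imprint.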
Related papers
- Compression with Bayesian Implicit Neural Representations [16.593537431810237]
We propose overfitting variational neural networks to the data and compressing an approximate posterior weight sample using relative entropy coding instead of quantizing and entropy coding it.
Experiments show that our method achieves strong performance on image and audio compression while retaining simplicity.
arXiv Detail & Related papers (2023-05-30T16:29:52Z)
- Lossy Image Compression with Conditional Diffusion Models [25.158390422252097]
This paper outlines an end-to-end optimized lossy image compression framework using diffusion generative models.
In contrast to VAE-based neural compression, where the (mean) decoder is a deterministic neural network, our decoder is a conditional diffusion model.
Our approach achieves stronger reported FID scores than the GAN-based model, while also yielding competitive performance with VAE-based models on several distortion metrics.
arXiv Detail & Related papers (2022-09-14T21:53:27Z)
- Neural Implicit Dictionary via Mixture-of-Expert Training [111.08941206369508]
We present a generic INR framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID).
Our NID assembles a group of coordinate-based subnetworks which are tuned to span the desired function space.
Our experiments show that NID can speed up reconstruction of 2D images or 3D scenes by two orders of magnitude while using up to 98% less input data.
arXiv Detail & Related papers (2022-07-08T05:07:19Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
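Learned shrinkage functions in this line of work generalize the classical soft-thresholding operator (the proximal map of the l1 norm). A minimal sketch of the classical version, for orientation only; the paper trains discriminative, per-filter variants rather than this fixed form:

```python
def soft_shrink(x, tau):
    # soft-thresholding: shrink x toward zero by tau, zeroing values
    # whose magnitude is below the threshold
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0
```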
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data via Differentiable Cross-Approximation [53.95297550117153]
We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at only a fraction of their entries.
The proposed approach is particularly useful for large-scale multidimensional grid data, and for tasks that require context over a large receptive field.
arXiv Detail & Related papers (2021-05-29T08:39:57Z)
- DeFlow: Learning Complex Image Degradations from Unpaired Data with Conditional Flows [145.83812019515818]
We propose DeFlow, a method for learning image degradations from unpaired data.
We model the degradation process in the latent space of a shared flow-decoder network.
We validate our DeFlow formulation on the task of joint image restoration and super-resolution.
arXiv Detail & Related papers (2021-01-14T18:58:01Z)
- Principled network extraction from images [0.0]
We present a principled model to extract network topologies from images that is scalable and efficient.
We test our model on real images of the retinal vascular system, slime mold and river networks.
arXiv Detail & Related papers (2020-12-23T15:56:09Z)
- A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
arXiv Detail & Related papers (2020-06-26T08:34:54Z)
- Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
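One simple way to structurally enforce sparsity on hidden neurons is to keep only the k largest-magnitude activations. The toy function below is illustrative only and does not reproduce the paper's exact mechanism:

```python
def sparsify(activations, k):
    # zero all but the k largest-magnitude activations
    if k >= len(activations):
        return list(activations)
    # magnitude of the k-th largest activation is the keep threshold
    thresh = sorted((abs(a) for a in activations), reverse=True)[k - 1]
    return [a if abs(a) >= thresh else 0.0 for a in activations]
```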
arXiv Detail & Related papers (2020-06-08T05:15:17Z)
- Radon cumulative distribution transform subspace modeling for image classification [18.709734704950804]
We present a new supervised image classification method applicable to a broad class of image deformation models.
The method makes use of the previously described Radon Cumulative Distribution Transform (R-CDT) for image data.
In addition to test accuracy, we show improvements in computational efficiency.
arXiv Detail & Related papers (2020-04-07T19:47:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.