MeshfreeFlowNet: A Physics-Constrained Deep Continuous Space-Time
Super-Resolution Framework
- URL: http://arxiv.org/abs/2005.01463v2
- Date: Fri, 21 Aug 2020 04:08:23 GMT
- Title: MeshfreeFlowNet: A Physics-Constrained Deep Continuous Space-Time
Super-Resolution Framework
- Authors: Chiyu Max Jiang, Soheil Esmaeilzadeh, Kamyar Azizzadenesheli, Karthik
Kashinath, Mustafa Mustafa, Hamdi A. Tchelepi, Philip Marcus, Prabhat, Anima
Anandkumar
- Abstract summary: MeshfreeFlowNet is a framework to generate continuous (grid-free) spatio-temporal solutions from the low-resolution inputs.
MeshfreeFlowNet allows for (i) the output to be sampled at all resolutions, and (ii) training on fixed-size inputs on arbitrarily sized spatio-temporal domains.
We propose a large scale implementation of MeshfreeFlowNet and show that it efficiently scales across large clusters.
- Score: 58.49761896587656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose MeshfreeFlowNet, a novel deep learning-based super-resolution
framework to generate continuous (grid-free) spatio-temporal solutions from the
low-resolution inputs. While being computationally efficient, MeshfreeFlowNet
accurately recovers the fine-scale quantities of interest. MeshfreeFlowNet
allows for: (i) the output to be sampled at all spatio-temporal resolutions,
(ii) a set of Partial Differential Equation (PDE) constraints to be imposed,
and (iii) training on fixed-size inputs on arbitrarily sized spatio-temporal
domains owing to its fully convolutional encoder. We empirically study the
performance of MeshfreeFlowNet on the task of super-resolution of turbulent
flows in the Rayleigh-Benard convection problem. Across a diverse set of
evaluation metrics, we show that MeshfreeFlowNet significantly outperforms
existing baselines. Furthermore, we provide a large scale implementation of
MeshfreeFlowNet and show that it efficiently scales across large clusters,
achieving 96.80% scaling efficiency on up to 128 GPUs and a training time of
less than 4 minutes.
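As a concrete illustration of the pattern the abstract describes, the sketch below shows, in PyTorch and purely as an assumed realization rather than the authors' released code, a fully convolutional encoder that maps a low-resolution space-time block to a latent grid, a decoder that can be queried at arbitrary continuous space-time coordinates, and a PDE-residual penalty obtained by differentiating the decoder output with respect to those coordinates. Module names, channel counts, and the specific PDE term (a 2D divergence penalty) are illustrative assumptions.

```python
# Minimal, hypothetical sketch of the pattern described in the abstract
# (illustrative only; names, shapes, and the chosen PDE term are assumptions,
# not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentGridEncoder(nn.Module):
    """Fully convolutional encoder: low-res block (B, C, T, H, W) -> latent grid."""

    def __init__(self, in_ch=4, latent_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(64, latent_ch, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)  # (B, latent_ch, T, H, W)


class ContinuousDecoder(nn.Module):
    """MLP queried at continuous (t, x, y) coordinates in [-1, 1]^3."""

    def __init__(self, latent_ch=32, out_ch=4, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_ch + 3, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, out_ch),
        )

    def forward(self, latent, coords):
        # coords: (B, N, 3) ordered (t, x, y); grid_sample expects (x, y, t).
        grid = coords[..., [1, 2, 0]].reshape(coords.shape[0], -1, 1, 1, 3)
        feats = F.grid_sample(latent, grid, align_corners=True)  # (B, C, N, 1, 1)
        feats = feats.squeeze(-1).squeeze(-1).transpose(1, 2)    # (B, N, C)
        return self.mlp(torch.cat([feats, coords], dim=-1))      # (B, N, out_ch)


def pde_residual(decoder, latent, coords):
    """Illustrative PDE constraint: penalize the 2D divergence du/dx + dv/dy
    of the predicted velocity field, using autograd w.r.t. the query coords."""
    coords = coords.detach().requires_grad_(True)
    out = decoder(latent, coords)      # assume the last two channels are (u, v)
    u, v = out[..., -2], out[..., -1]
    du = torch.autograd.grad(u.sum(), coords, create_graph=True)[0]
    dv = torch.autograd.grad(v.sum(), coords, create_graph=True)[0]
    div = du[..., 1] + dv[..., 2]      # d/dx and d/dy components of the coords
    return (div ** 2).mean()


# Training objective (sketch): supervised regression at sampled high-res points
# plus a weighted PDE residual, e.g.
#   latent = encoder(lowres)
#   loss = F.mse_loss(decoder(latent, coords), targets) \
#        + lambda_pde * pde_residual(decoder, latent, coords)
```

The design point mirrored here is that the encoder is fully convolutional (so arbitrarily sized domains can be tiled into fixed-size training blocks), while the coordinate-conditioned decoder makes the output resolution-free and differentiable in space-time, which is what allows PDE constraints to be imposed as soft penalties.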
Related papers
- StreamFlow: Streamlined Multi-Frame Optical Flow Estimation for Video
Sequences [31.210626775505407]
Occlusions between consecutive frames have long posed a significant challenge in optical flow estimation.
We present a Streamlined In-batch Multi-frame (SIM) pipeline tailored to video input, attaining a similar level of time efficiency to two-frame networks.
StreamFlow excels on the challenging KITTI and Sintel datasets, with particular improvement in occluded areas.
arXiv Detail & Related papers (2023-11-28T07:53:51Z) - Let the Flows Tell: Solving Graph Combinatorial Optimization Problems
with GFlowNets [86.43523688236077]
Combinatorial optimization (CO) problems are often NP-hard and out of reach for exact algorithms.
GFlowNets have emerged as powerful machinery to efficiently sample from composite unnormalized densities sequentially.
In this paper, we design Markov decision processes (MDPs) for different problems and propose to train conditional GFlowNets to sample from the solution space.
arXiv Detail & Related papers (2023-05-26T15:13:09Z) - CFlowNets: Continuous Control with Generative Flow Networks [23.093316128475564]
Generative flow networks (GFlowNets) can be used as an alternative to reinforcement learning for exploratory control tasks.
We propose generative continuous flow networks (CFlowNets) that can be applied to continuous control tasks.
arXiv Detail & Related papers (2023-03-04T14:37:47Z) - PointFlowHop: Green and Interpretable Scene Flow Estimation from
Consecutive Point Clouds [49.7285297470392]
An efficient 3D scene flow estimation method called PointFlowHop is proposed in this work.
PointFlowHop takes two consecutive point clouds and determines the 3D flow vectors for every point in the first point cloud.
It decomposes the scene flow estimation task into a set of subtasks, including ego-motion compensation, object association and object-wise motion estimation.
arXiv Detail & Related papers (2023-02-27T23:06:01Z) - Normalizing flow neural networks by JKO scheme [22.320632565424745]
We develop a neural ODE flow network called JKO-iFlow, inspired by the Jordan-Kinderlehrer-Otto scheme.
The proposed method stacks residual blocks one after another, allowing efficient block-wise training.
Experiments with synthetic and real data show that the proposed JKO-iFlow network achieves competitive performance.
arXiv Detail & Related papers (2022-12-29T18:55:00Z) - Super-resolution GANs of randomly-seeded fields [68.8204255655161]
We propose a novel super-resolution generative adversarial network (GAN) framework to estimate field quantities from random sparse sensors.
The algorithm exploits random sampling to provide incomplete views of the high-resolution underlying distributions.
The proposed technique is tested on synthetic databases of fluid flow simulations, measurements of ocean surface temperature distributions, and particle image velocimetry data.
arXiv Detail & Related papers (2022-02-23T18:57:53Z) - GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose a GMFlow framework for learning optical flow estimation.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation.
Our new framework outperforms 32-iteration RAFT on the challenging Sintel benchmark.
arXiv Detail & Related papers (2021-11-26T18:59:56Z) - Self Normalizing Flows [65.73510214694987]
We propose a flexible framework for training normalizing flows by replacing expensive terms in the gradient by learned approximate inverses at each layer.
This reduces the computational complexity of each layer's exact update from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$ (see the sketch after this list).
We show experimentally that such models are remarkably stable and optimize to similar data likelihood values as their exact gradient counterparts.
arXiv Detail & Related papers (2020-11-14T09:51:51Z) - FarSee-Net: Real-Time Semantic Segmentation by Efficient Multi-scale
Context Aggregation and Feature Space Super-resolution [14.226301825772174]
We introduce a novel and efficient module called Cascaded Factorized Atrous Spatial Pyramid Pooling (CF-ASPP).
It is a lightweight cascaded structure for Convolutional Neural Networks (CNNs) to efficiently leverage context information.
We achieve 68.4% mIoU at 84 fps on the Cityscapes test set with a single Nvidia Titan X (Maxwell) GPU card.
arXiv Detail & Related papers (2020-03-09T03:53:57Z)
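Regarding the Self Normalizing Flows entry above, here is a brief, hedged sketch of where the $\mathcal{O}(D^3)$-to-$\mathcal{O}(D^2)$ reduction comes from; the exact objective is an assumption based on the standard change-of-variables setup, not quoted from that paper. For a linear flow layer $z = W x$, the exact gradient of the log-volume term is the inverse transpose of the weights; learning an auxiliary matrix $R \approx W^{-1}$ with a reconstruction penalty and reusing $R^\top$ in place of that inverse avoids the cubic-cost inversion.

```latex
% Assumed form of the approximation: Jacobi's formula gives the exact gradient
% of the volume term, for which the learned inverse R stands in.
\[
  \frac{\partial}{\partial W}\,\log\bigl|\det W\bigr| \;=\; W^{-\top}
  \;\approx\; R^{\top},
  \qquad
  \mathcal{L}_{\mathrm{recon}} \;=\; \bigl\lVert x - R\,W\,x \bigr\rVert_2^2 .
\]
% Computing W^{-T} exactly requires a matrix inversion, roughly O(D^3) per
% layer and update; applying the learned R^T instead involves only O(D^2)
% matrix operations.
```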