StressGAN: A Generative Deep Learning Model for 2D Stress Distribution
Prediction
- URL: http://arxiv.org/abs/2006.11376v1
- Date: Sat, 30 May 2020 00:28:21 GMT
- Title: StressGAN: A Generative Deep Learning Model for 2D Stress Distribution
Prediction
- Authors: Haoliang Jiang, Zhenguo Nie, Roselyn Yeo, Amir Barati Farimani, Levent
Burak Kara
- Abstract summary: We propose a conditional generative adversarial network (cGAN) model for predicting 2D von Mises stress distributions in solid structures.
We demonstrate that our model can predict more accurate high-resolution stress distributions than a baseline convolutional neural network model.
- Score: 0.27998963147546135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Using deep learning to analyze mechanical stress distributions has been
gaining interest with the demand for fast stress analysis methods. Deep
learning approaches have achieved excellent outcomes when utilized to speed up
stress computation and learn the physics without prior knowledge of underlying
equations. However, most studies restrict the variation of geometry or boundary
conditions, making these methods difficult to generalize to unseen
configurations. We propose a conditional generative adversarial network (cGAN)
model for predicting 2D von Mises stress distributions in solid structures. The
cGAN learns to generate stress distributions conditioned on geometry, load, and
and boundary conditions through a two-player minimax game between two neural
networks with no prior knowledge. By evaluating the generative network on two
stress distribution datasets under multiple metrics, we demonstrate that our
model can predict more accurate high-resolution stress distributions than a
baseline convolutional neural network model across varied and complex
combinations of geometry, load, and boundary conditions.
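As a rough sketch of two ingredients named in the abstract: the von Mises stress (a standard formula) and a cGAN's conditional input plus minimax losses. The channel layout, helper names, and the non-saturating generator loss are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def von_mises_2d(sx, sy, txy):
    """Von Mises stress for 2D plane stress (standard formula)."""
    return np.sqrt(sx**2 - sx * sy + sy**2 + 3.0 * txy**2)

def cgan_condition(geometry, load, bc):
    """Stack per-pixel geometry, load, and boundary-condition maps into
    the multi-channel field a conditional generator would consume
    (the channel layout here is an assumption)."""
    return np.stack([geometry, load, bc], axis=0)

def minimax_losses(d_real, d_fake, eps=1e-12):
    """Binary cross-entropy form of the two-player minimax game: the
    discriminator scores real stress fields high and generated ones low;
    the generator term uses the common non-saturating variant."""
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss
```

A confident discriminator (probabilities near 1 on real fields and near 0 on generated ones) drives `d_loss` toward 0 while `g_loss` grows, which is the signal that forces the generator to produce more realistic stress distributions.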
Related papers
- Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST), a recently proposed and highly effective technique for distributed training.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- Physics Informed Neural Network for Dynamic Stress Prediction [10.588266927411434]
A Physics Informed Neural Network (PINN) model is proposed to predict the entire sequence of stress distribution based on Finite Element simulations.
Using automatic differentiation, we embed a PDE into a deep neural network's loss function to incorporate information from measurements and PDEs.
The PINN-Stress model can predict the sequence of stress distribution in almost real-time and can generalize better than the model without PINN.
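The key mechanic described above, differentiating the network output inside the loss, can be shown with a minimal forward-mode autodiff on a toy ODE u'(x) = cos(x). Real PINNs use reverse-mode frameworks and a trained network; `u_hat` below is a made-up one-neuron stand-in:

```python
import math

class Dual:
    """Minimal forward-mode autodiff value: (val, der)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)
    __radd__, __rmul__ = __add__, __mul__

def tanh_d(x):
    t = math.tanh(x.val)
    return Dual(t, (1.0 - t * t) * x.der)

def u_hat(x, w1=0.7, w2=1.3):
    """Toy one-neuron 'network' u_hat(x) = w2 * tanh(w1 * x); the weights
    are made up -- a real PINN would train them."""
    return w2 * tanh_d(w1 * x)

def pde_residual(x, f=math.cos):
    """Residual of the toy PDE u'(x) = f(x), with u' obtained by
    automatic differentiation through u_hat."""
    u = u_hat(Dual(x, 1.0))  # seed dx/dx = 1
    return u.der - f(x)

def physics_loss(xs):
    """Mean squared residual over collocation points -- the extra term a
    PINN adds to its data loss."""
    return sum(pde_residual(x) ** 2 for x in xs) / len(xs)
```

Minimizing `physics_loss` over the weights is what embeds the PDE in training; the dual numbers play the role the framework's autodiff plays in practice.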
arXiv Detail & Related papers (2022-11-28T16:03:21Z)
- Learning Physical Dynamics with Subequivariant Graph Neural Networks [99.41677381754678]
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics.
Physical laws abide by symmetry, which is a vital inductive bias accounting for model generalization.
Our model achieves, on average, over 3% improvement in contact prediction accuracy across 8 scenarios on Physion and 2X lower rollout MSE on RigidFall.
arXiv Detail & Related papers (2022-10-13T10:00:30Z)
- Physics-informed machine learning with differentiable programming for heterogeneous underground reservoir pressure management [64.17887333976593]
Avoiding over-pressurization in subsurface reservoirs is critical for applications like CO2 sequestration and wastewater injection.
Managing pressure by controlling injection and extraction is challenging because of complex heterogeneity in the subsurface.
We use differentiable programming with a full-physics model and machine learning to determine the fluid extraction rates that prevent over-pressurization.
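As a toy stand-in for this idea: a one-line "reservoir" whose pressure is differentiable in the extraction rate, so gradient descent can find a rate that avoids over-pressurization. The model form and all coefficients are invented for illustration; the paper couples a full-physics simulator instead:

```python
def pressure(inj, ext, p0=10.0, a=0.8, b=0.6):
    """Toy reservoir model: pressure rises with injection and falls with
    extraction (a stand-in for a full-physics simulator)."""
    return p0 + a * inj - b * ext

def tune_extraction(inj, p_max=12.0, ext=0.0, lr=0.5, steps=200, b=0.6):
    """Gradient descent on the squared over-pressure penalty
    max(p - p_max, 0)^2; the gradient is written by hand here, which is
    exactly the step differentiable programming automates."""
    for _ in range(steps):
        over = max(pressure(inj, ext) - p_max, 0.0)
        grad = 2.0 * over * (-b)  # d(penalty) / d(ext)
        ext -= lr * grad
    return ext
```

With `inj=5` the unmanaged pressure is 14; the loop settles near the smallest extraction rate (about 3.33) that brings the pressure back down to `p_max`.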
arXiv Detail & Related papers (2022-06-21T20:38:13Z)
- Multi-scale Feature Learning Dynamics: Insights for Double Descent [71.91871020059857]
We study the phenomenon of "double descent" of the generalization error.
We find that double descent can be attributed to distinct features being learned at different scales.
arXiv Detail & Related papers (2021-12-06T18:17:08Z)
- LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time [57.52251547365967]
We propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models.
We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity.
Our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
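The core construction can be sketched in a few lines: two endpoint weight vectors span a line, and any point on it can be pruned to a chosen sparsity at inference time. The joint training loop that makes every point accurate is omitted, and the function names are invented:

```python
import numpy as np

def subspace_point(w0, w1, alpha):
    """A point on the linear subspace (line) between endpoint weights;
    LCS trains w0 and w1 jointly so every alpha yields a usable model."""
    return (1.0 - alpha) * w0 + alpha * w1

def prune_unstructured(w, sparsity):
    """Zero out the smallest-magnitude fraction of weights
    (unstructured sparsity; ties at the threshold are also zeroed)."""
    k = int(round(sparsity * w.size))
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w).ravel())[k - 1]
    out = w.copy()
    out[np.abs(out) <= thresh] = 0.0
    return out
```

At inference time one picks an `alpha` (and a sparsity) to trade accuracy against efficiency without retraining, which is the fine-grained spectrum the abstract refers to.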
arXiv Detail & Related papers (2021-10-08T17:03:34Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Tensor-Train Networks for Learning Predictive Modeling of Multidimensional Data [0.0]
A promising strategy is based on tensor networks, which have been very successful in physical and chemical applications.
We show that the weights of a multidimensional regression model can be learned by means of tensor networks, yielding a powerful and compact representation.
An algorithm based on alternating least squares is proposed for approximating the weights in TT-format with reduced computational cost.
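A minimal sketch of the TT-format representation itself (the ALS fitting loop from the abstract is omitted, and the cores below are random, purely for shape-checking): each core `G_k` has shape `(r_{k-1}, n_k, r_k)` with boundary ranks 1, and predictions contract the cores with the input features without ever forming the full weight tensor.

```python
import numpy as np

def tt_full(cores):
    """Contract TT cores G_k of shape (r_{k-1}, n_k, r_k), with
    r_0 = r_d = 1, back into the full weight tensor."""
    out = cores[0]
    for g in cores[1:]:
        out = np.einsum('...a,aib->...ib', out, g)
    return out.reshape(out.shape[1:-1])

def tt_predict(cores, feats):
    """Regression output <W, f_1 x f_2 x ...> computed core by core,
    never materializing the full tensor W."""
    v = np.ones(1)
    for g, f in zip(cores, feats):
        v = np.einsum('a,aib,i->b', v, g, f)
    return v[0]
```

For a 4x4x4 weight tensor at TT-rank 2, the cores store 1*4*2 + 2*4*2 + 2*4*1 = 32 numbers instead of 64, and the gap widens rapidly with order, which is the compression the abstract points to.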
arXiv Detail & Related papers (2021-01-22T16:14:38Z)
- Difference-Based Deep Learning Framework for Stress Predictions in Heterogeneous Media [0.0]
We utilize Deep Learning for developing a set of novel Difference-based Neural Network (DiNN) frameworks to determine stress distribution in heterogeneous media.
We focus on highlighting the differences in stress distribution between different input samples for improving the accuracy of prediction in heterogeneous media.
Results show that the DiNN structures significantly enhance the accuracy of stress prediction compared to existing structures.
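The difference-based framing reduces to learning a map from input differences to stress differences relative to a reference sample. The linear least-squares model below is a deliberately simplified stand-in for the DiNN networks; the function names are invented:

```python
import numpy as np

def fit_difference_model(X, Y, x_ref, y_ref):
    """Fit a linear map from input differences (x - x_ref) to stress
    differences (y - y_ref). DiNN uses neural networks here; this
    least-squares version only illustrates the difference framing."""
    A, *_ = np.linalg.lstsq(X - x_ref, Y - y_ref, rcond=None)
    return A

def predict_stress(A, x, x_ref, y_ref):
    """Reference stress plus the predicted difference."""
    return y_ref + (x - x_ref) @ A
```

Working in differences centers the data on the reference sample, so the model spends its capacity on what varies between samples rather than on the shared background field.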
arXiv Detail & Related papers (2020-07-01T00:18:14Z)
- Deep Learning of Dynamic Subsurface Flow via Theory-guided Generative Adversarial Network [0.0]
Theory-guided generative adversarial network (TgGAN) is proposed to solve dynamic partial differential equations (PDEs).
TgGAN is proposed for dynamic subsurface flow with heterogeneous model parameters.
Numerical results demonstrate that the TgGAN model is robust and reliable for deep learning of dynamic PDEs.
arXiv Detail & Related papers (2020-06-02T02:53:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information on this site and is not responsible for any consequences.