An unsupervised learning approach to solving heat equations on chip based on Auto Encoder and Image Gradient
- URL: http://arxiv.org/abs/2007.09684v1
- Date: Sun, 19 Jul 2020 15:01:01 GMT
- Title: An unsupervised learning approach to solving heat equations on chip based on Auto Encoder and Image Gradient
- Authors: Haiyang He, Jay Pathak
- Abstract summary: Solving heat transfer equations on chip is becoming critical for upcoming 5G and AI chip-package-systems.
Data-driven methods are data hungry; to address this, Physics Informed Neural Networks (PINNs) have been proposed.
This paper investigates an unsupervised learning approach for solving heat transfer equations on chip without using data.
- Score: 0.43512163406551996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solving heat transfer equations on chip is becoming critical for upcoming 5G and AI chip-package-systems. However, data-driven supervised machine learning models require batches of simulations to be performed and are data hungry; to address this, Physics Informed Neural Networks (PINNs) have been proposed. Vanilla PINN models solve one fixed heat equation at a time, so they have to be retrained for heat equations with different source terms. Additionally, using a PINN requires resolving multi-objective optimization issues, since it must simultaneously minimize the PDE residual, satisfy boundary conditions, and fit any observed data. Therefore, this paper investigates an unsupervised learning approach that solves heat transfer equations on chip without using solution data and generalizes the trained network to predict solutions of heat equations with unseen source terms. Specifically, a hybrid framework of an Auto Encoder (AE) and an Image Gradient (IG) based network is designed. The AE encodes the different source terms of the heat equations. The IG-based network implements a second-order central difference scheme on structured grids and minimizes the PDE residual. The effectiveness of the designed network is evaluated by solving heat equations for various use cases. It is shown that, with a limited number of source terms used to train the AE network, the framework can not only solve the given heat transfer problems with a single training process, but also make reasonable predictions for unseen cases (heat equations with new source terms) without retraining.
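The abstract gives enough detail to sketch the image-gradient half of the objective: a second-order central-difference Laplacian on a structured grid, with the mean-squared residual of the steady heat equation as the quantity to be minimized. The snippet below is a minimal NumPy sketch under those assumptions; the function name, the uniform grid spacing `dx`, and the constant conductivity `k` are illustrative choices rather than details from the paper, and the full framework additionally conditions the solution network on an Auto Encoder embedding of the source term `q`.

```python
import numpy as np

def pde_residual_loss(T, q, dx, k=1.0):
    """Mean-squared residual of the steady heat equation k * laplace(T) + q = 0
    on a uniform structured grid, using second-order central differences."""
    # Second-order central-difference Laplacian on the interior points.
    lap = (
        T[1:-1, 2:] + T[1:-1, :-2] +
        T[2:, 1:-1] + T[:-2, 1:-1] -
        4.0 * T[1:-1, 1:-1]
    ) / dx**2
    residual = k * lap + q[1:-1, 1:-1]
    return np.mean(residual**2)

# Toy check: for T = x^2 + y^2, laplace(T) = 4, so q = -4k gives ~zero residual.
n = 64
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
T = X**2 + Y**2
q = -4.0 * np.ones_like(T)
print(pde_residual_loss(T, q, dx))  # ~0 up to floating-point error
```

In the paper's setting, `T` would be the network's predicted temperature field, and this residual (together with boundary-condition terms) would serve as the unsupervised training loss, so no solution data is needed.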
Related papers
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This paper proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- Mixed formulation of physics-informed neural networks for thermo-mechanically coupled systems and heterogeneous domains [0.0]
Physics-informed neural networks (PINNs) are a new tool for solving boundary value problems.
Recent investigations have shown that when designing loss functions for many engineering problems, using first-order derivatives and combining equations from both strong and weak forms can lead to much better accuracy.
In this work, we propose applying the mixed formulation to solve multi-physical problems, specifically a stationary thermo-mechanically coupled system of equations.
arXiv Detail & Related papers (2023-02-09T21:56:59Z)
- Deep Physics Corrector: A physics enhanced deep learning architecture for solving stochastic differential equations [0.0]
We propose a novel gray-box modeling algorithm for physical systems governed by stochastic differential equations (SDEs).
The proposed approach, referred to as the Deep Physics Corrector (DPC), blends approximate physics, represented in terms of an SDE, with a deep neural network (DNN).
We illustrate the performance of the proposed DPC on four benchmark examples from the literature.
arXiv Detail & Related papers (2022-09-20T14:30:07Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced-order approximation to the PDE; a schematic sketch of this POD-plus-branch combination appears after this list.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Learning to Solve PDE-constrained Inverse Problems with Graph Networks [51.89325993156204]
In many application domains across science and engineering, we are interested in solving inverse problems with constraints defined by a partial differential equation (PDE).
Here we explore GNNs to solve such PDE-constrained inverse problems.
We demonstrate computational speedups of up to 90x using GNNs compared to principled solvers.
arXiv Detail & Related papers (2022-06-01T18:48:01Z)
- Heat Conduction Plate Layout Optimization using Physics-driven Convolutional Neural Networks [14.198900757461555]
The layout optimization of the heat conduction is essential during design in engineering, especially for sensible thermal products.
Data-driven approaches are used to train a surrogate model as a mapping between the prescribed external loads and various geometries.
This paper proposes a Physics-driven Convolutional Neural Networks (PD-CNN) method to infer the physical field solutions for varied loading cases.
arXiv Detail & Related papers (2022-01-21T10:43:57Z)
- An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE-inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z)
- Joint inference and input optimization in equilibrium networks [68.63726855991052]
The deep equilibrium model is a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between equilibrium modelling and input optimization.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training, and gradient-based meta-learning; a toy fixed-point sketch appears after this list.
arXiv Detail & Related papers (2021-11-25T19:59:33Z)
- Physics-informed Convolutional Neural Networks for Temperature Field Prediction of Heat Source Layout without Labeled Data [9.71214034180507]
This paper develops a physics-informed convolutional neural network (CNN) as a thermal simulation surrogate.
The network can learn a mapping from heat source layout to the steady-state temperature field without labeled data, which is equivalent to solving an entire family of partial differential equations (PDEs).
arXiv Detail & Related papers (2021-09-26T03:24:23Z)
- Physics-Informed Neural Network for Modelling the Thermochemical Curing Process of Composite-Tool Systems During Manufacture [11.252083314920108]
We present a PINN to simulate the thermochemical evolution of a composite material on a tool undergoing cure in an autoclave.
We train the PINN with a technique that automatically adapts the weights on the loss terms corresponding to the PDE, boundary, interface, and initial conditions; a generic illustration of such loss-term weighting appears after this list.
The performance of the proposed PINN is demonstrated in multiple scenarios with different material thicknesses and thermal boundary conditions.
arXiv Detail & Related papers (2020-11-27T00:56:15Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
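For the Neural Basis Functions entry above, the combination of a POD basis with a branch network that maps PDE parameters to modal coefficients can be shown schematically. This is a generic reduced-order-modelling sketch, not the architecture from that paper: the snapshot matrix is random placeholder data, and a fixed linear map stands in for the trained branch network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Snapshot matrix: columns would be PDE solutions for different parameter
# values (random placeholders here, standing in for actual solver output).
snapshots = rng.standard_normal((200, 30))      # 200 grid points, 30 snapshots

# POD basis from the thin SVD of the snapshot matrix; keep the dominant modes.
modes, _, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = modes[:, :5]

# A trained "branch" network would map PDE parameters mu to modal
# coefficients; a fixed random linear map stands in for it here.
branch_weights = rng.standard_normal((5, 3))

def branch(mu):
    return branch_weights @ mu

mu = np.array([1.0, 0.5, -0.2])                 # example PDE parameters
u_reduced = basis @ branch(mu)                  # reduced-order field approximation
print(u_reduced.shape)                          # -> (200,)
```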
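The equilibrium-network entry above can be made concrete with a toy fixed-point computation: the output is the solution of z = tanh(Wz + Ux) for a single nonlinear layer. The sketch assumes a contractive layer and uses naive iteration; real deep equilibrium models rely on root-finding solvers and implicit differentiation, and the weights here are random placeholders.

```python
import numpy as np

def deq_forward(W, U, x, iters=200):
    """Toy deep-equilibrium forward pass: iterate z <- tanh(W @ z + U @ x)
    until it (approximately) reaches the fixed point z*."""
    z = np.zeros(W.shape[0])
    for _ in range(iters):
        z = np.tanh(W @ z + U @ x)
    return z

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((8, 8))   # small weights keep the map contractive
U = rng.standard_normal((8, 4))
x = rng.standard_normal(4)
z_star = deq_forward(W, U, x)
# Verify the fixed-point condition z* = tanh(W z* + U x) approximately holds.
print(np.max(np.abs(z_star - np.tanh(W @ z_star + U @ x))))
```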
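The composite-curing PINN entry mentions automatically adapting the weights on the PDE, boundary, interface, and initial-condition loss terms, which also speaks to the multi-objective issue raised in the abstract above. The snippet below is a generic illustration of one simple balancing heuristic (inverse-magnitude weighting), not the specific adaptation scheme used in that paper; the loss values are made-up numbers.

```python
import numpy as np

def adapt_weights(loss_terms, eps=1e-8):
    """Normalized weights inversely proportional to each loss term's current
    magnitude, so that no single term dominates the weighted sum."""
    inv = 1.0 / (np.asarray(loss_terms) + eps)
    return inv / inv.sum()

# Made-up loss values for the PDE, boundary, interface, and initial-condition terms.
losses = {"pde": 3.2e-2, "boundary": 1.5e-4, "interface": 4.0e-3, "initial": 9.0e-4}
w = adapt_weights(list(losses.values()))
total = float(np.dot(w, list(losses.values())))
print(dict(zip(losses, np.round(w, 4))), round(total, 6))
```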