Real-Time FJ/MAC PDE Solvers via Tensorized, Back-Propagation-Free
Optical PINN Training
- URL: http://arxiv.org/abs/2401.00413v2
- Date: Thu, 4 Jan 2024 06:25:16 GMT
- Title: Real-Time FJ/MAC PDE Solvers via Tensorized, Back-Propagation-Free
Optical PINN Training
- Authors: Yequan Zhao, Xian Xiao, Xinling Yu, Ziyue Liu, Zhixiong Chen, Geza
Kurczveil, Raymond G. Beausoleil, Zheng Zhang
- Abstract summary: This paper develops an on-chip training framework for physics-informed neural networks (PINNs).
It aims to solve high-dimensional PDEs with fJ/MAC photonic power consumption and ultra-low latency.
This is the first real-size optical PINN training framework that can be applied to solve high-dimensional PDEs.
- Score: 5.809283001227614
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Solving partial differential equations (PDEs) numerically often requires huge
computing time, energy cost, and hardware resources in practical applications.
This has limited their applications in many scenarios (e.g., autonomous
systems, supersonic flows) that have a limited energy budget and require near
real-time response. Leveraging optical computing, this paper develops an
on-chip training framework for physics-informed neural networks (PINNs), aiming
to solve high-dimensional PDEs with fJ/MAC photonic power consumption and
ultra-low latency. Despite the ultra-high speed of optical neural networks,
training a PINN on an optical chip is hard due to (1) the large size of
photonic devices, and (2) the lack of scalable optical memory devices to store
the intermediate results of back-propagation (BP). To enable realistic optical
PINN training, this paper presents a scalable method to avoid the BP process.
We also employ a tensor-compressed approach to improve the convergence and
scalability of our optical PINN training. This training framework is designed
with tensorized optical neural networks (TONN) for scalable inference
acceleration and MZI phase-domain tuning for in-situ optimization. Our
simulation results of a 20-dim HJB PDE show that our photonic accelerator can
reduce the number of MZIs by a factor of $1.17\times 10^3$, with only $1.36$ J
and $1.15$ s to solve this equation. This is the first real-size optical PINN
training framework that can be applied to solve high-dimensional PDEs.
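The back-propagation-free idea can be illustrated with a zeroth-order update that needs only forward evaluations, so no intermediate activations have to be stored in optical memory. The simultaneous-perturbation (SPSA-style) estimator and the toy quadratic loss below are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Toy stand-in for a PINN loss; a real PINN would evaluate the
    # PDE residual of a network parameterized by theta at sample points.
    return float(np.sum((theta - 1.0) ** 2))

def spsa_step(theta, lr=0.1, eps=1e-3):
    # Two forward evaluations estimate a descent direction, so no
    # intermediate results are stored for back-propagation.
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    ghat = (loss(theta + eps * delta) - loss(theta - eps * delta)) / (2 * eps)
    return theta - lr * ghat * delta

theta = np.zeros(4)
for _ in range(200):
    theta = spsa_step(theta)
# theta converges toward the minimizer at 1.0 using forward passes only
```

On photonic hardware, each forward evaluation maps to an ultra-fast optical inference, which is what makes a forward-only estimator attractive despite its higher variance than exact gradients.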
Related papers
- Speed Limits for Deep Learning [67.69149326107103]
Recent advances in thermodynamics allow bounding the speed at which one can go from the initial weight distribution to the final distribution of a fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, given some plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z)
- Physics-aware Roughness Optimization for Diffractive Optical Neural
Networks [15.397285424104469]
Diffractive optical neural networks (DONNs) have shown promising advantages over conventional deep neural networks.
We propose a physics-aware diffractive optical neural network training framework to reduce the performance difference between numerical modeling and practical deployment.
arXiv Detail & Related papers (2023-04-04T03:19:36Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
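The staggered decomposition can be sketched as slicing a fine solution grid into interleaved coarse subgrids, each handled by a cheaper subtask; the `stagger` helper and its factors below are hypothetical, not NeuralStagger's actual implementation:

```python
import numpy as np

def stagger(field, st=2, sx=2):
    # Split a (time, space) solution field into st*sx coarser,
    # staggered subfields; each subtask trains on one subfield.
    return [field[i::st, j::sx] for i in range(st) for j in range(sx)]

field = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 solution grid
subfields = stagger(field)
# Four 2x2 subgrids that together cover every point of the original grid.
```

Because the subfields partition the grid, predictions on them can be re-interleaved to recover a full-resolution solution.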
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- TT-PINN: A Tensor-Compressed Neural PDE Solver for Edge Computing [7.429526302331948]
Physics-informed neural networks (PINNs) have been increasingly employed due to their capability of modeling complex physics systems.
This paper proposes an end-to-end compressed PINN based on Tensor-Train decomposition.
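Tensor-Train compression can be sketched with the standard TT-SVD algorithm, which replaces a d-way weight tensor with d small three-way cores; the function names below are my own, and the truncation rank controls the compression:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    # Sequential truncated SVDs turn a d-way tensor into d three-way
    # cores; parameter count drops when max_rank is small.
    cores, r, dims = [], 1, tensor.shape
    mat = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        rk = min(max_rank, s.size)
        cores.append(u[:, :rk].reshape(r, dims[k], rk))
        mat = (s[:rk, None] * vt[:rk]).reshape(rk * dims[k + 1], -1)
        r = rk
    cores.append(mat.reshape(r, dims[-1], 1))
    return cores

def tt_to_full(cores):
    # Contract the cores back into the full tensor (for checking).
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

t = np.random.default_rng(0).random((2, 3, 4, 5))
cores = tt_svd(t, max_rank=24)  # ranks large enough for exact recovery here
```

With a small `max_rank`, the cores hold far fewer parameters than the dense tensor, which is what makes TT-compressed layers attractive on edge and photonic hardware.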
arXiv Detail & Related papers (2022-07-04T23:56:27Z)
- Auto-PINN: Understanding and Optimizing Physics-Informed Neural
Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z)
- Single-Shot Optical Neural Network [55.41644538483948]
'Weight-stationary' analog optical and electronic hardware has been proposed to reduce the compute resources required by deep neural networks.
We present a scalable, single-shot-per-layer weight-stationary optical processor.
arXiv Detail & Related papers (2022-05-18T17:49:49Z)
- All-optical graph representation learning using integrated diffractive
photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed diffractive graph neural network (DGNN)
We demonstrate the use of DGNN extracted features for node and graph-level classification tasks with benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z)
- Silicon photonic subspace neural chip for hardware-efficient deep
learning [11.374005508708995]
The optical neural network (ONN) is a promising candidate for next-generation neurocomputing.
We devise a hardware-efficient photonic subspace neural network architecture.
We experimentally demonstrate our PSNN on a butterfly-style programmable silicon photonic integrated circuit.
arXiv Detail & Related papers (2021-11-11T06:34:05Z)
- L2ight: Enabling On-Chip Learning for Optical Neural Networks via
Efficient in-situ Subspace Optimization [10.005026783940682]
Silicon-photonics-based optical neural network (ONN) is a promising hardware platform that could represent a paradigm shift in efficient AI.
In this work, we propose a closed-loop ONN on-chip learning framework L2ight to enable scalable ONN mapping and efficient in-situ learning.
arXiv Detail & Related papers (2021-10-27T22:53:47Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which uses dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.