Comparison of Models for Training Optical Matrix Multipliers in
Neuromorphic PICs
- URL: http://arxiv.org/abs/2111.14787v1
- Date: Tue, 23 Nov 2021 12:15:21 GMT
- Title: Comparison of Models for Training Optical Matrix Multipliers in
Neuromorphic PICs
- Authors: Ali Cem, Siqi Yan, Uiara Celine de Moura, Yunhong Ding, Darko Zibar
and Francesco Da Ros
- Abstract summary: We compare simple physics-based vs. data-driven neural-network-based models for offline training of programmable photonic chips.
The neural-network model outperforms physics-based models for a chip with thermal crosstalk, yielding increased testing accuracy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We experimentally compare simple physics-based vs. data-driven
neural-network-based models for offline training of programmable photonic chips
using Mach-Zehnder interferometer meshes. The neural-network model outperforms
physics-based models for a chip with thermal crosstalk, yielding increased
testing accuracy.
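To make the comparison concrete, the sketch below (an illustrative assumption, not code from the paper) writes down one common idealized transfer matrix for a single Mach-Zehnder interferometer. A physics-based model composes such closed-form expressions across the mesh, whereas the data-driven alternative fits a neural network to measured input/output behavior, which lets it absorb non-idealities such as thermal crosstalk.

```python
import numpy as np

def mzi_transfer(theta, phi):
    """Ideal 2x2 transfer matrix of a single MZI with internal phase theta and
    external phase phi (one common convention; real devices deviate due to
    loss, splitter imbalance, and thermal crosstalk)."""
    return 1j * np.exp(1j * theta / 2) * np.array([
        [np.exp(1j * phi) * np.sin(theta / 2), np.cos(theta / 2)],
        [np.exp(1j * phi) * np.cos(theta / 2), -np.sin(theta / 2)],
    ])

# A mesh of such MZIs implements a programmable matrix: the physics-based model
# predicts the implemented weights directly from the phase settings, while a
# data-driven model is instead fitted to measured input/output data.
print(np.round(mzi_transfer(np.pi / 2, 0.0), 3))
```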
Related papers
- Applying generative neural networks for fast simulations of the ALICE (CERN) experiment [0.0]
This thesis investigates the application of state-of-the-art advances in generative neural networks for fast simulation of the Zero Degree Calorimeter (ZDC) neutron detector at CERN.
Traditional simulation methods using the GEANT Monte Carlo toolkit, while accurate, are computationally demanding.
The thesis provides a comprehensive literature review on the application of neural networks in computer vision, fast simulations using machine learning, and generative neural networks in high-energy physics.
arXiv Detail & Related papers (2024-07-10T17:08:59Z)
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
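As a rough illustration of such a loop (assumed setup and parameters, not the paper's construction), the sketch below refits a Gaussian kernel density estimate on a mixture of real and previously generated samples and tracks how a simple statistic drifts across generations.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=500)   # "real" data from the true distribution
train = real.copy()

# Each generation refits a KDE on a mix of real and previously generated data
# (the mixing fraction and sample sizes are arbitrary choices for this sketch).
for gen in range(5):
    kde = gaussian_kde(train)
    synthetic = kde.resample(500)[0]
    keep_real = rng.choice(real, size=250, replace=False)
    keep_synth = rng.choice(synthetic, size=250, replace=False)
    train = np.concatenate([keep_real, keep_synth])
    print(f"generation {gen}: std of training data = {train.std():.3f}")
```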
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- A Multi-Grained Symmetric Differential Equation Model for Learning Protein-Ligand Binding Dynamics [74.93549765488103]
In drug discovery, molecular dynamics simulation provides a powerful tool for predicting binding affinities, estimating transport properties, and exploring pocket sites.
We propose NeuralMD, the first machine learning surrogate that can facilitate numerical MD and provide accurate simulations in protein-ligand binding.
We show the efficiency and effectiveness of NeuralMD, with a 2000× speedup over standard numerical MD simulation and outperforming all other ML approaches by up to 80% under the stability metric.
arXiv Detail & Related papers (2024-01-26T09:35:17Z)
- Addressing Data Scarcity in Optical Matrix Multiplier Modeling Using Transfer Learning [0.0]
We present and experimentally evaluate the use of transfer learning to address experimental data scarcity.
Our approach involves pre-training the model using synthetic data generated from a less accurate analytical model.
We achieve 1 dB root-mean-square error on the matrix weights implemented by a 3x3 photonic chip while using only 25% of the available data.
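A minimal sketch of this pre-train-then-fine-tune recipe follows, with hypothetical input/output dimensions and stand-in data (none of it taken from the paper).

```python
import torch
from torch import nn

# Hypothetical setup: the model maps 6 phase-shifter settings to the 9 entries
# of a 3x3 weight matrix; architecture and data below are placeholders.
model = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 9))
loss_fn = nn.MSELoss()

def fit(x, y, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# 1) Pre-train on plentiful synthetic data generated by a cheap analytical model.
x_syn, y_syn = torch.rand(5000, 6), torch.rand(5000, 9)   # stand-ins for simulated data
fit(x_syn, y_syn, epochs=200, lr=1e-3)

# 2) Fine-tune on the small experimental dataset, typically with a lower
#    learning rate, so far less measured data is needed.
x_exp, y_exp = torch.rand(100, 6), torch.rand(100, 9)     # stand-ins for measurements
fit(x_exp, y_exp, epochs=50, lr=1e-4)
```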
arXiv Detail & Related papers (2023-08-10T07:33:00Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Data-efficient Modeling of Optical Matrix Multipliers Using Transfer Learning [0.0]
We demonstrate transfer learning-assisted neural network models for optical matrix multipliers with scarce measurement data.
Our approach uses only 10% of the experimental data needed for best performance and outperforms analytical models for a Mach-Zehnder interferometer mesh.
arXiv Detail & Related papers (2022-11-29T09:22:42Z)
- Continual learning autoencoder training for a particle-in-cell simulation via streaming [52.77024349608834]
The upcoming exascale era will provide a new generation of physics simulations with very high resolution.
This high resolution will affect how machine learning models are trained, since storing such large amounts of simulation data on disk is nearly impossible.
This work presents an approach that trains a neural network concurrently with a running simulation, without writing data to disk.
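A minimal sketch of this idea, assuming a stand-in data stream in place of the particle-in-cell simulation (the architecture and sizes are placeholders, not the authors'):

```python
import torch
from torch import nn

def simulation_stream(steps, batch=64, features=256):
    """Stand-in for a running particle-in-cell simulation emitting snapshots."""
    for _ in range(steps):
        yield torch.rand(batch, features)

# Train an autoencoder on batches as they are produced, never writing them to disk.
autoencoder = nn.Sequential(nn.Linear(256, 32), nn.ReLU(), nn.Linear(32, 256))
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for snapshot in simulation_stream(steps=1000):
    opt.zero_grad()
    loss = loss_fn(autoencoder(snapshot), snapshot)
    loss.backward()
    opt.step()
```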
arXiv Detail & Related papers (2022-11-09T09:55:14Z)
- An Adversarial Active Sampling-based Data Augmentation Framework for Manufacturable Chip Design [55.62660894625669]
Lithography modeling is a crucial problem in chip design, ensuring that a chip design mask is manufacturable.
Recent developments in machine learning have provided alternative solutions that replace time-consuming lithography simulations with deep neural networks.
We propose a litho-aware data augmentation framework to resolve the dilemma of limited data and improve the machine learning model performance.
arXiv Detail & Related papers (2022-10-27T20:53:39Z)
- Data-driven Modeling of Mach-Zehnder Interferometer-based Optical Matrix Multipliers [0.0]
Photonic integrated circuits are facilitating the development of optical neural networks.
We describe both simple analytical models and data-driven models for offline training of optical matrix multipliers.
The neural network-based models outperform the simple physics-based models in terms of prediction error.
arXiv Detail & Related papers (2022-10-17T15:19:26Z)
- Real-time Neural-MPC: Deep Learning Model Predictive Control for Quadrotors and Agile Robotic Platforms [59.03426963238452]
We present Real-time Neural MPC, a framework to efficiently integrate large, complex neural network architectures as dynamics models within a model-predictive control pipeline.
We show the feasibility of our framework on real-world problems by reducing the positional tracking error by up to 82% when compared to state-of-the-art MPC approaches without neural network dynamics.
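For orientation, the sketch below shows one generic way to use a neural network dynamics model inside an MPC-style loop (random-shooting over candidate action sequences); it is a simplified stand-in for the general idea, not the Real-time Neural MPC pipeline.

```python
import torch
from torch import nn

# A learned dynamics model: (state, action) -> next state (untrained placeholder).
dynamics = nn.Sequential(nn.Linear(4 + 2, 64), nn.ReLU(), nn.Linear(64, 4))

def plan(state, horizon=10, candidates=256):
    """Random-shooting MPC: roll candidate action sequences through the learned
    model and return the first action of the cheapest rollout."""
    actions = torch.rand(candidates, horizon, 2) * 2 - 1
    s = state.expand(candidates, -1)
    cost = torch.zeros(candidates)
    with torch.no_grad():
        for t in range(horizon):
            s = dynamics(torch.cat([s, actions[:, t]], dim=1))
            cost += (s ** 2).sum(dim=1)   # example cost: drive the state to the origin
    return actions[cost.argmin(), 0]

print(plan(torch.zeros(1, 4)))
```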
arXiv Detail & Related papers (2022-03-15T09:38:15Z)
- PhysiNet: A Combination of Physics-based Model and Neural Network Model for Digital Twins [0.5076419064097732]
This paper proposes a model that combines the physics-based model and the neural network model to improve the prediction accuracy for the whole life cycle of a system.
Experiments showed that the proposed hybrid model outperformed both the physics-based model and the neural network model.
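One simple way to combine the two kinds of model, shown below purely as an illustration (not necessarily PhysiNet's exact formulation), is to let a small network learn the residual that the physics-based model leaves behind.

```python
import torch
from torch import nn

def physics_model(x):
    """Stand-in for a first-principles prediction (assumed, simplistic form)."""
    return 2.0 * x.sum(dim=1, keepdim=True)

# A small network learns the residual left by the physics model.
correction = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))

def hybrid(x):
    return physics_model(x) + correction(x)

opt = torch.optim.Adam(correction.parameters(), lr=1e-3)
x = torch.rand(256, 4)
y = 2.0 * x.sum(dim=1, keepdim=True) + torch.sin(x[:, :1])  # synthetic target with a residual
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(hybrid(x), y)
    loss.backward()
    opt.step()
```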
arXiv Detail & Related papers (2021-06-28T15:13:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.