Modeling Silicon-Photonic Neural Networks under Uncertainties
- URL: http://arxiv.org/abs/2012.10594v1
- Date: Sat, 19 Dec 2020 04:41:26 GMT
- Title: Modeling Silicon-Photonic Neural Networks under Uncertainties
- Authors: Sanmitra Banerjee, Mahdi Nikdast, and Krishnendu Chakrabarty
- Abstract summary: Silicon-photonic neural networks (SPNNs) offer substantial improvements in computing speed and energy efficiency compared to their digital electronic counterparts.
However, the energy efficiency and accuracy of SPNNs are highly impacted by uncertainties that arise from fabrication-process and thermal variations.
We present the first comprehensive and hierarchical study on the impact of random uncertainties on the classification accuracy of a Mach-Zehnder Interferometer (MZI)-based SPNN.
- Score: 4.205518884494758
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Silicon-photonic neural networks (SPNNs) offer substantial improvements in
computing speed and energy efficiency compared to their digital electronic
counterparts. However, the energy efficiency and accuracy of SPNNs are highly
impacted by uncertainties that arise from fabrication-process and thermal
variations. In this paper, we present the first comprehensive and hierarchical
study on the impact of random uncertainties on the classification accuracy of a
Mach-Zehnder Interferometer (MZI)-based SPNN. We show that such impact can vary
based on both the location and characteristics (e.g., tuned phase angles) of a
non-ideal silicon-photonic device. Simulation results show that in an SPNN with
two hidden layers and 1374 tunable-thermal-phase shifters, random uncertainties
even in mature fabrication processes can lead to a catastrophic 70% accuracy
loss.
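As a rough illustration of the uncertainty mechanism described above, the sketch below perturbs the tuned phase angles of a single 2x2 MZI with Gaussian noise and measures how far the resulting transfer matrix drifts from the ideal one. This is not the paper's code: the MZI convention, noise level, and function names are illustrative assumptions.

```python
import numpy as np

def mzi(theta, phi):
    """2x2 transfer matrix of an ideal MZI with internal phase theta
    and external phase shifter phi (one common convention)."""
    return np.exp(1j * theta / 2) * np.array([
        [np.exp(1j * phi) * np.sin(theta / 2), np.cos(theta / 2)],
        [np.exp(1j * phi) * np.cos(theta / 2), -np.sin(theta / 2)],
    ])

def fidelity(u_ideal, u_noisy):
    """Normalized overlap between two same-size unitaries (1.0 = identical)."""
    n = u_ideal.shape[0]
    return abs(np.trace(u_ideal.conj().T @ u_noisy)) / n

rng = np.random.default_rng(0)
theta, phi = 1.0, 0.5
sigma = 0.05  # assumed std. dev. of Gaussian phase noise, in radians

u0 = mzi(theta, phi)
u1 = mzi(theta + rng.normal(0, sigma), phi + rng.normal(0, sigma))
print(f"fidelity under phase noise: {fidelity(u0, u1):.4f}")
```

In a full mesh, such per-device drifts compound across every MZI an optical path traverses, which is consistent with small phase errors producing the large network-level accuracy losses reported above.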
Related papers
- General Self-Prediction Enhancement for Spiking Neurons [71.01912385372577]
Spiking Neural Networks (SNNs) are highly energy-efficient due to event-driven, sparse computation, but their training is challenged by spike non-differentiability and trade-offs among performance, efficiency, and biological plausibility.
We propose a self-prediction enhanced spiking neuron method that generates an internal prediction current from its input-output history to modulate the membrane potential.
This design offers dual advantages: it creates a continuous gradient path that alleviates vanishing gradients and boosts training stability and accuracy, while also aligning with biological principles, resembling distal dendritic modulation and error-driven synaptic plasticity.
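The "internal prediction current" idea can be pictured with a minimal leaky integrate-and-fire sketch. The feedback form, constants, and names below are illustrative assumptions, not the paper's actual model:

```python
def lif_with_prediction(inputs, tau=10.0, v_th=1.0, alpha=0.2):
    """Leaky integrate-and-fire neuron with a simple self-prediction
    current: a running average of the recent input drive is fed back
    to modulate the membrane potential. All constants are illustrative."""
    v, pred = 0.0, 0.0
    spikes = []
    for x in inputs:
        pred = 0.9 * pred + 0.1 * x          # input-history prediction
        v += (-v / tau) + x + alpha * pred   # leak + input + modulation
        if v >= v_th:
            spikes.append(1)
            v = 0.0                          # reset after spiking
        else:
            spikes.append(0)
    return spikes

spikes = lif_with_prediction([0.3] * 20)
print(spikes)
```

With a constant sub-threshold input, the prediction current gradually raises the effective drive, so the neuron begins to spike after a few integration steps.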
arXiv Detail & Related papers (2026-01-29T15:08:48Z) - Modeling Membrane Degradation in PEM Electrolyzers with Physics-Informed Neural Networks [45.32169712547367]
Proton exchange membrane (PEM) electrolyzers are pivotal for sustainable hydrogen production.
Their long-term performance is hindered by membrane degradation, which poses reliability and safety challenges.
Traditional physics-based models have been developed, offering interpretability but requiring numerous parameters that are often difficult to measure and calibrate.
This study presents the first application of Physics-Informed Neural Networks (PINNs) to model membrane degradation in PEM electrolyzers.
arXiv Detail & Related papers (2025-06-19T15:46:49Z) - Energy-Efficient Digital Design: A Comparative Study of Event-Driven and Clock-Driven Spiking Neurons [42.170149806080204]
This paper presents a comprehensive evaluation of Spiking Neural Network (SNN) neuron models for hardware acceleration.
We begin our investigation in software, rapidly prototyping and testing various SNN models based on different variants of the Leaky Integrate and Fire (LIF) neuron.
Our subsequent hardware phase, implemented on FPGA, validates the simulation findings and offers practical insights into design trade-offs.
arXiv Detail & Related papers (2025-06-16T09:10:19Z) - Combining Aggregated Attention and Transformer Architecture for Accurate and Efficient Performance of Spiking Neural Networks [44.145870290310356]
Spiking Neural Networks have attracted significant attention in recent years due to their distinctive low-power characteristics.
Transformer models, known for their powerful self-attention mechanisms and parallel processing capabilities, have demonstrated exceptional performance across various domains.
Despite the significant advantages of both SNNs and Transformers, directly combining the low-power benefits of SNNs with the high performance of Transformers remains challenging.
arXiv Detail & Related papers (2024-12-18T07:07:38Z) - Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z) - Design and simulation of a transmon qubit chip for Axion detection [103.69390312201169]
Devices based on superconducting qubits have been successfully applied to detect few-GHz single photons via Quantum Non-Demolition (QND) measurement.
In this study, we present Qub-IT's status towards the realization of its first superconducting qubit device.
arXiv Detail & Related papers (2023-10-08T17:11:42Z) - Photonic Accelerators for Image Segmentation in Autonomous Driving and Defect Detection [34.864059478265055]
Photonic computing promises faster and more energy-efficient deep neural network (DNN) inference than traditional digital hardware.
We show that certain segmentation models exhibit negligible loss in accuracy (compared to digital float32 models) when executed on photonic accelerators.
We discuss the challenges and potential optimizations that can help improve the application of photonic accelerators to such computer vision tasks.
arXiv Detail & Related papers (2023-09-28T18:22:41Z) - Classification robustness to common optical aberrations [64.08840063305313]
This paper proposes OpticsBench, a benchmark for investigating robustness to realistic, practically relevant optical blur effects.
Experiments on ImageNet show that, for a variety of different pre-trained DNNs, performance varies strongly under realistic optical kernels compared to simple disk-shaped kernels.
We show on ImageNet-100 that robustness can be increased with OpticsAugment, which uses optical kernels as data augmentation.
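The disk-shaped blur kernels mentioned above can be reproduced in a few lines. This is a minimal sketch of optical-kernel augmentation on a grayscale image, with the kernel shape and naive convolution as illustrative assumptions rather than OpticsBench/OpticsAugment code:

```python
import numpy as np

def disk_kernel(radius):
    """Binary disk-shaped blur kernel, normalized to sum to 1."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def blur(image, kernel):
    """Naive 2D convolution with zero padding (grayscale image)."""
    r = kernel.shape[0] // 2
    padded = np.pad(image, r)
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 2 * r + 1, j:j + 2 * r + 1] * kernel)
    return out

img = np.zeros((9, 9))
img[4, 4] = 1.0  # point source
blurred = blur(img, disk_kernel(2))  # energy spreads over the disk footprint
```

Replacing the disk with a kernel derived from real lens aberrations (as the benchmark does) changes only the `kernel` argument, which is what makes this style of augmentation cheap to add to a training pipeline.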
arXiv Detail & Related papers (2023-08-29T08:36:00Z) - Analysis of Optical Loss and Crosstalk Noise in MZI-based Coherent Photonic Neural Networks [8.930237478906266]
Silicon-photonic-based neural network (SP-NN) accelerators have emerged as a promising alternative to electronic accelerators.
In this paper, we comprehensively model the optical loss and crosstalk noise using a bottom-up approach.
We show a high power penalty and a catastrophic inference-accuracy drop of up to 84% for SP-NNs of different scales.
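A bottom-up loss model of this kind can be sketched as compounding per-device insertion losses along the worst-case optical path of an MZI mesh. The loss figures and mesh-depth formula below are illustrative assumptions, not the paper's calibrated values:

```python
# Illustrative per-device parameters (dB) -- assumptions, not from the paper.
MZI_INSERTION_LOSS_DB = 0.3   # loss per MZI
WAVEGUIDE_LOSS_DB = 0.05      # loss per waveguide segment

def mesh_loss_db(n):
    """Worst-case path in an n x n triangular (Reck-style) MZI mesh
    traverses roughly 2n - 3 MZIs; losses add in dB along the path."""
    depth = 2 * n - 3
    return depth * (MZI_INSERTION_LOSS_DB + WAVEGUIDE_LOSS_DB)

def power_retained(n):
    """Linear fraction of optical power surviving the worst-case path."""
    return 10 ** (-mesh_loss_db(n) / 10)

for n in (8, 16, 64):
    print(f"n={n:3d}: worst-case loss {mesh_loss_db(n):6.2f} dB, "
          f"power retained {power_retained(n):.3%}")
```

Even with sub-dB per-device losses, the retained power shrinks exponentially with mesh size, which is why scaling up SP-NNs drives the power penalties reported above.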
arXiv Detail & Related papers (2023-08-07T02:01:18Z) - Accurate melting point prediction through autonomous physics-informed learning [52.217497897835344]
We present an algorithm for computing melting points by autonomously learning from coexistence simulations in the NPT ensemble.
We demonstrate how incorporating physical models of the solid-liquid coexistence evolution enhances the algorithm's accuracy and enables optimal decision-making.
arXiv Detail & Related papers (2023-06-23T07:53:09Z) - Database of semiconductor point-defect properties for applications in quantum technologies [54.17256385566032]
We have calculated over 50,000 point defects in various semiconductors including diamond, silicon carbide, and silicon.
We characterize the relevant optical and electronic properties of these defects, including formation energies, spin characteristics, transition dipole moments, zero-phonon lines.
We find 2331 composite defects which are stable in intrinsic silicon, which are then filtered to identify many new optically bright telecom spin qubit candidates and single-photon sources.
arXiv Detail & Related papers (2023-03-28T19:51:08Z) - The Hardware Impact of Quantization and Pruning for Weights in Spiking Neural Networks [0.368986335765876]
Quantization and pruning of parameters can both compress the model size, reduce memory footprint, and facilitate low-latency execution.
We study various combinations of pruning and quantization in isolation, cumulatively, and simultaneously to a state-of-the-art SNN targeting gesture recognition.
We show that this state-of-the-art model is amenable to aggressive parameter quantization, not suffering from any loss in accuracy down to ternary weights.
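Ternary weight quantization of the kind described can be sketched as a sign-plus-threshold mapping that prunes and quantizes jointly. The threshold heuristic below is an illustrative assumption, not the paper's exact scheme:

```python
import numpy as np

def ternarize(weights, threshold_frac=0.05):
    """Map weights to {-1, 0, +1}: zero out small-magnitude weights
    (acting as pruning) and keep only the sign of the rest.
    threshold_frac is an illustrative hyperparameter."""
    t = threshold_frac * np.max(np.abs(weights))
    q = np.sign(weights)
    q[np.abs(weights) < t] = 0
    return q.astype(np.int8)

rng = np.random.default_rng(1)
w = rng.normal(0, 0.1, size=(4, 4))
wq = ternarize(w)
print(wq)
print(f"sparsity after ternarization: {np.mean(wq == 0):.0%}")
```

Storing int8 (or 2-bit packed) ternary weights instead of float32 is where the memory and latency savings discussed above come from.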
arXiv Detail & Related papers (2023-02-08T16:25:20Z) - Optical Neural Ordinary Differential Equations [44.97261923694945]
We propose the optical neural ordinary differential equations (ON-ODE) architecture that parameterizes the continuous dynamics of hidden layers with optical ODE solvers.
The ON-ODE comprises the PNNs followed by the photonic integrator and optical feedback loop, which can be configured to represent residual neural networks (ResNet) and recurrent neural networks with effectively reduced chip area occupancy.
arXiv Detail & Related papers (2022-09-26T04:04:02Z) - Characterizing Coherent Integrated Photonic Neural Networks under Imperfections [7.387054116520716]
Integrated photonic neural networks (IPNNs) are emerging as promising successors to conventional electronic AI accelerators.
In this paper, we systematically characterize the impact of uncertainties and imprecisions in IPNNs using a bottom-up approach.
arXiv Detail & Related papers (2022-07-22T01:33:19Z) - Characterization and Optimization of Integrated Silicon-Photonic Neural Networks under Fabrication-Process Variations [8.690877625458324]
Silicon-photonic neural networks (SPNNs) have emerged as promising successors to electronic artificial intelligence (AI) accelerators.
The underlying silicon photonic devices in SPNNs are sensitive to inevitable fabrication-process variations (FPVs) stemming from optical lithography imperfections.
We propose a novel variation-aware, design-time optimization solution to improve MZI tolerance to different FPVs in SPNNs.
arXiv Detail & Related papers (2022-04-19T23:03:36Z) - LoCI: An Analysis of the Impact of Optical Loss and Crosstalk Noise in Integrated Silicon-Photonic Neural Networks [8.930237478906266]
Integrated silicon-photonic neural networks (SP-NNs) promise higher speed and energy efficiency for emerging artificial-intelligence applications.
This paper presents the first comprehensive and systematic optical loss and crosstalk modeling framework for SP-NNs.
arXiv Detail & Related papers (2022-04-08T04:22:39Z) - Enhanced physics-constrained deep neural networks for modeling vanadium redox flow battery [62.997667081978825]
We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge--discharge cycle, including the tail region of the voltage discharge curve.
arXiv Detail & Related papers (2022-03-03T19:56:24Z) - Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.