Compatible Learning for Deep Photonic Neural Network
- URL: http://arxiv.org/abs/2003.08360v1
- Date: Sat, 14 Mar 2020 13:21:07 GMT
- Title: Compatible Learning for Deep Photonic Neural Network
- Authors: Yong-Liang Xiao, Rongguang Liang, Jianxin Zhong, Xianyu Su, Zhisheng You
- Abstract summary: Photonic neural networks have significant potential for prediction-oriented tasks.
We develop a compatible learning protocol in complex space, in which the nonlinear activation can be selected efficiently.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Realizing deep learning with coherent optical fields has recently
attracted remarkable attention, since optical matrix manipulation can be
executed at the speed of light with inherent parallelism and low latency.
Photonic neural networks therefore hold significant potential for
prediction-oriented tasks. Yet real-valued backpropagation is somewhat
intractable for training coherent photonic networks. We develop a compatible
learning protocol in complex space, in which the nonlinear activation can be
selected efficiently according to the unveiled compatible condition.
Compatibility means that the matrix representation in complex space covers its
real counterpart, which enables mingled single-channel training in real and
complex space as a unified model. A phase-logic XOR gate built from
Mach-Zehnder interferometers and a diffractive neural network with an optical
modulation mechanism, both implementing weights learned through compatible
learning, are presented to demonstrate feasibility. Compatible learning opens
a promising window for deep photonic neural networks.
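The compatibility condition builds on a standard fact of linear algebra: every complex matrix acting on a complex vector has an equivalent real block-matrix representation, and a purely real layer is the special case with zero imaginary part. The NumPy sketch below is our illustration of that embedding, not code from the paper; all names and sizes are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_embedding(W):
    """Embed a complex matrix W = A + iB as the real block matrix
    [[A, -B], [B, A]], which acts on stacked [Re(z); Im(z)] vectors."""
    A, B = W.real, W.imag
    return np.block([[A, -B], [B, A]])

n = 4
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# The complex-space product and its real-space counterpart agree:
out_complex = W @ z
out_real = real_embedding(W) @ np.concatenate([z.real, z.imag])
assert np.allclose(out_real,
                   np.concatenate([out_complex.real, out_complex.imag]))

# A purely real weight matrix is the special case B = 0, so a real-valued
# layer is recovered inside the complex model -- one way to read the claim
# that real and complex layers can share a single mingled training channel.
W_real = rng.standard_normal((n, n)) + 0j
Z = np.zeros((n, n))
assert np.allclose(real_embedding(W_real),
                   np.block([[W_real.real, Z], [Z, W_real.real]]))
```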
Related papers
- Optical training of large-scale Transformers and deep neural networks with direct feedback alignment [48.90869997343841]
We experimentally implement a versatile and scalable training algorithm, called direct feedback alignment, on a hybrid electronic-photonic platform.
An optical processing unit performs large-scale random matrix multiplications, which is the central operation of this algorithm, at speeds up to 1500 TeraOps.
We study the compute scaling of our hybrid optical approach, and demonstrate a potential advantage for ultra-deep and wide neural networks.
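For orientation, direct feedback alignment replaces backprop's transposed weight matrices with fixed random feedback matrices, so each hidden layer's update needs only one random projection of the output error; that projection is the large matrix multiply the optical processor accelerates. Below is a minimal NumPy sketch of the algorithm on a toy regression problem; all dimensions and constants are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
tanh = np.tanh

# Toy 2-hidden-layer network trained with direct feedback alignment (DFA).
d_in, d_h, d_out, lr = 8, 16, 4, 0.05
W1 = rng.standard_normal((d_h, d_in)) * 0.1
W2 = rng.standard_normal((d_h, d_h)) * 0.1
W3 = rng.standard_normal((d_out, d_h)) * 0.1
B1 = rng.standard_normal((d_h, d_out))   # fixed random feedback matrices
B2 = rng.standard_normal((d_h, d_out))

x = rng.standard_normal(d_in)
y = rng.standard_normal(d_out)

for _ in range(200):
    a1 = W1 @ x;  h1 = tanh(a1)
    a2 = W2 @ h1; h2 = tanh(a2)
    y_hat = W3 @ h2
    e = y_hat - y                        # output error
    # Random projections of the error replace backpropagated gradients.
    d2 = (B2 @ e) * (1 - h2 ** 2)
    d1 = (B1 @ e) * (1 - h1 ** 2)
    W3 -= lr * np.outer(e, h2)
    W2 -= lr * np.outer(d2, h1)
    W1 -= lr * np.outer(d1, x)

print("final loss:",
      0.5 * np.sum((W3 @ tanh(W2 @ tanh(W1 @ x)) - y) ** 2))
```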
arXiv Detail & Related papers (2024-09-01T12:48:47Z)
- Coherence Awareness in Diffractive Neural Networks [21.264497139730473]
We show that in diffractive networks the degree of spatial coherence has a dramatic effect.
In particular, we show that when the spatial coherence length on the object is comparable to the minimal feature size preserved by the optical system, neither the incoherent nor the coherent extremes serve as acceptable approximations.
arXiv Detail & Related papers (2024-08-13T07:19:40Z)
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- Scalable Nanophotonic-Electronic Spiking Neural Networks [3.9918594409417576]
Spiking neural networks (SNNs) provide a new computational paradigm capable of highly parallelized, real-time processing.
Photonic devices are ideal for the design of high-bandwidth, parallel architectures matching the SNN computational paradigm.
Co-integrated CMOS and SiPh technologies are well-suited to the design of scalable SNN computing architectures.
arXiv Detail & Related papers (2022-08-28T06:10:06Z)
- Hybrid training of optical neural networks [1.0323063834827415]
Optical neural networks are emerging as a promising type of machine learning hardware.
These networks are mainly developed to perform optical inference after in silico training on digital simulators.
We show that hybrid training of optical neural networks can be applied to a wide variety of optical neural networks.
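Hybrid training, in the sense used here, typically evaluates the forward pass on the physical (imperfect) device while computing gradients with an idealized digital model. The toy sketch below illustrates that split under our own simplifying assumptions (a linear layer with a static, unknown fabrication error `eps`); it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# The "optical" forward pass carries hardware imperfections that the
# digital model ignores; the loss is always evaluated on the physical
# output, while gradients come from the idealized digital twin.
d_in, d_out, lr = 6, 3, 0.1
W = rng.standard_normal((d_out, d_in)) * 0.1
eps = 0.05 * rng.standard_normal((d_out, d_in))  # unknown fabrication error

def optical_forward(W, x):           # stand-in for the physical device
    return (W + eps) @ x

x = rng.standard_normal(d_in)
y = rng.standard_normal(d_out)

for _ in range(100):
    y_phys = optical_forward(W, x)   # measured on "hardware"
    e = y_phys - y                   # error uses the PHYSICAL output...
    W -= lr * np.outer(e, x)         # ...while dy/dW = x comes from the
                                     # ideal digital model of the layer
print("residual:", np.linalg.norm(optical_forward(W, x) - y))
```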
arXiv Detail & Related papers (2022-03-20T21:16:42Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
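As rough intuition for the mechanism, the sketch below (a heavily simplified, single-example toy of our own making, not the paper's formulation) uses an integral feedback controller to push the output toward its target and then applies local, controller-driven weight updates; the feedback matrix Q1 and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

d_in, d_h, d_out = 5, 8, 2
W1 = rng.standard_normal((d_h, d_in)) * 0.2
W2 = rng.standard_normal((d_out, d_h)) * 0.2
Q1 = rng.standard_normal((d_h, d_out)) * 0.1  # feedback into hidden layer
x = rng.standard_normal(d_in)
y_star = rng.standard_normal(d_out)
lr, k, steps = 0.05, 0.5, 50

for _ in range(60):                   # outer loop: weight updates
    u = np.zeros(d_out)               # controller state
    for _ in range(steps):            # fast controller dynamics
        v1 = W1 @ x + Q1 @ u          # hidden pre-activation under control
        h1 = np.tanh(v1)
        y = W2 @ h1 + u               # controller also acts on the output
        u += k * (y_star - y)         # integral control on the error
    # Local plasticity: move the feedforward prediction toward the
    # controlled (target-consistent) activity at each layer.
    W1 += lr * np.outer(v1 - W1 @ x, x)
    W2 += lr * np.outer(y_star - W2 @ h1, h1)

print("output error:", np.linalg.norm(W2 @ np.tanh(W1 @ x) - y_star))
```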
arXiv Detail & Related papers (2021-06-15T05:30:17Z)
- Photonic neural field on a silicon chip: large-scale, high-speed neuro-inspired computing and sensing [0.0]
Photonic neural networks have significant potential for high-speed neural processing with low latency and ultralow energy consumption.
We propose the concept of a photonic neural field and implement it experimentally on a silicon chip to realize highly scalable neuro-inspired computing.
In this study, we use the on-chip photonic neural field as a reservoir of information and demonstrate a high-speed chaotic time-series prediction with low errors.
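Using an on-chip field as a "reservoir of information" follows the reservoir-computing recipe: the dynamical substrate is fixed and random, and only a linear readout is trained. A minimal echo-state-network sketch, with a logistic-map toy signal standing in for the chaotic time series, is given below; all sizes and constants are ours.

```python
import numpy as np

rng = np.random.default_rng(4)

# Fixed random reservoir; the photonic neural field plays this role.
N, steps, washout = 200, 2000, 200
W_res = rng.standard_normal((N, N))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1
W_in = rng.standard_normal(N) * 0.5

# Toy chaotic driver signal (logistic map).
u = np.empty(steps); u[0] = 0.4
for t in range(steps - 1):
    u[t + 1] = 3.9 * u[t] * (1 - u[t])

# Collect reservoir states.
x = np.zeros(N)
X = np.zeros((steps - 1, N))
for t in range(steps - 1):
    x = np.tanh(W_res @ x + W_in * u[t])
    X[t] = x

# Only the linear readout is trained, here by ridge regression, to
# predict the next value of the series.
A, b = X[washout:], u[washout + 1:]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ b)
print("train NRMSE:", np.sqrt(np.mean((A @ w_out - b) ** 2)) / np.std(b))
```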
arXiv Detail & Related papers (2021-05-22T09:28:51Z)
- Rapid characterisation of linear-optical networks via PhaseLift [51.03305009278831]
Integrated photonics offers great phase stability and can rely on the large-scale manufacturability provided by the semiconductor industry.
New devices, based on such optical circuits, hold the promise of faster and energy-efficient computations in machine learning applications.
We present a novel technique to reconstruct the transfer matrix of linear optical networks.
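PhaseLift recovers a vector from intensity-only measurements by lifting it to a rank-one positive semidefinite matrix and solving a trace-minimization SDP; applied column by column, this can yield a transfer matrix. The sketch below is our toy version for a single column, using the cvxpy modeling library; probe design and problem sizes are illustrative.

```python
import numpy as np
import cvxpy as cp  # convex optimization; pip install cvxpy

rng = np.random.default_rng(5)

# Recover one column x of a transfer matrix from intensity-only probes
# b_i = |a_i^H x|^2 by lifting to X = x x^H.
n, m = 4, 40
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = np.abs(A @ x_true) ** 2

X = cp.Variable((n, n), hermitian=True)
constraints = [X >> 0]
constraints += [cp.real(A[i].conj() @ X @ A[i]) == b[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(X))), constraints)
prob.solve()

# The lifted solution should be (near) rank one; its top eigenvector
# recovers x up to an unobservable global phase.
w, V = np.linalg.eigh(X.value)
x_hat = np.sqrt(w[-1]) * V[:, -1]
v = np.vdot(x_hat, x_true)
print("recovery error:", np.linalg.norm(x_hat * (v / abs(v)) - x_true))
```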
arXiv Detail & Related papers (2020-10-01T16:04:22Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, reflecting the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
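Concretely, the complete-graph view attaches a learnable scalar gate to every edge between computation nodes, so connectivity becomes a continuous, differentiable quantity. The forward-pass sketch below is our illustration of that parametrization; the sigmoid gating and all shapes are assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(6)
sigmoid = lambda a: 1 / (1 + np.exp(-a))

# Network as a complete DAG over computation nodes: every edge (i -> j),
# i < j, carries a learnable logit alpha[i, j] that gates how strongly
# node i's output feeds node j. Gradient descent on alpha (not shown)
# then searches over connectivity patterns differentiably.
n_nodes, d = 5, 8
Ws = [rng.standard_normal((d, d)) * 0.3 for _ in range(n_nodes)]
alpha = rng.standard_normal((n_nodes, n_nodes))  # edge logits

def forward(x0):
    outs = [np.tanh(Ws[0] @ x0)]
    for j in range(1, n_nodes):
        # Weighted sum over ALL earlier nodes -- the complete-graph view.
        agg = sum(sigmoid(alpha[i, j]) * outs[i] for i in range(j))
        outs.append(np.tanh(Ws[j] @ agg))
    return outs[-1]

print(forward(rng.standard_normal(d))[:3])
```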
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Unitary Learning for Deep Diffractive Neural Network [0.0]
We present a unitary learning protocol for deep diffractive neural networks.
The temporal-space evolution characteristic of unitary learning is formulated and elucidated.
As a preliminary application, a deep diffractive neural network with unitary learning is tentatively implemented on 2D classification and verification tasks.
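Lossless diffractive propagation is unitary, so the learned weights must stay on the unitary group. One standard way to enforce this, shown in the sketch below as our illustration rather than the paper's specific protocol, is to parametrize U = exp(iH) with a Hermitian generator H and optimize over H.

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

rng = np.random.default_rng(7)

# Parametrize a layer as U = exp(iH); gradient steps act on the free
# Hermitian matrix H, so U remains exactly unitary throughout training.
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (M + M.conj().T) / 2            # Hermitian generator
U = expm(1j * H)

# Unitarity holds for any Hermitian H ...
assert np.allclose(U.conj().T @ U, np.eye(n), atol=1e-10)
# ... so norms (optical power) are preserved through the layer.
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert np.isclose(np.linalg.norm(U @ z), np.linalg.norm(z))
```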
arXiv Detail & Related papers (2020-08-17T07:16:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.