Unitary Learning for Deep Diffractive Neural Network
- URL: http://arxiv.org/abs/2009.08935v1
- Date: Mon, 17 Aug 2020 07:16:09 GMT
- Title: Unitary Learning for Deep Diffractive Neural Network
- Authors: Yong-Liang Xiao
- Abstract summary: We present a unitary learning protocol for deep diffractive neural networks.
The temporal-space evolution characteristic of unitary learning is formulated and elucidated.
As a preliminary application, a deep diffractive neural network with unitary learning is tentatively implemented on 2D classification and verification tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Realization of deep learning with coherent diffraction has seen
remarkable development in recent years, benefiting from the fact that matrix
multiplication can be executed optically in parallel and with little power
consumption. A coherent optical field, propagating as a complex-valued entity,
can be manipulated into a task-oriented output through statistical inference.
In this paper, we present a unitary learning protocol for deep diffractive
neural networks, meeting the physical unitary prior of coherent diffraction.
Unitary learning is a backpropagation scheme that updates unitary weights by
translating gradients between Euclidean and Riemannian space. The
temporal-space evolution characteristic of unitary learning is formulated and
elucidated. In particular, a compatible condition for selecting nonlinear
activations in complex space is unveiled, encompassing the fundamental sigmoid,
tanh, and quasi-ReLU in complex space. As a preliminary application, a deep
diffractive neural network with unitary learning is tentatively implemented on
2D classification and verification tasks.
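To make the gradient translation concrete, here is a minimal NumPy sketch under assumptions of my own, not the paper's exact formulation: one Riemannian update that projects the Euclidean gradient onto the skew-Hermitian tangent space of the unitary group and retracts with the matrix exponential, together with a split tanh and a modReLU-style quasi-ReLU as common complex-valued activation stand-ins. The function names, learning rate, and placeholder gradient are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def unitary_update(W, G, lr=0.01):
    """One Riemannian gradient step that keeps W exactly unitary.

    W  : current unitary weight matrix (n x n, complex)
    G  : Euclidean gradient dL/dW (n x n, complex)

    The Euclidean gradient is translated into the Lie algebra of the
    unitary group via A = G W^H - W G^H (skew-Hermitian), then retracted
    onto the manifold with the matrix exponential.
    """
    A = G @ W.conj().T - W @ G.conj().T   # skew-Hermitian: A^H = -A
    return expm(-lr * A) @ W              # exp of skew-Hermitian is unitary

def split_tanh(z):
    """Split tanh: acts on real and imaginary parts independently."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def quasi_relu(z, b=-0.1):
    """modReLU-style quasi-ReLU: thresholds the modulus, keeps the phase."""
    mag = np.abs(z)
    return np.where(mag + b > 0, (mag + b) * z / (mag + 1e-12), 0j)

# Toy check: a random unitary layer, a complex activation, one update.
rng = np.random.default_rng(0)
n = 4
W, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = quasi_relu(W @ x)
G = np.outer(y - x, x.conj())             # placeholder gradient, illustration only
W_new = unitary_update(W, G)
print(np.allclose(W_new.conj().T @ W_new, np.eye(n)))  # True: still unitary
```

Because the retraction uses the exponential map rather than an additive step, the weights remain exactly unitary at every iteration, consistent with the physical unitary prior of lossless coherent diffraction.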
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- The Influence of Learning Rule on Representation Dynamics in Wide Neural Networks [18.27510863075184]
We analyze infinite-width deep gradient networks trained with feedback alignment (FA), direct feedback alignment (DFA), and error-modulated Hebbian learning (Hebb).
We show that, for each of these learning rules, the evolution of the output function at infinite width is governed by a time-varying effective neural tangent kernel (eNTK).
In the lazy training limit, this eNTK is static and does not evolve, while in the rich mean-field regime the kernel's evolution can be determined self-consistently with dynamical mean-field theory (DMFT).
arXiv Detail & Related papers (2022-10-05T11:33:40Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Multi-scale Feature Learning Dynamics: Insights for Double Descent [71.91871020059857]
We study the phenomenon of "double descent" of the generalization error.
We find that double descent can be attributed to distinct features being learned at different scales.
arXiv Detail & Related papers (2021-12-06T18:17:08Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Geometry Perspective Of Estimating Learning Capability Of Neural Networks [0.0]
The paper considers a broad class of neural networks with generalized architecture performing simple least-squares regression with stochastic gradient descent (SGD).
The relationship between generalization capability and the stability of the neural network is also discussed.
By correlating the principles of high-energy physics with the learning theory of neural networks, the paper establishes a variant of the Complexity-Action conjecture from an artificial neural network perspective.
arXiv Detail & Related papers (2020-11-03T12:03:19Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- Compatible Learning for Deep Photonic Neural Network [0.0]
Photonic neural networks have significant potential for prediction-oriented tasks.
We develop a compatible learning protocol in complex space, in which nonlinear activations can be selected efficiently.
arXiv Detail & Related papers (2020-03-14T13:21:07Z)