Neuromorphic Quantum Neural Networks with Tunnel-Diode Activation Functions
- URL: http://arxiv.org/abs/2503.04978v1
- Date: Thu, 06 Mar 2025 21:14:23 GMT
- Title: Neuromorphic Quantum Neural Networks with Tunnel-Diode Activation Functions
- Authors: Jake McNaughton, A. H. Abbas, Ivan S. Maksymov
- Abstract summary: Tunnel diodes are well-known electronic components that utilise the physical effect of quantum tunnelling (QT). We propose using the current-voltage characteristic of a tunnel diode as a novel, physics-based activation function for neural networks. We demonstrate that the tunnel-diode activation function (TDAF) outperforms traditional activation functions in terms of accuracy and loss during both training and evaluation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The mathematical complexity and high dimensionality of neural networks hinder the training and deployment of machine learning (ML) systems while also requiring substantial computational resources. This fundamental limitation drives ML research, particularly in the exploration of alternative neural network architectures that integrate novel building blocks, such as advanced activation functions. Tunnel diodes are well-known electronic components that utilise the physical effect of quantum tunnelling (QT). Here, we propose using the current-voltage characteristic of a tunnel diode as a novel, physics-based activation function for neural networks. We demonstrate that the tunnel-diode activation function (TDAF) outperforms traditional activation functions in terms of accuracy and loss during both training and evaluation. We also highlight its potential for implementation in electronic circuits suited to developing neuromorphic, quantum-inspired AI systems capable of operating in environments not suitable for qubit-based quantum computing hardware.
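The abstract describes the TDAF only qualitatively. As a rough illustration, the sketch below uses the widely cited two-term Esaki-diode approximation of a tunnel diode's I-V curve as a neural-network activation; the function name tdaf and the constants I_P, V_P, I_S, V_T are illustrative assumptions, not the paper's implementation or fitted parameters.

```python
import numpy as np

# Illustrative tunnel-diode constants (assumed, not the paper's values):
I_P = 1.0      # peak tunnelling current (arbitrary units)
V_P = 0.1      # peak voltage (volts)
I_S = 1e-6     # saturation current of the thermionic diode term
V_T = 0.026    # thermal voltage kT/q at room temperature

def tdaf(v: np.ndarray) -> np.ndarray:
    """Tunnel-diode activation: a two-term Esaki-diode I-V approximation.

    The tunnelling term produces the characteristic current peak followed
    by the negative-differential-resistance (NDR) dip; the thermionic term
    is the ordinary diode current that takes over at higher voltages.
    """
    # Exponents are clipped so extreme pre-activations do not overflow.
    tunnelling = I_P * (v / V_P) * np.exp(np.clip(1.0 - v / V_P, None, 30.0))
    thermionic = I_S * (np.exp(np.clip(v / V_T, None, 30.0)) - 1.0)
    return tunnelling + thermionic

if __name__ == "__main__":
    # The curve rises to a peak near V_P, dips through the NDR region,
    # then rises again: a non-monotonic nonlinearity.
    v = np.linspace(0.0, 0.4, 9)
    print(np.round(tdaf(v), 3))

    # Hypothetical use as a hidden-layer nonlinearity, with pre-activations
    # interpreted as diode voltages.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    W = rng.normal(scale=0.05, size=(8, 16))
    h = tdaf(x @ W)
```

The non-monotonic peak-and-valley shape, which originates in quantum tunnelling, is what distinguishes this nonlinearity from monotonic activations such as ReLU or sigmoid, and it is also why the paper highlights the TDAF's potential for direct implementation in electronic circuits.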
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Quantum-inspired activation functions and quantum Chebyshev-polynomial network [6.09437748873686]
We investigate functional expressibility of quantum circuits integrated within a convolutional neural network (CNN)
We develop a hybrid quantum Chebyshev-polynomial network (QCPN) based on the properties of quantum activation functions.
Our findings suggest that quantum-inspired activation functions can reduce model depth while maintaining high learning capability.
arXiv Detail & Related papers (2024-04-08T23:08:38Z)
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- Realization of a quantum neural network using repeat-until-success circuits in a superconducting quantum processor [0.0]
In this paper, we use repeat-until-success circuits enabled by real-time control-flow feedback to realize quantum neurons with non-linear activation functions.
As an example, we construct a minimal feedforward quantum neural network capable of learning all 2-to-1-bit Boolean functions.
This model is shown to perform non-linear classification and to learn effectively from multiple copies of a single training state consisting of the maximal superposition of all inputs.
arXiv Detail & Related papers (2022-12-21T03:26:32Z)
- Learning Trajectories of Hamiltonian Systems with Neural Networks [81.38804205212425]
We propose to enhance Hamiltonian neural networks with an estimation of a continuous-time trajectory of the modeled system.
We demonstrate that the proposed integration scheme works well for HNNs, especially with low sampling rates and noisy, irregular observations.
arXiv Detail & Related papers (2022-04-11T13:25:45Z)
- Quantum activation functions for quantum neural networks [0.0]
We show how to approximate any analytic function to any required accuracy without the need to measure the states encoding the information.
Our results recast the science of artificial neural networks in the architecture of gate-model quantum computers.
arXiv Detail & Related papers (2022-01-10T23:55:49Z)
- The Hintons in your Neural Network: a Quantum Field Theory View of Deep Learning [84.33745072274942]
We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles.
On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing.
arXiv Detail & Related papers (2021-03-08T17:24:29Z)
- Quantum neural networks with deep residual learning [29.929891641757273]
In this paper, a novel quantum neural network with deep residual learning (ResQNN) is proposed.
Our ResQNN is able to learn an unknown unitary and achieves remarkable performance.
arXiv Detail & Related papers (2020-12-14T18:11:07Z)
- A superconducting nanowire spiking element for neural networks [0.0]
Key to the success of large-scale neural networks is a power-efficient spiking element that is scalable and easily interfaced with traditional control electronics.
We present a spiking element fabricated from superconducting nanowires that has pulse energies on the order of 10 aJ.
We demonstrate that the device reproduces essential characteristics of biological neurons, such as a refractory period and a firing threshold.
arXiv Detail & Related papers (2020-07-29T20:48:36Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)