Quantum computing model of an artificial neuron with continuously valued input data
- URL: http://arxiv.org/abs/2007.14288v1
- Date: Tue, 28 Jul 2020 14:56:58 GMT
- Title: Quantum computing model of an artificial neuron with continuously valued input data
- Authors: Stefano Mangini, Francesco Tacchino, Dario Gerace, Chiara Macchiavello, Daniele Bajoni
- Abstract summary: An artificial neuron is a computational unit performing simple mathematical operations on a set of data in the form of an input vector.
We show how the implementation of a previously introduced quantum artificial neuron can be generalized to accept continuous-valued instead of discrete-valued input vectors.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial neural networks have been proposed as potential algorithms that
could benefit from being implemented and run on quantum computers. In
particular, they hold promise to greatly enhance Artificial Intelligence tasks,
such as image elaboration or pattern recognition. The elementary building block
of a neural network is an artificial neuron, i.e. a computational unit
performing simple mathematical operations on a set of data in the form of an
input vector. Here we show how the design for the implementation of a
previously introduced quantum artificial neuron [npj Quant. Inf. $\textbf{5}$,
26], which fully exploits the use of superposition states to encode binary
valued input data, can be further generalized to accept continuous-valued
instead of discrete-valued input vectors, without increasing the number of qubits. This
further step is crucial to allow for a direct application of an automatic
differentiation learning procedure, which would not be compatible with
binary-valued data encoding.
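The core idea above can be sketched numerically: a continuous-valued input vector of length 2^N is encoded into the amplitudes of an N-qubit state, a weight vector is encoded the same way, and the neuron's response is the squared overlap between the two states. The following is a minimal classical simulation of that idea in NumPy; the function names and normalization choice are illustrative and are not the authors' circuit implementation.

```python
import numpy as np

def amplitude_encode(v):
    """Map a real-valued vector onto L2-normalized quantum state amplitudes."""
    v = np.asarray(v, dtype=float)
    norm = np.linalg.norm(v)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return v / norm

def quantum_neuron_activation(x, w):
    """Neuron response |<psi_w|psi_x>|^2: the probability of measuring
    the ancilla qubit in |1> in the circuit model."""
    psi_x = amplitude_encode(x)  # input state |psi_x>
    psi_w = amplitude_encode(w)  # weight state |psi_w>
    return abs(np.dot(psi_w, psi_x)) ** 2

# A 4-component continuous input needs only log2(4) = 2 qubits.
x = [0.3, -0.7, 0.5, 0.4]
w = [0.3, -0.7, 0.5, 0.4]
print(quantum_neuron_activation(x, w))  # close to 1.0 for parallel vectors
```

Because the activation is a smooth function of the continuous amplitudes, gradient-based (automatic-differentiation) training becomes applicable, which is exactly what the binary encoding ruled out.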
Related papers
- Non-binary artificial neuron with phase variation implemented on a quantum computer [0.0]
We introduce an algorithm that generalizes the binary model by manipulating the phase of complex numbers.
We propose, test, and implement a neuron model that works with continuous values in a quantum computer.
arXiv Detail & Related papers (2024-10-30T18:18:53Z)
- Incrementally-Computable Neural Networks: Efficient Inference for Dynamic Inputs [75.40636935415601]
Deep learning often faces the challenge of efficiently processing dynamic inputs, such as sensor data or user inputs.
We take an incremental computing approach, looking to reuse calculations as the inputs change.
We apply this approach to the transformers architecture, creating an efficient incremental inference algorithm with complexity proportional to the fraction of modified inputs.
arXiv Detail & Related papers (2023-07-27T16:30:27Z) - Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z) - Quantum neural networks [0.0]
This thesis combines two of the most exciting research areas of the last decades: quantum computing and machine learning.
We introduce dissipative quantum neural networks (DQNNs), which are capable of universal quantum computation and have low memory requirements while training.
arXiv Detail & Related papers (2022-05-17T07:47:00Z) - A Spiking Neural Network based on Neural Manifold for Augmenting
Intracortical Brain-Computer Interface Data [5.039813366558306]
Brain-computer interfaces (BCIs) transform neural signals in the brain into instructions to control external devices.
With the advent of advanced machine learning methods, the capability of brain-computer interfaces has been enhanced like never before.
Here, we use spiking neural networks (SNN) as data generators.
arXiv Detail & Related papers (2022-03-26T15:32:31Z) - Quantum activation functions for quantum neural networks [0.0]
We show how to approximate any analytic function to any required accuracy without the need to measure the states encoding the information.
Our results recast the science of artificial neural networks in the architecture of gate-model quantum computers.
arXiv Detail & Related papers (2022-01-10T23:55:49Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and
Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - One-step regression and classification with crosspoint resistive memory
arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of Boston house-price prediction and the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z) - Quantum implementation of an artificial feed-forward neural network [0.0]
We show an experimental realization of an artificial feed-forward neural network implemented on a state-of-art superconducting quantum processor.
The network is made of quantum artificial neurons, which individually display a potential advantage in storage capacity.
We demonstrate that this network can be equivalently operated either via classical control or in a completely coherent fashion.
arXiv Detail & Related papers (2019-12-28T16:49:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.