Tensor Ring Optimized Quantum-Enhanced Tensor Neural Networks
- URL: http://arxiv.org/abs/2310.01515v1
- Date: Mon, 2 Oct 2023 18:07:10 GMT
- Title: Tensor Ring Optimized Quantum-Enhanced Tensor Neural Networks
- Authors: Debanjan Konar, Dheeraj Peddireddy, Vaneet Aggarwal and Bijaya K.
Panigrahi
- Abstract summary: Quantum machine learning researchers often rely on incorporating Tensor Networks (TN) into Deep Neural Networks (DNN).
To address this issue, a multi-layer design of a Tensor Ring optimized variational Quantum learning classifier (Quan-TR) is proposed.
It is referred to as Tensor Ring optimized Quantum-enhanced tensor neural Networks (TR-QNet).
On quantum simulations, the proposed TR-QNet achieves promising accuracy of $94.5\%$, $86.16\%$, and $83.54\%$ on the Iris, MNIST, and CIFAR-10 datasets, respectively.
- Score: 32.76948546010625
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum machine learning researchers often rely on incorporating Tensor
Networks (TN) into Deep Neural Networks (DNN) and variational optimization.
However, the standard optimization techniques used for training the contracted
trainable weights of each model layer suffer from the correlations and
entanglement structure between the model parameters on classical
implementations. To address this issue, a multi-layer design of a Tensor Ring
optimized variational Quantum learning classifier (Quan-TR) comprising
cascading entangling gates replacing the fully connected (dense) layers of a TN
is proposed, and it is referred to as Tensor Ring optimized Quantum-enhanced
tensor neural Networks (TR-QNet). TR-QNet parameters are optimized through the
stochastic gradient descent algorithm on qubit measurements. The proposed
TR-QNet is assessed on three distinct datasets, namely Iris, MNIST, and
CIFAR-10, to demonstrate the enhanced precision achieved for binary
classification. On quantum simulations, the proposed TR-QNet achieves promising
accuracy of $94.5\%$, $86.16\%$, and $83.54\%$ on the Iris, MNIST, and CIFAR-10
datasets, respectively. Benchmark studies have been conducted on
state-of-the-art quantum and classical implementations of TN models to show the
efficacy of the proposed TR-QNet. Moreover, the scalability of TR-QNet
highlights its potential for large-scale deep learning applications. The
PyTorch implementation of TR-QNet is available on GitHub: https://github.com/konar1987/TR-QNet/
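As a rough illustration of the recipe described in the abstract, the hypothetical sketch below (not the authors' released TR-QNet code) classically simulates a small variational layer in which data-encoding rotations, trainable rotations, and a ring of CNOT entanglers stand in for a dense layer, with per-qubit Z expectations trained by stochastic gradient descent; the qubit count, toy data, and all names are assumptions made for the example.

```python
# Hypothetical sketch (not the released TR-QNet code): a classically simulated
# variational layer with data-encoding RY rotations, trainable RY rotations,
# and a ring of CNOT entanglers, trained by SGD on per-qubit Z expectations.
import torch

N = 4  # qubits; the statevector has 2**N amplitudes, so keep N small

def ry(theta):
    """Single-qubit RY rotation (real-valued, so real tensors suffice)."""
    c, s = torch.cos(theta / 2), torch.sin(theta / 2)
    return torch.stack([torch.stack([c, -s]), torch.stack([s, c])])

def ry_layer(angles):
    """Tensor product of per-qubit RY gates."""
    U = ry(angles[0])
    for a in angles[1:]:
        U = torch.kron(U, ry(a))
    return U

def cnot(n, control, target):
    """CNOT between two qubits as a 2**n x 2**n permutation matrix."""
    dim = 2 ** n
    M = torch.zeros(dim, dim)
    for i in range(dim):
        j = i ^ (1 << (n - 1 - target)) if (i >> (n - 1 - control)) & 1 else i
        M[j, i] = 1.0
    return M

# "Cascading" ring entanglement: nearest-neighbour CNOTs plus the wrap-around pair.
RING = [cnot(N, q, (q + 1) % N) for q in range(N)]

def z_expectations(state):
    """Per-qubit <Z> computed from the (real) statevector probabilities."""
    probs = state ** 2
    signs = torch.tensor([[1.0 if ((i >> (N - 1 - q)) & 1) == 0 else -1.0
                           for i in range(2 ** N)] for q in range(N)])
    return signs @ probs

def quantum_layer(x, thetas):
    """Encode features, apply trainable rotations and ring entanglers, measure."""
    state = torch.zeros(2 ** N)
    state[0] = 1.0                         # |0...0>
    state = ry_layer(x) @ state            # angle encoding of the input
    state = ry_layer(thetas) @ state       # trainable rotations
    for g in RING:
        state = g @ state
    return z_expectations(state)

# Toy SGD loop for binary classification on 4 features (illustrative data only).
thetas = torch.zeros(N, requires_grad=True)
readout = torch.nn.Linear(N, 1)
opt = torch.optim.SGD([thetas, *readout.parameters()], lr=0.1)
X = torch.rand(32, N) * 3.14
y = torch.randint(0, 2, (32, 1)).float()
for _ in range(100):
    logits = torch.stack([readout(quantum_layer(x, thetas)) for x in X])
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
```

At four qubits the full statevector has only 16 amplitudes, so exact simulation in PyTorch is cheap; the released implementation should be consulted for the actual TR-QNet circuit, datasets, and training schedule.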
Related papers
- Histogram-Equalized Quantization for logic-gated Residual Neural Networks [2.7036595757881323]
Histogram-Equalized Quantization (HEQ) is an adaptive framework for linear symmetric quantization.
HEQ automatically adapts the quantization thresholds using a unique step size optimization.
Experiments on the STL-10 dataset even show that HEQ enables a proper training of our proposed logic-gated (OR, MUX) residual networks.
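A minimal sketch of the linear symmetric quantization that HEQ builds on, assuming a plain uniform quantizer; the histogram-equalized selection of the step size described in the summary is not reproduced here.

```python
import torch

def linear_symmetric_quantize(x, step, n_bits=4):
    """Round to the nearest multiple of `step` and clip to the signed n-bit range.
    HEQ's contribution, per the summary above, is choosing `step` adaptively from
    the activation histogram; that selection rule is not shown here."""
    qmax = 2 ** (n_bits - 1) - 1
    q = torch.clamp(torch.round(x / step), -qmax - 1, qmax)
    return q * step

x = torch.randn(8)
print(linear_symmetric_quantize(x, step=0.25))
```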
arXiv Detail & Related papers (2025-01-08T14:06:07Z)
- Variational Tensor Neural Networks for Deep Learning [0.0]
We propose an integration of tensor networks (TN) into deep neural networks (NNs).
This, in turn, results in a scalable tensor neural network (TNN) architecture capable of efficient training over a large parameter space.
We validate the accuracy and efficiency of our method by designing TNN models and providing benchmark results for linear and non-linear regressions, data classification and image recognition on MNIST handwritten digits.
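A hedged sketch of what a tensorized dense layer can look like, using a two-core tensor-train-style factorization; the module name, shapes, and rank are assumptions, not the paper's exact TNN layer.

```python
import torch
import torch.nn as nn

class TTLinear(nn.Module):
    """A (16*16) -> (16*16) linear map parameterized by two small cores instead
    of a dense 256x256 matrix, cutting parameters from 65,536 to 2,048."""
    def __init__(self, in_modes=(16, 16), out_modes=(16, 16), rank=4):
        super().__init__()
        self.in_modes = in_modes
        self.core1 = nn.Parameter(torch.randn(out_modes[0], in_modes[0], rank) * 0.1)
        self.core2 = nn.Parameter(torch.randn(rank, out_modes[1], in_modes[1]) * 0.1)

    def forward(self, x):
        b = x.shape[0]
        x = x.view(b, *self.in_modes)                  # split the input index
        y = torch.einsum('bij,aik,kcj->bac', x, self.core1, self.core2)
        return y.reshape(b, -1)

out = TTLinear()(torch.randn(8, 256))                  # -> shape (8, 256)
```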
arXiv Detail & Related papers (2022-11-26T20:24:36Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
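A hedged sketch of the $\ell_\infty$-norm box idea in its simplest form, interval propagation through one affine layer; the paper's embedded-network construction for implicit networks is considerably more involved.

```python
import torch

def affine_box_bounds(W, b, lo, hi):
    """Given elementwise bounds lo <= x <= hi, return elementwise bounds on
    y = W x + b: the center is mapped exactly and the worst-case deviation is
    |W| times the input radius."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    y_center = W @ center + b
    y_radius = W.abs() @ radius
    return y_center - y_radius, y_center + y_radius

W, b = torch.randn(3, 4), torch.zeros(3)
lo, hi = torch.zeros(4), torch.ones(4)
print(affine_box_bounds(W, b, lo, hi))
```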
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Tensor Ring Parametrized Variational Quantum Circuits for Large Scale Quantum Machine Learning [28.026962110693695]
We propose an algorithm that compresses the quantum state within the circuit using a tensor ring representation.
The storage and computational time increase linearly with the number of qubits and the number of layers, compared to the exponential increase of exact simulation algorithms.
We achieve a test accuracy of 83.33% on Iris dataset and a maximum of 99.30% and 76.31% on binary and ternary classification of MNIST dataset.
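A hedged sketch of the tensor ring representation itself, assuming per-qubit cores of a fixed bond rank: the amplitude of a basis string is the trace of a matrix product around the ring, so storage grows linearly in the number of qubits. How gates are applied in this format is not shown.

```python
import numpy as np

def ring_amplitude(cores, bits):
    """cores[q] has shape (rank, 2, rank); the amplitude of a computational
    basis string is the trace of the product of the selected slices."""
    M = np.eye(cores[0].shape[0])
    for A, b in zip(cores, bits):
        M = M @ A[:, b, :]
    return np.trace(M)

n_qubits, rank = 8, 4
cores = [np.random.randn(rank, 2, rank) / rank for _ in range(n_qubits)]
print(ring_amplitude(cores, [0, 1, 0, 0, 1, 1, 0, 1]))
```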
arXiv Detail & Related papers (2022-01-21T19:54:57Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
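A hedged sketch of the roofline baseline mentioned in the comparison above, under the usual assumption that a layer's time is bounded by the slower of compute and memory traffic; ANNETTE's stacked models refine this with benchmarked micro-kernel data, and all hardware numbers below are made up.

```python
def roofline_time_s(flops, bytes_moved, peak_flops_per_s, peak_bytes_per_s):
    """Lower-bound execution time: compute-bound or memory-bound, whichever dominates."""
    return max(flops / peak_flops_per_s, bytes_moved / peak_bytes_per_s)

# Example: a rough 3x3 conv-layer estimate on hypothetical hardware numbers.
print(roofline_time_s(flops=2 * 112**2 * 64 * 64 * 9,
                      bytes_moved=4 * (112**2 * 64 * 2 + 64 * 64 * 9),
                      peak_flops_per_s=5e12, peak_bytes_per_s=2e11))
```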
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because gradients vanish at a rate exponential in the number of input qubits.
We study QNNs with tree tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
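A hedged NumPy sketch of a tree tensor layout for binary classification, with assumed shapes and random cores; it illustrates the tree structure only, not the paper's quantum circuit or training procedure.

```python
import numpy as np

# Four input features are embedded into 2-vectors, pairs are merged by rank-4
# tensors, and a top tensor produces two unnormalized class scores.
def embed(x):
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

rng = np.random.default_rng(0)
T_left  = rng.normal(size=(2, 2, 4))   # merges features 0,1 into a rank-4 bond
T_right = rng.normal(size=(2, 2, 4))   # merges features 2,3
T_top   = rng.normal(size=(4, 4, 2))   # merges the two bonds into class scores

def tree_scores(x):
    v0, v1, v2, v3 = (embed(xi) for xi in x)
    left  = np.einsum('i,j,ijk->k', v0, v1, T_left)
    right = np.einsum('i,j,ijk->k', v2, v3, T_right)
    return np.einsum('a,b,abc->c', left, right, T_top)

print(tree_scores([0.1, 0.7, 0.3, 0.9]))   # two class scores
```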
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
- A Fully Tensorized Recurrent Neural Network [48.50376453324581]
We introduce a "fully tensorized" RNN architecture which jointly encodes the separate weight matrices within each recurrent cell.
This approach reduces model size by several orders of magnitude, while still maintaining similar or better performance compared to standard RNNs.
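A hedged sketch of the joint-encoding idea, assuming a Kronecker factorization that generates the whole recurrent weight matrix from two small factors; the paper uses a tensor-train-style factorization over the full cell, and the names and sizes here are made up for the example.

```python
import torch
import torch.nn as nn

class KronRNNCell(nn.Module):
    """One factored parameterization generates the (hidden x (input+hidden))
    recurrent weight matrix instead of storing W_ih and W_hh densely."""
    def __init__(self, d_in=16, d_h=16):
        super().__init__()
        # W has shape (16, 32) = (4x4) kron (4x8)
        self.A = nn.Parameter(torch.randn(4, 4) * 0.3)
        self.B = nn.Parameter(torch.randn(4, 8) * 0.3)
        self.bias = nn.Parameter(torch.zeros(d_h))

    def forward(self, x, h):
        W = torch.kron(self.A, self.B)                 # built on the fly
        return torch.tanh(torch.cat([x, h], dim=-1) @ W.T + self.bias)

cell, h = KronRNNCell(), torch.zeros(2, 16)
for _ in range(5):
    h = cell(torch.randn(2, 16), h)
```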
arXiv Detail & Related papers (2020-10-08T18:24:12Z)
- Propagating Asymptotic-Estimated Gradients for Low Bitwidth Quantized Neural Networks [31.168156284218746]
We propose a novel Asymptotic-Quantized Estimator (AQE) to estimate the gradient.
At the end of training, the weights and activations have been quantized to low-precision.
In the inference phase, we can use XNOR or SHIFT operations instead of convolution operations to accelerate the MINW-Net.
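A hedged sketch of the general quantized-training pattern the summary refers to, using the standard straight-through (clipped) estimator as a stand-in; AQE's asymptotic smoothing schedule is not reproduced here.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Forward: binarize the weight; backward: pass the gradient to the
    full-precision copy, clipped to |w| <= 1 (straight-through estimator)."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()

w = torch.randn(5, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()
print(w.grad)   # nonzero only where |w| <= 1
```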
arXiv Detail & Related papers (2020-03-04T03:17:47Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantization neural networks (QNNs) are very attractive to the industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
- Training of Quantized Deep Neural Networks using a Magnetic Tunnel Junction-Based Synapse [23.08163992580639]
Quantized neural networks (QNNs) are being actively researched as a solution for the computational complexity and memory intensity of deep neural networks.
We show how magnetic tunnel junction (MTJ) devices can be used to support QNN training.
We introduce a novel synapse circuit that uses the MTJ behavior to support the quantize update.
arXiv Detail & Related papers (2019-12-29T11:36:32Z)