Quantum-Inspired Edge Detection Algorithms Implementation using New
Dynamic Visual Data Representation and Short-Length Convolution Computation
- URL: http://arxiv.org/abs/2210.17490v1
- Date: Mon, 31 Oct 2022 17:13:27 GMT
- Authors: Artyom M. Grigoryan, Sos S. Agaian, Karen Panetta
- Abstract summary: This paper studies a new paired transform-based quantum representation and computation of the convolutions and gradients of one-dimensional and 2-D signals.
The new data representation is demonstrated on multiple illustrative examples for quantum edge detection, gradients, and convolution.
- Score: 6.950510860295866
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the availability of imagery data continues to swell, so do the demands on
transmission, storage, and processing power. The processing requirements for handling
this plethora of data are quickly outpacing the utility of conventional
processing techniques. Transitioning to quantum processing and algorithms that
offer promising efficiencies over conventional methods can address some of
these issues. However, to make this transformation possible, fundamental issues
of implementing real-time quantum algorithms must be overcome for processes
crucial to intelligent analysis applications. For example, consider edge
detection tasks, which require time-consuming acquisition processes and are
further hindered by the complexity of the devices used, limiting their
feasibility for real-time applications. Convolution is
another example of an operation that is essential for signal and image
processing applications, where the mathematical operations consist of an
intelligent mixture of multiplication and addition that requires considerable
computational resources. This paper studies a new paired transform-based
quantum representation and computation of the convolutions and gradients of
one-dimensional and 2-D signals. A new visual data representation is defined to
simplify convolution calculations, making it feasible to parallelize convolution
and gradient operations for more efficient performance. The new data
representation is demonstrated on multiple illustrative examples for quantum
edge detection, gradients, and convolution. Furthermore, the efficiency of the
proposed approach is shown on real-world images.
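For context, the classical operation the paper targets can be sketched as follows. This is a plain multiply-add Sobel gradient example, not the paper's paired-transform method; the image, kernels, and function names are purely illustrative.

```python
# Illustrative only: the direct multiply-add convolution and gradient
# computation that transform-based representations aim to simplify.

def conv2d_valid(img, kernel):
    """Direct 'valid' 2-D convolution as nested multiply-add loops."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(img) - kh + 1
    out_w = len(img[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            s = 0.0
            for u in range(kh):
                for v in range(kw):
                    s += img[i + u][j + v] * kernel[u][v]
            out[i][j] = s
    return out

# Standard Sobel kernels for horizontal and vertical gradients
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

# Tiny 5x5 synthetic image with a sharp vertical edge
image = [[0, 0, 0, 9, 9] for _ in range(5)]

gx = conv2d_valid(image, SOBEL_X)
gy = conv2d_valid(image, SOBEL_Y)
# Gradient magnitude: large values mark the edge location
grad_mag = [[(gx[i][j] ** 2 + gy[i][j] ** 2) ** 0.5
             for j in range(len(gx[0]))]
            for i in range(len(gx))]
```

Each output pixel costs nine multiplications and additions here; it is exactly this cost, repeated across the image, that motivates representations in which convolution and gradient operations can be parallelized.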
Related papers
- Accelerating Error Correction Code Transformers [56.75773430667148]
We introduce a novel acceleration method for transformer-based decoders.
We achieve a 90% compression ratio and reduce arithmetic operation energy consumption by at least 224 times on modern hardware.
arXiv Detail & Related papers (2024-10-08T11:07:55Z)
- Dynamic Range Reduction via Branch-and-Bound [1.533133219129073]
A key strategy for enhancing hardware accelerators is reducing the precision of arithmetic operations.
This paper introduces a fully principled Branch-and-Bound algorithm for reducing precision needs in QUBO problems.
Experiments validate our algorithm's effectiveness on an actual quantum annealer.
arXiv Detail & Related papers (2024-09-17T03:07:56Z)
- Fast, Scalable, Warm-Start Semidefinite Programming with Spectral Bundling and Sketching [53.91395791840179]
We present Unified Spectral Bundling with Sketching (USBS), a provably correct, fast and scalable algorithm for solving massive SDPs.
USBS provides a 500x speed-up over the state-of-the-art scalable SDP solver on an instance with over 2 billion decision variables.
arXiv Detail & Related papers (2023-12-19T02:27:22Z)
- Incrementally-Computable Neural Networks: Efficient Inference for Dynamic Inputs [75.40636935415601]
Deep learning often faces the challenge of efficiently processing dynamic inputs, such as sensor data or user inputs.
We take an incremental computing approach, looking to reuse calculations as the inputs change.
We apply this approach to the transformer architecture, creating an efficient incremental inference algorithm with complexity proportional to the fraction of modified inputs.
arXiv Detail & Related papers (2023-07-27T16:30:27Z)
- Scaled Quantization for the Vision Transformer [0.0]
Quantization using a small number of bits shows promise for reducing latency and memory usage in deep neural networks.
This paper proposes a robust method for the full integer quantization of vision transformer networks without requiring any intermediate floating-point computations.
arXiv Detail & Related papers (2023-03-23T18:31:21Z)
- Improved FRQI on superconducting processors and its restrictions in the NISQ era [62.997667081978825]
We study the feasibility of the Flexible Representation of Quantum Images (FRQI).
We also check experimentally what its limits are in the current noisy intermediate-scale quantum era.
We propose a method for simplifying the circuits needed for the FRQI.
arXiv Detail & Related papers (2021-10-29T10:42:43Z)
- Parameterized process characterization with reduced resource requirements [0.5735035463793008]
This work proposes an alternative approach that requires significantly fewer resources for unitary process characterization without prior knowledge of the process.
By measuring the quantum process as rotated through the X and Y axes on the Bloch sphere, we can acquire enough information to reconstruct the quantum process matrix $\chi$ and measure its fidelity.
We demonstrate in numerical experiments that the method can improve gate fidelity via a noise reduction in the imaginary part of the process matrix, along with a stark decrease in the number of experiments needed to perform the characterization.
arXiv Detail & Related papers (2021-09-22T17:41:32Z)
- Post-Training Quantization for Vision Transformer [85.57953732941101]
We present an effective post-training quantization algorithm for reducing the memory storage and computational costs of vision transformers.
We can obtain 81.29% top-1 accuracy using the DeiT-B model on the ImageNet dataset with about 8-bit quantization.
arXiv Detail & Related papers (2021-06-27T06:27:22Z)
- Variational Quantum Optimization with Multi-Basis Encodings [62.72309460291971]
We introduce a new variational quantum algorithm that benefits from two innovations: multi-basis graph complexity and nonlinear activation functions.
These innovations result in increased optimization performance, improved effective landscapes, and reduced measurement requirements.
arXiv Detail & Related papers (2021-06-24T20:16:02Z)
- As Accurate as Needed, as Efficient as Possible: Approximations in DD-based Quantum Circuit Simulation [5.119310422637946]
Decision Diagrams (DDs) have previously been shown to reduce the required memory in many important cases by exploiting redundancies in the quantum state.
We show that this reduction can be amplified by exploiting the probabilistic nature of quantum computers to achieve even more compact representations.
Specifically, we propose two new DD-based simulation strategies that approximate the quantum states to attain more compact representations.
arXiv Detail & Related papers (2020-12-10T12:02:03Z)
- Accelerating Neural Network Inference by Overflow Aware Quantization [16.673051600608535]
The inherently heavy computation of deep neural networks prevents their widespread application.
We propose an overflow aware quantization method by designing trainable adaptive fixed-point representation.
With the proposed method, we are able to fully utilize the computing power to minimize the quantization loss and obtain optimized inference performance.
arXiv Detail & Related papers (2020-05-27T11:56:22Z)
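Several of the related papers above concern low-bit quantization of neural networks. As background, plain symmetric uniform quantization to int8 can be sketched as follows; this is a generic textbook scheme, not the method of any paper listed, and the function names are illustrative.

```python
# Illustrative sketch of symmetric uniform quantization to signed integers,
# the basic building block that the quantization papers above refine.

def quantize(values, num_bits=8):
    """Map floats to integers in [-2^(b-1), 2^(b-1)-1] using one scale."""
    qmax = 2 ** (num_bits - 1) - 1
    # Scale chosen so the largest magnitude maps to qmax (1.0 if all zeros)
    scale = max(abs(v) for v in values) / qmax or 1.0
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the integer codes."""
    return [x * scale for x in q]

vals = [0.5, -1.25, 3.0, 0.0]
q, s = quantize(vals)
approx = dequantize(q, s)  # close to vals, within one quantization step
```

Rounding floats to a small integer grid replaces expensive floating-point arithmetic with integer operations, at the cost of a bounded quantization error per value.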
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.