Communication-efficient Quantum Algorithm for Distributed Machine
Learning
- URL: http://arxiv.org/abs/2209.04888v1
- Date: Sun, 11 Sep 2022 15:03:58 GMT
- Title: Communication-efficient Quantum Algorithm for Distributed Machine
Learning
- Authors: Hao Tang, Boning Li, Guoqing Wang, Haowei Xu, Changhao Li, Ariel Barr,
Paola Cappellaro, Ju Li
- Abstract summary: Our quantum algorithm finds the model parameters with a communication complexity of $O(\frac{\log_2(N)}{\epsilon})$, where $N$ is the number of data points and $\epsilon$ is the bound on parameter errors.
The building block of our algorithm, the quantum-accelerated estimation of distributed inner product and Hamming distance, could be further applied to various tasks in distributed machine learning to accelerate communication.
- Score: 14.546892420890943
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing demands of remote detection and increasing amount of training
data make distributed machine learning under communication constraints a
critical issue. This work provides a communication-efficient quantum algorithm
that tackles two traditional machine learning problems, the least-square
fitting and softmax regression problem, in the scenario where the data set is
distributed across two parties. Our quantum algorithm finds the model
parameters with a communication complexity of $O(\frac{\log_2(N)}{\epsilon})$,
where $N$ is the number of data points and $\epsilon$ is the bound on parameter
errors. Compared to classical algorithms and other quantum algorithms that
achieve the same output task, our algorithm provides a communication advantage
in the scaling with the data volume. The building block of our algorithm, the
quantum-accelerated estimation of distributed inner product and Hamming
distance, could be further applied to various tasks in distributed machine
learning to accelerate communication.
Related papers
- Hardware-efficient variational quantum algorithm in trapped-ion quantum computer [0.0]
We study a hardware-efficient variational quantum algorithm ansatz tailored for the trapped-ion quantum simulator, HEA-TI.
We leverage programmable single-qubit rotations and global spin-spin interactions among all ions, reducing the dependence on resource-intensive two-qubit gates in conventional gate-based methods.
arXiv Detail & Related papers (2024-07-03T14:02:20Z) - Quantum Resonant Dimensionality Reduction and Its Application in Quantum Machine Learning [2.7119354495508787]
We propose a quantum resonant dimension reduction (QRDR) algorithm based on the quantum resonant transition to reduce the dimension of input data.
After QRDR, the dimension $N$ of the input data can be reduced to a desired scale $R$, while the effective information of the original data is preserved.
Our algorithm has the potential to be utilized in a variety of computing fields.
arXiv Detail & Related papers (2024-05-21T09:26:18Z) - Asynchronous Local Computations in Distributed Bayesian Learning [8.516532665507835]
We propose gossip-based communication to leverage fast computations and reduce communication overhead simultaneously.
We observe faster initial convergence and improved performance accuracy, especially in the low data range.
We achieve on average 78% and over 90% classification accuracy respectively on the Gamma Telescope and mHealth data sets from the UCI ML repository.
arXiv Detail & Related papers (2023-11-06T20:11:41Z) - CORE: Common Random Reconstruction for Distributed Optimization with
Provable Low Communication Complexity [110.50364486645852]
Communication complexity has become a major bottleneck for speeding up training and scaling up the number of machines.
We propose CORE (Common Random Reconstruction), which can be used to compress information transmitted between machines.
arXiv Detail & Related papers (2023-09-23T08:45:27Z) - Quantum Federated Learning for Distributed Quantum Networks [9.766446130011706]
We propose a quantum federated learning framework for distributed quantum networks by utilizing interesting characteristics of quantum mechanics.
A quantum gradient descent algorithm is provided to help clients in the distributed quantum networks to train local models.
A quantum secure multi-party computation protocol is designed, which utilizes the Chinese remainder theorem.
arXiv Detail & Related papers (2022-12-25T14:37:23Z) - Communication-Efficient Adam-Type Algorithms for Distributed Data Mining [93.50424502011626]
We propose a class of novel distributed Adam-type algorithms (i.e., SketchedAMSGrad) utilizing sketching.
Our new algorithm achieves a fast convergence rate of $O(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T})$ with a communication cost of $O(k \log(d))$ at each iteration.
arXiv Detail & Related papers (2022-10-14T01:42:05Z) - A single $T$-gate makes distribution learning hard [56.045224655472865]
This work provides an extensive characterization of the learnability of the output distributions of local quantum circuits.
We show that for a wide variety of the most practically relevant learning algorithms -- including hybrid quantum-classical algorithms -- even the generative modelling problem associated with depth $d = \omega(\log(n))$ Clifford circuits is hard.
arXiv Detail & Related papers (2022-07-07T08:04:15Z) - Quantum Extremal Learning [0.8937790536664091]
We propose a quantum algorithm for "extremal learning", which is the process of finding the input to a hidden function that extremizes the function output.
The algorithm, called quantum extremal learning (QEL), consists of a parametric quantum circuit that is variationally trained to model data input-output relationships.
arXiv Detail & Related papers (2022-05-05T17:37:26Z) - Benchmarking Small-Scale Quantum Devices on Computing Graph Edit
Distance [52.77024349608834]
Graph Edit Distance (GED) measures the degree of (dis)similarity between two graphs in terms of the operations needed to make them identical.
In this paper we present a comparative study of two quantum approaches to computing GED.
arXiv Detail & Related papers (2021-11-19T12:35:26Z) - Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge
Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z) - Channel Assignment in Uplink Wireless Communication using Machine
Learning Approach [54.012791474906514]
This letter investigates a channel assignment problem in uplink wireless communication systems.
Our goal is to maximize the sum rate of all users subject to integer channel assignment constraints.
Due to high computational complexity, machine learning approaches are employed to obtain computational efficient solutions.
arXiv Detail & Related papers (2020-01-12T15:54:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.