Algorithm Unrolling for Massive Access via Deep Neural Network with
Theoretical Guarantee
- URL: http://arxiv.org/abs/2106.10426v1
- Date: Sat, 19 Jun 2021 05:23:05 GMT
- Title: Algorithm Unrolling for Massive Access via Deep Neural Network with
Theoretical Guarantee
- Authors: Yandong Shi, Hayoung Choi, Yuanming Shi, Yong Zhou
- Abstract summary: Massive access is a critical design challenge of Internet of Things (IoT) networks.
We consider the grant-free uplink transmission of an IoT network with a multiple-antenna base station (BS) and a large number of single-antenna IoT devices.
We propose a novel algorithm unrolling framework based on deep neural networks to simultaneously achieve low computational complexity and high robustness.
- Score: 30.86806523281873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Massive access is a critical design challenge of Internet of Things (IoT)
networks. In this paper, we consider the grant-free uplink transmission of an
IoT network with a multiple-antenna base station (BS) and a large number of
single-antenna IoT devices. Taking into account the sporadic nature of IoT
devices, we formulate the joint activity detection and channel estimation
(JADCE) problem as a group-sparse matrix estimation problem. This problem can
be solved by existing compressed sensing techniques, which, however, either suffer from high computational complexity or lack robustness. To this end, we propose a novel algorithm unrolling framework based on deep neural networks to simultaneously achieve low computational
complexity and high robustness for solving the JADCE problem. Specifically, we
map the original iterative shrinkage-thresholding algorithm (ISTA) into an
unrolled recurrent neural network (RNN), thereby improving the convergence rate
and computational efficiency through end-to-end training. Moreover, the
proposed algorithm unrolling approach inherits the structure and domain
knowledge of the ISTA, thereby maintaining robustness and enabling the method to handle non-Gaussian preamble sequence matrices in massive access. With rigorous
theoretical analysis, we further simplify the unrolled network structure by
reducing the redundant training parameters. Furthermore, we prove that the
simplified unrolled deep neural network structures enjoy a linear convergence
rate. Extensive simulations based on various preamble signatures show that the
proposed unrolled networks outperform existing methods in terms of convergence rate, robustness, and estimation accuracy.
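As a concrete (and heavily simplified) illustration of the group-sparse formulation described above, the sketch below runs plain ISTA with row-wise soft-thresholding on a toy grant-free access instance: an L x N preamble matrix S, an N x M row-sparse channel matrix X (only active devices have nonzero rows), and noisy observations Y = S X + N at an M-antenna BS. It is reconstructed from the abstract alone, so the notation, regularizer weight, and toy dimensions are assumptions rather than the paper's exact setup; the unrolled network described in the paper would replace the fixed step size and threshold below with parameters learned end-to-end.

```python
# Minimal sketch of ISTA with row-wise (group) soft-thresholding for a
# JADCE-style group-sparse estimation problem, assuming the model Y = S X + N.
# All names (S, X, Y, lam, n_iter) and dimensions are illustrative, not the paper's.
import numpy as np

def group_soft_threshold(X, tau):
    """Shrink each row of X toward zero by tau in l2 norm (block soft-thresholding)."""
    row_norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(row_norms, 1e-12), 0.0)
    return scale * X

def group_ista(Y, S, lam=0.1, n_iter=200):
    """Classic ISTA for min_X 0.5*||S X - Y||_F^2 + lam * sum_i ||X[i, :]||_2."""
    step = 1.0 / (np.linalg.norm(S, 2) ** 2)      # 1 / Lipschitz constant of the gradient
    X = np.zeros((S.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        grad = S.T @ (S @ X - Y)                  # gradient of the data-fidelity term
        X = group_soft_threshold(X - step * grad, step * lam)
    return X

# Toy usage: 20 of 200 single-antenna devices are active, 8-antenna BS, length-60 preambles.
rng = np.random.default_rng(0)
L_seq, N_dev, M_ant, K_active = 60, 200, 8, 20
S = rng.standard_normal((L_seq, N_dev)) / np.sqrt(L_seq)
X_true = np.zeros((N_dev, M_ant))
X_true[rng.choice(N_dev, K_active, replace=False)] = rng.standard_normal((K_active, M_ant))
Y = S @ X_true + 0.01 * rng.standard_normal((L_seq, M_ant))
X_hat = group_ista(Y, S, lam=0.05, n_iter=300)
detected = np.linalg.norm(X_hat, axis=1) > 0.1    # simple activity detection by row energy
```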
Related papers
- The Integrated Forward-Forward Algorithm: Integrating Forward-Forward
and Shallow Backpropagation With Local Losses [0.0]
We propose an integrated method that combines the strengths of both the Forward-Forward Algorithm (FFA) and shallow backpropagation.
We show that training neural networks with the Integrated Forward-Forward Algorithm has the potential of generating neural networks with advantageous features like robustness.
arXiv Detail & Related papers (2023-05-22T12:10:47Z) - Q-SHED: Distributed Optimization at the Edge via Hessian Eigenvectors
Quantization [5.404315085380945]
Newton-type (NT) methods have been advocated as enablers of robust convergence rates in distributed optimization (DO) problems.
We propose Q-SHED, an original NT algorithm for DO featuring a novel bit-allocation scheme based on incremental Hessian eigenvectors quantization.
We show that Q-SHED can reduce by up to 60% the number of communication rounds required for convergence.
arXiv Detail & Related papers (2023-05-18T10:15:03Z) - Quantization-aware Interval Bound Propagation for Training Certifiably
Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Robust Training and Verification of Implicit Neural Networks: A
Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - Fidelity-Guarantee Entanglement Routing in Quantum Networks [64.49733801962198]
Entanglement routing establishes remote entanglement connections between two arbitrary nodes.
We propose purification-enabled entanglement routing designs to provide fidelity guarantee for multiple Source-Destination (SD) pairs in quantum networks.
arXiv Detail & Related papers (2021-11-15T14:07:22Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Architecture Aware Latency Constrained Sparse Neural Networks [35.50683537052815]
In this paper, we design an architecture-aware, latency-constrained sparse framework to prune and accelerate CNN models.
We also propose a novel sparse convolution algorithm for efficient computation.
Our system-algorithm co-design framework can achieve a much better trade-off frontier between network accuracy and latency on resource-constrained mobile devices.
arXiv Detail & Related papers (2021-09-01T03:41:31Z) - Quantum-inspired event reconstruction with Tensor Networks: Matrix
Product States [0.0]
We show that Tensor Networks are ideal vehicles to connect quantum mechanical concepts to machine learning techniques.
We show that entanglement entropy can be used to interpret what a network learns.
arXiv Detail & Related papers (2021-06-15T18:00:02Z) - Reduced-Order Neural Network Synthesis with Robustness Guarantees [0.0]
Machine learning algorithms are being adapted to run locally on on-board, potentially hardware-limited, devices to improve user privacy, reduce latency, and be more energy efficient.
To address this issue, a method to automatically synthesize reduced-order neural networks (having fewer neurons) that approximate the input/output mapping of a larger one is introduced.
Worst-case bounds for this approximation error are obtained and the approach can be applied to a wide variety of neural networks architectures.
arXiv Detail & Related papers (2021-02-18T12:03:57Z) - Data-Driven Random Access Optimization in Multi-Cell IoT Networks with
NOMA [78.60275748518589]
Non-orthogonal multiple access (NOMA) is a key technology to enable massive machine type communications (mMTC) in 5G networks and beyond.
In this paper, NOMA is applied to improve the random access efficiency in high-density spatially-distributed multi-cell wireless IoT networks.
A novel formulation of random channel access management is proposed, in which the transmission probability of each IoT device is tuned to maximize the geometric mean of users' expected capacity.
arXiv Detail & Related papers (2021-01-02T15:21:08Z) - Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding
Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
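Several entries above (the main paper, REST, and IAIDNN) share the same algorithm-unrolling pattern: a fixed number of iterations of a classic solver is mapped onto network layers whose step sizes, thresholds, or weights become trainable. The PyTorch sketch below shows that generic pattern for the group-ISTA iteration, with one learnable step size and threshold per unrolled layer; it is a LISTA-style illustration under assumed names, not the specific architecture of any paper listed here.

```python
# Generic unrolled-ISTA sketch in PyTorch: each "layer" is one ISTA iteration
# whose step size and threshold are trainable. This illustrates the unrolling
# idea only; the layer count, initialization, and loss are illustrative assumptions.
import torch
import torch.nn as nn

class UnrolledGroupISTA(nn.Module):
    def __init__(self, S: torch.Tensor, n_layers: int = 10):
        super().__init__()
        self.register_buffer("S", S)                        # L x N preamble matrix (fixed)
        step0 = 1.0 / torch.linalg.matrix_norm(S, ord=2) ** 2
        # One trainable step size and threshold per unrolled iteration.
        self.steps = nn.Parameter(step0 * torch.ones(n_layers))
        self.thresholds = nn.Parameter(0.01 * torch.ones(n_layers))

    @staticmethod
    def group_shrink(X, tau):
        # Row-wise (block) soft-thresholding, as in the NumPy sketch above.
        row_norms = X.norm(dim=-1, keepdim=True).clamp_min(1e-12)
        return X * (1.0 - tau / row_norms).clamp_min(0.0)

    def forward(self, Y):                                   # Y: L x M received signal
        X = torch.zeros(self.S.shape[1], Y.shape[1], device=Y.device)
        for step, tau in zip(self.steps, self.thresholds):
            grad = self.S.T @ (self.S @ X - Y)              # gradient of the data-fidelity term
            X = self.group_shrink(X - step * grad, tau.abs())
        return X

# Training would minimize, e.g., the MSE between the network output and the true
# channel matrix over simulated (Y, X_true) pairs, so that a handful of unrolled
# layers can reach the accuracy of many hand-tuned ISTA iterations.
```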