Capacity Optimality of OAMP in Coded Large Unitarily Invariant Systems
- URL: http://arxiv.org/abs/2206.11680v1
- Date: Thu, 23 Jun 2022 13:11:20 GMT
- Title: Capacity Optimality of OAMP in Coded Large Unitarily Invariant Systems
- Authors: Lei Liu, Shansuo Liang, and Li Ping
- Abstract summary: We investigate a large unitarily invariant system (LUIS) involving a unitarily invariant sensing matrix, an arbitrary fixed signal distribution, and forward error control (FEC) coding.
We show that OAMP with the optimized codes significantly outperforms both the un-optimized ones and the well-known Turbo linear MMSE algorithm.
- Score: 9.101719525164803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates a large unitarily invariant system (LUIS) involving a
unitarily invariant sensing matrix, an arbitrary fixed signal distribution, and
forward error control (FEC) coding. Several area properties are established
based on the state evolution of orthogonal approximate message passing (OAMP)
in an un-coded LUIS. Under the assumptions that the state evolution for joint
OAMP and FEC decoding is correct and the replica method is reliable, we analyze
the achievable rate of OAMP. We prove that OAMP reaches the constrained
capacity predicted by the replica method of the LUIS with an arbitrary signal
distribution based on matched FEC coding. Meanwhile, we elaborate a constrained
capacity-achieving coding principle for LUIS, based on which irregular
low-density parity-check (LDPC) codes are optimized for binary signaling in the
simulation results. We show that OAMP with the optimized codes has significant
performance improvement over the un-optimized ones and the well-known Turbo
linear MMSE algorithm. For quadrature phase-shift keying (QPSK) modulation,
constrained capacity-approaching bit error rate (BER) performances are observed
under various channel conditions.
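As an illustration of the OAMP iteration whose state evolution the abstract builds on, the following is a minimal, hypothetical Python sketch for an un-coded system with BPSK signaling. The matrix sizes, noise level, and the de-correlated LMMSE / divergence-free denoiser structure follow the standard OAMP recipe, not this paper's exact setup; an i.i.d. Gaussian matrix is used only as one example of a unitarily invariant ensemble.

```python
import numpy as np

# Sketch of OAMP for y = A x + w with BPSK x (illustrative parameters).
rng = np.random.default_rng(0)
n = m = 256
A = rng.standard_normal((m, n)) / np.sqrt(n)   # one unitarily invariant example
x = rng.choice([-1.0, 1.0], size=n)            # BPSK signal
sigma2 = 0.1
y = A @ x + np.sqrt(sigma2) * rng.standard_normal(m)

def bpsk_posterior(r, tau):
    """Posterior mean/variance of x in {-1,+1} given r = x + N(0, tau)."""
    mean = np.tanh(r / tau)
    var = np.mean(1.0 - mean ** 2)
    return mean, var

xt, vt = np.zeros(n), 1.0                      # prior mean and variance of BPSK
for _ in range(20):
    # LE: LMMSE estimator, normalized so tr(W_hat A) = n ("de-correlated")
    W = vt * A.T @ np.linalg.inv(vt * (A @ A.T) + sigma2 * np.eye(m))
    W_hat = W * (n / np.trace(W @ A))
    r = xt + W_hat @ (y - A @ xt)
    # Input variance of r via the standard trace formula (no genie needed)
    B = np.eye(n) - W_hat @ A
    tau = (vt * np.sum(B * B) + sigma2 * np.sum(W_hat * W_hat)) / n
    # NLE: divergence-free version of the BPSK posterior-mean denoiser
    u, eps = bpsk_posterior(r, tau)
    xt = (tau / (tau - eps)) * (u - (eps / tau) * r)
    vt = eps * tau / (tau - eps)               # harmonic-mean variance update

ber = np.mean(np.sign(u) != x)                 # hard decisions from posterior mean
print(f"BER = {ber:.4f}")
```

The two orthogonalization steps (trace-normalized LMMSE and divergence-free denoising) are what keep the per-iteration error Gaussian-like, which is the property the paper's area and achievable-rate arguments rely on.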
Related papers
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- The Near-optimal Performance of Quantum Error Correction Codes [2.670972517608388]
We derive the near-optimal channel fidelity, a concise and optimization-free metric for arbitrary codes and noise.
Compared to conventional optimization-based approaches, the reduced computational cost enables us to simulate systems with previously inaccessible sizes.
We analytically derive the near-optimal performance for the thermodynamic code and the Gottesman-Kitaev-Preskill (GKP) code.
arXiv Detail & Related papers (2024-01-04T01:44:53Z)
- Deep Learning Assisted Multiuser MIMO Load Modulated Systems for Enhanced Downlink mmWave Communications [68.96633803796003]
This paper is focused on multiuser load modulation arrays (MU-LMAs) which are attractive due to their low system complexity and reduced cost for millimeter wave (mmWave) multi-input multi-output (MIMO) systems.
The existing precoding algorithm for downlink MU-LMA relies on a sub-array structured (SAS) transmitter which may suffer from decreased degrees of freedom and complex system configuration.
In this paper, we conceive an MU-LMA system employing a full-array structured (FAS) transmitter and propose two algorithms accordingly.
arXiv Detail & Related papers (2023-11-08T08:54:56Z)
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap where known algorithms require a signal-to-noise ratio bigger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- Robust Quantitative Susceptibility Mapping via Approximate Message Passing with Parameter Estimation [14.22930572798757]
We propose a probabilistic Bayesian approach for quantitative susceptibility mapping (QSM) with built-in parameter estimation.
On the simulated Sim2Snr1 dataset, AMP-PE achieved the lowest NRMSE, DFCM and the highest SSIM.
On the in vivo datasets, AMP-PE is robust and successfully recovers the susceptibility maps using the estimated parameters.
arXiv Detail & Related papers (2022-07-29T14:38:03Z)
- Sufficient-Statistic Memory AMP [12.579567275436343]
A key feature of the AMP-type algorithms is that their dynamics can be correctly described by state evolution.
This paper proposes a sufficient-statistic memory AMP (SS-MAMP) algorithm framework.
arXiv Detail & Related papers (2021-12-31T07:25:18Z)
- Memory Approximate Message Passing [9.116196799517262]
Approximate message passing (AMP) is a low-cost iterative parameter-estimation technique.
This paper proposes a low-complexity memory AMP (MAMP) for unitarily-invariant matrices.
arXiv Detail & Related papers (2020-12-20T07:42:15Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Fast Immune System Inspired Hypermutation Operators for Combinatorial Optimisation [0.0]
We propose modifications to the traditional hypermutations with mutation potential.
We show the superiority of the HMP operators to the traditional ones in an analysis of the complete standard Opt-IA AIS.
arXiv Detail & Related papers (2020-09-01T16:22:57Z)
- Modal Regression based Structured Low-rank Matrix Recovery for Multi-view Learning [70.57193072829288]
Low-rank Multi-view Subspace Learning has shown great potential in cross-view classification in recent years.
Existing LMvSL based methods are incapable of well handling view discrepancy and discriminancy simultaneously.
We propose Structured Low-rank Matrix Recovery (SLMR), a unique method of effectively removing view discrepancy and improving discriminancy.
arXiv Detail & Related papers (2020-03-22T03:57:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.