Low-complexity Near-optimum Symbol Detection Based on Neural Enhancement
of Factor Graphs
- URL: http://arxiv.org/abs/2203.16417v1
- Date: Wed, 30 Mar 2022 15:58:53 GMT
- Title: Low-complexity Near-optimum Symbol Detection Based on Neural Enhancement
of Factor Graphs
- Authors: Luca Schmid, Laurent Schmalen
- Abstract summary: We consider the application of the factor graph framework for symbol detection on linear inter-symbol interference channels.
We develop and evaluate strategies to improve the performance of the factor graph-based symbol detection by means of neural enhancement.
- Score: 2.030567625639093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the application of the factor graph framework for symbol
detection on linear inter-symbol interference channels. Based on the Ungerboeck
observation model, a detection algorithm with appealing complexity properties
can be derived. However, since the underlying factor graph contains cycles, the
sum-product algorithm (SPA) yields a suboptimal algorithm. In this paper, we
develop and evaluate efficient strategies to improve the performance of the
factor graph-based symbol detection by means of neural enhancement. In
particular, we consider neural belief propagation as an effective way to
mitigate the effect of cycles within the factor graph. We also investigate the
application of factor node generalizations and pruning techniques. By applying
a generic preprocessor to the channel output, we propose a simple technique to
vary the underlying factor graph in every SPA iteration. Using this dynamic
factor graph transition, we intend to preserve the extrinsic nature of the SPA
messages which is otherwise impaired due to cycles. Simulation results show
that the proposed methods can massively improve the detection performance, even
approaching the maximum a posteriori performance for various transmission
scenarios, while preserving a complexity which is linear in both the block
length and the channel memory.
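As a rough illustration of the detection approach described in the abstract, the following is a minimal sketch (not the authors' implementation) of sum-product symbol detection on the cyclic Ungerboeck factor graph for BPSK over a real-valued ISI channel. The optional msg_weights argument is only a crude stand-in for the learnable message weights of neural belief propagation; all function and parameter names, and the weighting scheme, are illustrative assumptions.

```python
import numpy as np

SYMS = np.array([+1.0, -1.0])  # BPSK alphabet

def spa_ungerboeck_bpsk(y, G, sigma2, n_iter=10, msg_weights=None):
    """Sum-product symbol detection on the (cyclic) Ungerboeck factor graph
    for a real-valued ISI channel with BPSK symbols.

    y           : matched-filter (Ungerboeck) observation vector, length N
    G           : banded Gram matrix of the channel (bandwidth = channel memory)
    msg_weights : optional per-iteration exponents applied to the messages,
                  a simple stand-in for neural-BP-style learnable weights
    """
    N = len(y)
    # Local factors F_k(x) = exp((x*y_k - 0.5*G_kk*x^2) / sigma2), shape (N, 2).
    F = np.exp((np.outer(y, SYMS) - 0.5 * np.outer(np.diag(G), SYMS ** 2)) / sigma2)
    # Neighbours of each variable: all j != k with G[k, j] != 0.
    nbrs = [[int(j) for j in np.flatnonzero(G[k]) if j != k] for k in range(N)]
    # Directed messages m[(j, k)](x_k), initialised uniform.
    m = {(j, k): np.full(2, 0.5) for k in range(N) for j in nbrs[k]}

    for it in range(n_iter):
        new_m = {}
        for (j, k) in m:
            # Extrinsic product at variable j: local factor times all incoming
            # messages except the one coming from k.
            q = F[j].copy()
            for i in nbrs[j]:
                if i != k:
                    q *= m[(i, j)]
            # Pairwise Ungerboeck factor I_{jk}(x_j, x_k) = exp(-x_j*G_jk*x_k / sigma2).
            I = np.exp(-np.outer(SYMS, SYMS) * G[j, k] / sigma2)
            out = I.T @ q
            if msg_weights is not None:
                out = out ** msg_weights[it]   # neural-BP-style message weighting
            new_m[(j, k)] = out / out.sum()
        m = new_m

    # Beliefs and hard symbol decisions.
    b = F.copy()
    for k in range(N):
        for j in nbrs[k]:
            b[k] *= m[(j, k)]
    b /= b.sum(axis=1, keepdims=True)
    return SYMS[b.argmax(axis=1)], b
```

Per iteration, the message updates touch only the nonzero off-diagonals of the banded Gram matrix, so the cost grows linearly in both the block length and the channel memory, in line with the complexity claim above.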
Related papers
- Learning signals defined on graphs with optimal transport and Gaussian process regression [1.1062090350704616]
In computational physics, machine learning has emerged as a powerful complementary tool for efficiently exploring candidate designs in engineering studies.
We propose a strategy for Gaussian process regression where the inputs are large, sparse graphs with continuous node attributes and the outputs are signals defined on the nodes of the associated inputs.
Beyond enabling signal prediction, the main point of our proposal is to provide confidence intervals on node values, which is crucial for uncertainty quantification and active learning.
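As a loose illustration of the Gaussian-process part of this entry (the paper's optimal-transport kernel between graphs is not implemented here), a toy scikit-learn sketch that returns predictive means together with confidence intervals; all inputs and names are made up for the example, and for simplicity it predicts a scalar per input rather than a full node signal.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy inputs: one feature vector per graph (the paper instead compares graphs
# with an optimal-transport distance; here plain Euclidean features are used).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 5))      # 20 graphs, 5 summary features each
y_train = np.sin(X_train).sum(axis=1)   # toy scalar target per graph

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
gpr.fit(X_train, y_train)

X_test = rng.normal(size=(5, 5))
mean, std = gpr.predict(X_test, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # 95% confidence intervals
```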
arXiv Detail & Related papers (2024-10-21T07:39:44Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss between the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
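For reference, a hedged sketch of the classical NCE logistic loss for an unnormalized model (this is the baseline objective mentioned above, not the paper's compositional-optimization method); `unnorm_log_p` and `log_p_noise` are assumed user-supplied callables.

```python
import numpy as np

def nce_loss(theta, log_c, data, noise, log_p_noise, unnorm_log_p):
    """Classical NCE logistic loss for an unnormalized model.

    unnorm_log_p(x, theta) returns the unnormalized log-density per sample;
    log_c is a learned log normalization constant treated as a free parameter.
    """
    def logit(x):
        # log p_model(x) - log p_noise(x), with log p_model = unnorm_log_p + log_c
        return unnorm_log_p(x, theta) + log_c - log_p_noise(x)

    # Real samples should be classified as "data" (label 1), noise as label 0.
    loss_data = np.logaddexp(0.0, -logit(data))   # -log sigmoid(logit)
    loss_noise = np.logaddexp(0.0, logit(noise))  # -log (1 - sigmoid(logit))
    return loss_data.mean() + loss_noise.mean()
```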
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Local Message Passing on Frustrated Systems [1.7188280334580193]
We search for an alternative message passing algorithm that works particularly well on cyclic graphs.
We replace the local SPA message update rule at the factor nodes of the underlying graph with a generic mapping, which is optimized in a data-driven fashion.
We evaluate our method for two classes of cyclic graphs: the 2x2 fully connected Ising grid and factor graphs for symbol detection on linear communication channels with inter-symbol interference.
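A minimal sketch of the idea of replacing the SPA factor-node update with a generic, data-driven mapping (the tiny MLP, its shapes, and all parameter names are illustrative assumptions, not the paper's trained model).

```python
import numpy as np

def learned_factor_update(incoming_llrs, local_params, W1, b1, W2, b2):
    """Generic replacement for the SPA factor-node update: a small MLP maps the
    incoming message LLRs and the local factor parameters to outgoing LLRs."""
    z = np.concatenate([incoming_llrs, local_params])
    h = np.tanh(W1 @ z + b1)   # hidden layer
    return W2 @ h + b2         # outgoing LLRs, one per connected variable

# In a message-passing loop, learned_factor_update(...) would be called wherever
# the closed-form SPA factor-to-variable rule is normally evaluated; the weights
# W1, b1, W2, b2 are optimized offline in a data-driven fashion, e.g. by
# backpropagating a detection loss through the unrolled iterations.
```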
arXiv Detail & Related papers (2023-06-02T12:42:09Z)
- Structural Optimization of Factor Graphs for Symbol Detection via Continuous Clustering and Machine Learning [1.5293427903448018]
We optimize the structure of the underlying factor graphs in an end-to-end manner using machine learning.
We study the combination of this approach with neural belief propagation, yielding near-maximum a posteriori symbol detection performance for specific channels.
arXiv Detail & Related papers (2022-11-21T12:31:04Z)
- Oversquashing in GNNs through the lens of information contraction and graph expansion [6.8222473597904845]
We present a framework for analyzing oversquashing based on information contraction.
We propose a graph rewiring algorithm aimed at alleviating oversquashing.
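As a toy illustration of graph rewiring (explicitly not the paper's information-contraction-based algorithm), one can add a few shortcut edges between the most distant node pairs to improve expansion; integer node labels are assumed for the example.

```python
import networkx as nx

def add_shortcut_edges(G, num_edges=2):
    """Toy rewiring: add shortcut edges between the most distant node pairs,
    which increases expansion and can relieve over-squashing bottlenecks."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    candidates = sorted(
        ((d, u, v) for u, du in dist.items() for v, d in du.items() if u < v),
        reverse=True,
    )
    H = G.copy()
    for d, u, v in candidates[:num_edges]:
        H.add_edge(u, v)
    return H
```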
arXiv Detail & Related papers (2022-08-06T08:44:39Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as a structural prior and reveal the underlying signal interdependencies.
Deep-unrolling- and deep-equilibrium-based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
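A generic sketch of the fixed-point/unrolling flavour of such architectures (not the paper's block-sparse model): an ISTA-style update iterated to an equilibrium, where the dictionary, threshold, and step size are the quantities a learned variant would train.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def deq_sparse_code(Y, D, tau=0.1, step=None, tol=1e-6, max_iter=500):
    """Deep-equilibrium-flavoured sparse coding sketch: iterate an ISTA-style
    update Z <- f(Z, Y) until it (approximately) reaches a fixed point."""
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz constant
    Z = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(max_iter):
        Z_new = soft_threshold(Z - step * D.T @ (D @ Z - Y), step * tau)
        if np.linalg.norm(Z_new - Z) < tol:
            break
        Z = Z_new
    return Z
```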
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- Neural Enhancement of Factor Graph-based Symbol Detection [2.030567625639093]
We study the application of the factor graph framework for symbol detection on linear inter-symbol interference channels.
We present and evaluate strategies to improve the performance of cyclic factor graph-based symbol detection algorithms.
arXiv Detail & Related papers (2022-03-07T12:25:24Z)
- Graph Signal Restoration Using Nested Deep Algorithm Unrolling [85.53158261016331]
Graph signal processing is a ubiquitous task in many applications such as sensor, social, transportation, and brain networks, as well as point cloud processing and graph networks.
We propose two restoration methods based on deep algorithm unrolling of the alternating direction method of multipliers (ADMM).
The parameters in the proposed restoration methods are trainable in an end-to-end manner.
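A hedged sketch of algorithm unrolling for graph signal restoration (generic Laplacian-regularized denoising via ADMM, not the paper's nested architecture); the per-iteration parameters are exactly the kind of quantities that would be trained end-to-end.

```python
import numpy as np

def unrolled_admm_denoise(y, L, gammas, rhos):
    """Fixed number of ADMM iterations for
    min_x 0.5*||x - y||^2 + gamma * x^T L x, with graph Laplacian L.
    gammas[t], rhos[t] are the trainable per-iteration parameters."""
    n = len(y)
    x, z, u = y.copy(), y.copy(), np.zeros(n)
    for gamma, rho in zip(gammas, rhos):
        x = (y + rho * (z - u)) / (1.0 + rho)                                  # data-fit step
        z = np.linalg.solve(2.0 * gamma * L + rho * np.eye(n), rho * (x + u))  # smoothing step
        u = u + x - z                                                          # dual update
    return x
```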
arXiv Detail & Related papers (2021-06-30T08:57:01Z)
- Unrolling of Deep Graph Total Variation for Image Denoising [106.93258903150702]
In this paper, we combine classical graph signal filtering with deep feature learning into a competitive hybrid design.
We employ interpretable analytical low-pass graph filters and use 80% fewer network parameters than the state-of-the-art DL denoising scheme DnCNN.
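As an illustration of an interpretable analytical low-pass graph filter (a sketch of the general idea, not the paper's graph-total-variation unrolling), the closed-form Laplacian-regularized denoiser is shown below; in a hybrid design, a small network could predict the smoothing weight or the edge weights inside L.

```python
import numpy as np

def lowpass_graph_filter(y, L, mu):
    """Closed-form minimizer of ||x - y||^2 + mu * x^T L x,
    i.e. x = (I + mu*L)^{-1} y, a standard analytical low-pass graph filter."""
    n = len(y)
    return np.linalg.solve(np.eye(n) + mu * L, y)
```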
arXiv Detail & Related papers (2020-10-21T20:04:22Z)
- Fast Graph Attention Networks Using Effective Resistance Based Graph Sparsification [70.50751397870972]
FastGAT is a method to make attention-based GNNs lightweight by using spectral sparsification to generate an optimal pruning of the input graph.
We experimentally evaluate FastGAT on several large real world graph datasets for node classification tasks.
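A simplified, dense toy version of effective-resistance-based edge pruning (not the FastGAT implementation, which relies on scalable sampling-based sparsification).

```python
import numpy as np

def effective_resistance_sparsify(A, keep_ratio=0.5):
    """Keep the edges with the largest w_e * R_e scores, where R_e is the
    effective resistance; uses a dense pseudo-inverse, so toy graphs only."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A    # graph Laplacian
    Lp = np.linalg.pinv(L)            # pseudo-inverse (avoid on large graphs)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j] > 0]
    scores = [A[i, j] * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]) for i, j in edges]
    order = np.argsort(scores)[::-1]
    kept = [edges[k] for k in order[: max(1, int(keep_ratio * len(edges)))]]
    A_sparse = np.zeros_like(A)
    for i, j in kept:
        A_sparse[i, j] = A_sparse[j, i] = A[i, j]
    return A_sparse
```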
arXiv Detail & Related papers (2020-06-15T22:07:54Z)
- Data-Driven Factor Graphs for Deep Symbol Detection [107.63351413549992]
We propose to implement factor graph methods in a data-driven manner.
In particular, we propose to use machine learning (ML) tools to learn the factor graph.
We demonstrate that the proposed system, referred to as BCJRNet, learns to implement the BCJR algorithm from a small training set.
arXiv Detail & Related papers (2020-01-31T09:23:52Z)
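A sketch of the data-driven factor-node idea (an assumption-laden illustration, not the BCJRNet code): a trained network supplies the per-state likelihoods that the standard BCJR/SPA recursion would otherwise compute from an analytical channel model, while the recursion itself stays unchanged.

```python
import numpy as np

def nn_factor_table(y, states, nn_forward):
    """Build a (time x state) factor table from a trained network instead of an
    analytical channel model.  nn_forward(y_k) is assumed to return one
    (unnormalized) score per trellis state."""
    table = np.zeros((len(y), len(states)))
    for k, yk in enumerate(y):
        scores = nn_forward(yk)                     # one score per state
        table[k] = np.exp(scores - scores.max())    # softmax-style normalization
    return table / table.sum(axis=1, keepdims=True)
```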