Reducing Computational Complexity of Neural Networks in Optical Channel
Equalization: From Concepts to Implementation
- URL: http://arxiv.org/abs/2208.12866v1
- Date: Fri, 26 Aug 2022 21:00:05 GMT
- Title: Reducing Computational Complexity of Neural Networks in Optical Channel
Equalization: From Concepts to Implementation
- Authors: Pedro J. Freire, Antonio Napoli, Diego Arguello Ron, Bernhard
Spinnler, Michael Anderson, Wolfgang Schairer, Thomas Bex, Nelson Costa,
Sergei K. Turitsyn, Jaroslaw E. Prilepsky
- Abstract summary: We show that it is possible to design an NN-based equalizer that is simpler to implement and has better performance than the conventional digital back-propagation (DBP) equalizer with only one step per span.
An NN-based equalizer can also achieve superior performance while maintaining the same complexity as the full electronic chromatic dispersion compensation block.
- Score: 1.6987798749419218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, a new methodology is proposed that allows for the
low-complexity development of neural network (NN) based equalizers for the
mitigation of impairments in high-speed coherent optical transmission systems.
In this work, we provide a comprehensive description and comparison of various
deep model compression approaches that have been applied to feed-forward and
recurrent NN designs. Additionally, we evaluate the influence these strategies
have on the performance of each NN equalizer. Quantization, weight clustering,
pruning, and other cutting-edge strategies for model compression are taken into
consideration. We also propose and evaluate a Bayesian-optimization-assisted
compression, in which the compression hyperparameters are chosen to
simultaneously reduce complexity and improve performance. Finally, the
trade-off between the complexity and the performance of each compression
approach is evaluated using both simulated and experimental data. By utilizing
optimal compression approaches, we show that it is possible to design an
NN-based equalizer that is simpler to implement and has better performance than
the conventional digital back-propagation (DBP) equalizer with only one step
per span. This is accomplished by reducing the number of multipliers used in
the NN equalizer after applying the weight clustering and pruning algorithms.
Furthermore, we demonstrate that an NN-based equalizer can also achieve
superior performance while maintaining the same degree of complexity as
the full electronic chromatic dispersion compensation block. We conclude our
analysis by highlighting open questions and existing challenges, as well as
possible future research directions.
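The abstract's central complexity argument is that pruning removes multiplications outright, while weight clustering replaces the surviving weights with a small set of shared centroids, so a hardware implementation needs one multiplier value per centroid rather than per weight. The following is a minimal NumPy sketch of that idea on a toy dense layer; it is an illustration, not the authors' implementation, and the layer size, pruning ratio, and cluster count are arbitrary assumptions.

```python
import numpy as np

def prune_by_magnitude(w, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def cluster_weights(w, n_clusters, n_iter=50):
    """1-D k-means over the nonzero weights: each surviving weight is
    snapped to its nearest centroid, so only `n_clusters` distinct
    multiplier values remain."""
    nz = w[w != 0]
    # Initialize centroids evenly across the surviving weight range.
    centroids = np.linspace(nz.min(), nz.max(), n_clusters)
    for _ in range(n_iter):
        assign = np.argmin(np.abs(nz[:, None] - centroids[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(assign == k):
                centroids[k] = nz[assign == k].mean()
    out = w.copy()
    mask = w != 0
    nearest = np.argmin(np.abs(w[mask][:, None] - centroids[None, :]), axis=1)
    out[mask] = centroids[nearest]
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 16))          # toy dense layer: 256 weights
w_pruned = prune_by_magnitude(w, 0.5)  # drop ~50% of multiplications
w_compressed = cluster_weights(w_pruned, n_clusters=8)

print("distinct multiplier values before:", np.unique(w).size)
print("distinct multiplier values after: ",
      np.unique(w_compressed[w_compressed != 0]).size)
```

In a real equalizer the pruning ratio and cluster count would be exactly the kind of compression hyperparameters the abstract proposes tuning with Bayesian optimization, trading signal-quality loss against multiplier count.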
Related papers
- Adaptive Error-Bounded Hierarchical Matrices for Efficient Neural Network Compression [0.0]
This paper introduces a dynamic, error-bounded hierarchical matrix (H-matrix) compression method tailored for Physics-Informed Neural Networks (PINNs).
The proposed approach reduces the computational complexity and memory demands of large-scale physics-based models while preserving the essential properties of the Neural Tangent Kernel (NTK).
Empirical results demonstrate that this technique outperforms traditional compression methods, such as Singular Value Decomposition (SVD), pruning, and quantization, by maintaining high accuracy and improving generalization capabilities.
arXiv Detail & Related papers (2024-09-11T05:55:51Z) - SICNN: Soft Interference Cancellation Inspired Neural Network Equalizers [1.6451639748812472]
We propose a novel neural network (NN)-based approach, referred to as SICNN.
SICNN is designed by deep unfolding a model-based iterative soft interference cancellation (SIC) method.
We compare the bit error ratio performance of the proposed NN-based equalizers with state-of-the-art model-based and NN-based approaches.
arXiv Detail & Related papers (2023-08-24T06:40:54Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization capability offers neural image compression (NIC) superior lossy compression performance.
Distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We make efforts to formulate the essential mathematical functions to describe the R-D behavior of NIC using deep network and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z) - Efficient Micro-Structured Weight Unification and Pruning for Neural
Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially for resource-limited devices.
Previous unstructured or structured weight pruning methods can hardly truly accelerate inference.
We propose a generalized weight unification framework at a hardware compatible micro-structured level to achieve high amount of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNN).
This paper provides a new insight on conventional SISR algorithm, and proposes a substantially different approach relying on the iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z) - Structured Sparsification with Joint Optimization of Group Convolution
and Channel Shuffle [117.95823660228537]
We propose a novel structured sparsification method for efficient network compression.
The proposed method automatically induces structured sparsity on the convolutional weights.
We also address the problem of inter-group communication with a learnable channel shuffle mechanism.
arXiv Detail & Related papers (2020-02-19T12:03:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.