Encoding Optimization for Low-Complexity Spiking Neural Network Equalizers in IM/DD Systems
- URL: http://arxiv.org/abs/2508.13783v1
- Date: Tue, 19 Aug 2025 12:32:13 GMT
- Title: Encoding Optimization for Low-Complexity Spiking Neural Network Equalizers in IM/DD Systems
- Authors: Eike-Manuel Edelmann, Alexander von Bank, Laurent Schmalen
- Abstract summary: We propose a reinforcement learning-based algorithm to optimize neural encoding parameters for spiking neural networks (SNNs). Applied to an SNN-based equalizer and demapper in an IM/DD system, the method improves performance while reducing computational load and network size.
- Score: 49.34817254755008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural encoding parameters for spiking neural networks (SNNs) are typically set heuristically. We propose a reinforcement learning-based algorithm to optimize them. Applied to an SNN-based equalizer and demapper in an IM/DD system, the method improves performance while reducing computational load and network size.
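The abstract's idea, treating the encoding parameters as something to be searched rather than set heuristically, can be pictured with a minimal stochastic hill-climbing sketch. This is a stand-in, not the paper's RL algorithm: the threshold encoder, the spike-rate reward, and all constants below are illustrative assumptions (the real system would score candidates by equalizer error rate).

```python
import random

def encode(signal, threshold):
    """Illustrative threshold encoding: spike where the sample magnitude
    exceeds the (tunable) encoding threshold."""
    return [1 if abs(x) > threshold else 0 for x in signal]

def reward(signal, threshold, target_rate=0.3):
    """Toy reward: prefer sparse-but-active encodings by matching a target
    spike rate; a real equalizer would reward low bit-error rate instead."""
    spikes = encode(signal, threshold)
    rate = sum(spikes) / len(spikes)
    return -abs(rate - target_rate)

def optimize_threshold(signal, steps=200, seed=0):
    """Greedy stochastic search over the encoding parameter: perturb the
    threshold and keep the perturbation whenever the reward improves."""
    rng = random.Random(seed)
    theta, best = 1.0, float("-inf")
    for _ in range(steps):
        candidate = max(1e-3, theta + rng.gauss(0.0, 0.1))
        r = reward(signal, candidate)
        if r > best:
            theta, best = candidate, r
    return theta

rng = random.Random(1)
signal = [rng.gauss(0.0, 1.0) for _ in range(500)]
theta = optimize_threshold(signal)
```

The point mirrors the abstract: the encoder itself stays fixed and cheap, and only its parameters move, so performance can improve without growing the network.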
Related papers
- Precoder Learning for Weighted Sum Rate Maximization [5.305346885414619]
We propose a novel deep neural network (DNN) to learn the precoder for weighted sum rate maximization (WSRM). Compared to existing approaches, the proposed DNN leverages the joint unitary and permutation equivariance inherent in the optimal precoding policy. Simulation results demonstrate that the proposed method significantly outperforms baseline learning methods in terms of both learning and generalization performance.
arXiv Detail & Related papers (2025-03-06T14:45:38Z)
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval. A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed. The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z)
- Energy-efficient Spiking Neural Network Equalization for IM/DD Systems with Optimized Neural Encoding [53.909333359654276]
We propose an energy-efficient equalizer for IM/DD systems based on spiking neural networks.
We optimize a neural spike encoding that boosts the equalizer's performance while decreasing energy consumption.
arXiv Detail & Related papers (2023-12-20T10:45:24Z)
- Neural information coding for efficient spike-based image denoising [0.5156484100374058]
In this work we investigate Spiking Neural Networks (SNNs) for Gaussian denoising.
We propose a formal analysis of the information conversion process carried out by Leaky Integrate-and-Fire (LIF) neurons.
We compare its performance with the classical rate-coding mechanism.
Our results show that SNNs with LIF neurons can provide competitive denoising performance but at a reduced computational cost.
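The LIF conversion the analysis concerns, continuous inputs in and spike trains out, reduces to a few lines in discrete time; the leak factor, unit threshold, and hard reset below are illustrative choices rather than the paper's exact neuron model.

```python
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Discrete-time leaky integrate-and-fire neuron: leak the membrane
    potential, add the input, spike and reset when the threshold is crossed."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x              # leaky integration step
        if v >= threshold:
            spikes.append(1)
            v = 0.0                   # hard reset after each spike
        else:
            spikes.append(0)
    return spikes
```

A constant input produces a regular spike train whose rate grows with the input strength, which is the input-to-rate conversion that the comparison against classical rate coding refers to.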
arXiv Detail & Related papers (2023-05-15T09:05:32Z)
- Spiking Neural Network Decision Feedback Equalization for IM/DD Systems [70.3497683558609]
A spiking neural network (SNN) equalizer with a decision feedback structure is applied to an IM/DD link with various parameters.
The SNN outperforms linear and artificial neural network (ANN) based equalizers.
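The decision feedback structure itself is classical and easy to sketch; below, fixed linear taps stand in for the paper's SNN, and the channel and tap values are illustrative. The equalizer subtracts a filtered copy of its own past hard decisions from the filtered received samples to cancel post-cursor intersymbol interference.

```python
def dfe(received, ff_taps, fb_taps):
    """Decision-feedback equalizer for +/-1 symbols: feedforward filter on the
    received samples minus a feedback filter on previous hard decisions."""
    decisions = []
    for n in range(len(received)):
        ff = sum(w * received[n - k]
                 for k, w in enumerate(ff_taps) if n - k >= 0)
        fb = sum(w * decisions[-1 - k]
                 for k, w in enumerate(fb_taps) if len(decisions) > k)
        decisions.append(1.0 if ff - fb >= 0 else -1.0)  # hard slicer
    return decisions
```

For a toy channel `r[n] = s[n] + 0.5*s[n-1]`, a single feedback tap of 0.5 removes the post-cursor term exactly; the feedback structure stays attractive for the nonlinear IM/DD channel once the linear filters are replaced by a learned (here, spiking) network.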
arXiv Detail & Related papers (2023-04-27T12:49:31Z)
- SA-CNN: Application to text categorization issues using simulated annealing-based convolutional neural network optimization [0.0]
Convolutional neural networks (CNNs) are a representative class of deep learning algorithms.
We introduce SA-CNN neural networks for text classification tasks based on Text-CNN neural networks.
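The simulated-annealing search that SA-CNN wraps around a Text-CNN can be sketched in isolation; here it minimises a toy one-dimensional objective instead of validation loss over CNN hyperparameters, and the step size and geometric cooling schedule are illustrative.

```python
import math
import random

def simulated_annealing(objective, x0, steps=1000, t0=1.0, cooling=0.99, seed=0):
    """Minimise `objective` by random perturbation: always accept improvements,
    and accept worse moves with a Boltzmann probability that shrinks as the
    temperature cools, so the search settles down over time."""
    rng = random.Random(seed)
    x, fx, t = x0, objective(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
        t *= cooling                      # geometric cooling
    return best_x, best_f

best_x, best_f = simulated_annealing(lambda x: (x - 2.0) ** 2, x0=-5.0)
```

The temperature-dependent acceptance of worse moves is what lets the search escape local minima early on, which is the motivation for preferring annealing over plain hill climbing when tuning a network.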
arXiv Detail & Related papers (2023-03-13T14:27:34Z)
- Stochastic Markov Gradient Descent and Training Low-Bit Neural Networks [77.34726150561087]
We introduce Stochastic Markov Gradient Descent (SMGD), a discrete optimization method applicable to training quantized neural networks.
We provide theoretical guarantees of algorithm performance as well as encouraging numerical results.
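The entry does not spell out the SMGD update itself, so no attempt is made to reproduce it here; as background for "training quantized networks", the basic primitive such low-bit methods rest on, unbiased stochastic rounding onto a coarse grid, can be sketched:

```python
import math
import random

def stochastic_round(x, step=0.25, rng=random.Random(0)):
    """Round x onto the grid {k * step}, going up with probability equal to
    the fractional position, so the rounding is unbiased: E[result] = x."""
    lo = math.floor(x / step) * step
    p_up = (x - lo) / step
    return lo + step if rng.random() < p_up else lo

# Each draw lands on a grid point, but the average recovers the input value,
# which is why small gradient signals survive aggressive quantization.
draws = [stochastic_round(0.1) for _ in range(10_000)]
mean = sum(draws) / len(draws)
```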
arXiv Detail & Related papers (2020-08-25T15:48:15Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- A Supervised Learning Algorithm for Multilayer Spiking Neural Networks Based on Temporal Coding Toward Energy-Efficient VLSI Processor Design [2.6872737601772956]
Spiking neural networks (SNNs) are brain-inspired mathematical models with the ability to process information in the form of spikes.
We propose a novel supervised learning algorithm for SNNs based on temporal coding.
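Temporal coding can be illustrated with the simplest time-to-first-spike rule (an assumption for illustration; the paper's exact coding may differ): stronger inputs fire earlier, so a single spike time, rather than a firing rate, carries each value.

```python
def ttfs_encode(values, t_max=10.0):
    """Time-to-first-spike encoding: map each value in [0, 1] to one spike
    time, with 1.0 firing immediately (t = 0) and 0.0 firing last (t = t_max)."""
    return [t_max * (1.0 - v) for v in values]
```

Because each input contributes exactly one spike, downstream layers see far fewer events than under rate coding, which is where the energy-efficiency argument in the title comes from.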
arXiv Detail & Related papers (2020-01-08T03:37:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.