Domain Adaptation: the Key Enabler of Neural Network Equalizers in
Coherent Optical Systems
- URL: http://arxiv.org/abs/2202.12689v1
- Date: Fri, 25 Feb 2022 13:46:33 GMT
- Title: Domain Adaptation: the Key Enabler of Neural Network Equalizers in
Coherent Optical Systems
- Authors: Pedro J. Freire, Bernhard Spinnler, Daniel Abode, Jaroslaw E.
Prilepsky, Abdallah A. I. Ali, Nelson Costa, Wolfgang Schairer, Antonio
Napoli, Andrew D. Ellis, Sergei K. Turitsyn
- Abstract summary: We introduce the domain adaptation and randomization approach for calibrating neural network-based equalizers for real transmissions.
The approach reduces the training process by up to 99%.
- Score: 1.4549914190846531
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the domain adaptation and randomization approach for calibrating
neural network-based equalizers for real transmissions, using synthetic data.
The approach reduces the training process by up to 99%, which we demonstrate in three experimental setups.
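As a rough, hypothetical illustration of the calibration idea (the channel model, network size, learning rates, and data volumes below are made-up placeholders, not the paper's setup), a neural equalizer can be pretrained on synthetic data with randomized channel parameters and then briefly fine-tuned on a small measured dataset:

```python
# Minimal sketch (not the authors' code): pretrain an MLP equalizer on
# randomized synthetic channel data, then adapt it with a small measured set.
import torch
import torch.nn as nn

torch.manual_seed(0)

def synthetic_batch(n, noise_std, nl_coeff):
    """Toy channel: transmitted symbol x is distorted nonlinearly (assumed model)."""
    x = torch.randn(n, 1)
    y = x + nl_coeff * x**3 + noise_std * torch.randn(n, 1)  # received samples
    return y, x  # the equalizer maps received -> transmitted

equalizer = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()

# 1) Domain randomization: train over many randomly drawn channel parameters.
opt = torch.optim.Adam(equalizer.parameters(), lr=1e-3)
for step in range(2000):
    noise_std = 0.05 + 0.15 * torch.rand(1).item()   # randomized impairments
    nl_coeff = 0.05 + 0.10 * torch.rand(1).item()
    y, x = synthetic_batch(256, noise_std, nl_coeff)
    opt.zero_grad()
    loss_fn(equalizer(y), x).backward()
    opt.step()

# 2) Domain adaptation: fine-tune briefly on a small "measured" dataset,
#    here emulated by one fixed parameter draw standing in for the real link.
y_meas, x_meas = synthetic_batch(512, noise_std=0.12, nl_coeff=0.08)
opt_ft = torch.optim.Adam(equalizer.parameters(), lr=1e-4)  # smaller learning rate
for epoch in range(20):                                      # short adaptation phase
    opt_ft.zero_grad()
    loss_fn(equalizer(y_meas), x_meas).backward()
    opt_ft.step()
```

The up-to-99% figure in the abstract refers to the reduction in training on the target transmission reported by the authors; the pretraining/fine-tuning split above is only illustrative.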
Related papers
- Improving Generalization of Deep Neural Networks by Optimum Shifting [33.092571599896814]
We propose a novel method called optimum shifting, which changes the parameters of a neural network from a sharp minimum to a flatter one.
Our method is based on the observation that when the input and output of a neural network are fixed, the matrix multiplications within the network can be treated as systems of under-determined linear equations.
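A minimal numpy sketch of that observation (using the minimum-Frobenius-norm solution as a stand-in for the paper's flatness criterion, which is an assumption here, not the paper's algorithm):

```python
# Sketch: with the layer inputs A and outputs Z fixed, A @ W = Z is
# under-determined in W when the batch is smaller than the input width,
# so W can be shifted without changing the layer's outputs.
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 16, 64, 8          # fewer samples than input features
A = rng.standard_normal((n, d_in))  # fixed input activations
W = rng.standard_normal((d_in, d_out))
Z = A @ W                           # fixed layer outputs to preserve

# One valid shift: the minimum-norm solution of A @ W_new = Z (a stand-in
# for moving toward a "flatter" setting; the paper uses its own criterion).
W_new = np.linalg.pinv(A) @ Z

print(np.allclose(A @ W_new, Z))                 # outputs unchanged: True
print(np.linalg.norm(W), np.linalg.norm(W_new))  # parameter norm decreases
```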
arXiv Detail & Related papers (2024-05-23T02:31:55Z) - An Analytic Solution to Covariance Propagation in Neural Networks [10.013553984400488]
This paper presents a sample-free moment propagation technique to accurately characterize the input-output distributions of neural networks.
A key enabler of our technique is an analytic solution for the covariance of random variables passed through nonlinear activation functions.
The wide applicability and merits of the proposed technique are shown in experiments analyzing the input-output distributions of trained neural networks and training Bayesian neural networks.
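As a small, hedged illustration of sample-free moment propagation (only the univariate rectified-Gaussian case; the paper's full analytic covariance result is not reproduced here):

```python
# Standard rectified-Gaussian moments: propagate the mean and variance of
# X ~ N(mu, sigma^2) through ReLU analytically and check against sampling.
import math
import numpy as np

def relu_moments(mu, sigma):
    """Mean and variance of max(X, 0) for X ~ N(mu, sigma^2)."""
    a = mu / sigma
    pdf = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))
    mean = mu * cdf + sigma * pdf
    second = (mu * mu + sigma * sigma) * cdf + mu * sigma * pdf
    return mean, second - mean * mean

mu, sigma = 0.3, 1.2
mean_a, var_a = relu_moments(mu, sigma)

samples = np.maximum(np.random.default_rng(0).normal(mu, sigma, 1_000_000), 0.0)
print(mean_a, samples.mean())   # analytic vs sampled mean
print(var_a, samples.var())     # analytic vs sampled variance
```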
arXiv Detail & Related papers (2024-03-24T14:08:24Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
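A toy sketch of the underlying representation only (neurons as nodes, weights and biases as edge and node features; the equivariant GNN that processes such graphs is not shown, and the layer sizes are arbitrary):

```python
# Toy sketch of "a neural network as a graph of its parameters": neurons become
# nodes and each weight becomes an edge attribute (biases stored on nodes).
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [3, 4, 2]  # illustrative MLP shape
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [rng.standard_normal(n) for n in layer_sizes[1:]]

g = nx.DiGraph()
for layer, size in enumerate(layer_sizes):
    for i in range(size):
        g.add_node((layer, i), bias=float(biases[layer - 1][i]) if layer > 0 else 0.0)

for layer, w in enumerate(weights):
    for i in range(w.shape[0]):
        for j in range(w.shape[1]):
            g.add_edge((layer, i), (layer + 1, j), weight=float(w[i, j]))

print(g.number_of_nodes(), g.number_of_edges())  # 9 nodes, 3*4 + 4*2 = 20 edges
```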
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Reparameterization through Spatial Gradient Scaling [69.27487006953852]
Reparameterization aims to improve the generalization of deep neural networks by transforming convolutional layers into equivalent multi-branched structures during training.
We present a novel spatial gradient scaling method to redistribute learning focus among weights in convolutional networks.
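A generic sketch of the mechanism only (a hand-picked spatial mask applied through a gradient hook; the paper derives its scaling from the network's spatial structure rather than using a fixed mask):

```python
# Rescale the gradient of a conv kernel by a per-position mask so some spatial
# taps of the kernel receive more learning signal than others.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

# Illustrative mask: emphasize the kernel centre over its borders.
mask = torch.tensor([[0.5, 0.5, 0.5],
                     [0.5, 2.0, 0.5],
                     [0.5, 0.5, 0.5]])
conv.weight.register_hook(lambda grad: grad * mask)  # broadcasts over (out, in, 3, 3)

x = torch.randn(4, 3, 16, 16)
loss = conv(x).pow(2).mean()
loss.backward()
print(conv.weight.grad.shape)  # scaled gradient, shape (8, 3, 3, 3)
```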
arXiv Detail & Related papers (2023-03-05T17:57:33Z) - Transfer Learning Enhanced Full Waveform Inversion [2.3020018305241337]
We propose a way to favorably employ neural networks in the field of non-destructive testing using Full Waveform Inversion (FWI).
The presented methodology discretizes the unknown material distribution in the domain with a neural network within an adjoint optimization.
arXiv Detail & Related papers (2023-02-22T10:12:07Z) - Simple initialization and parametrization of sinusoidal networks via
their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
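A small numpy illustration of the adjustable-bandwidth effect (the network width and frequency scales are arbitrary choices, not taken from the paper):

```python
# The first-layer frequency scale of a random sinusoidal network controls how
# fast its output varies, i.e. the bandwidth of functions it tends to represent.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 1024)

def random_sine_net(x, omega0, width=64):
    """One hidden layer with sin activations: f(x) = w2 . sin(omega0*(w1*x + b1))."""
    w1 = rng.standard_normal(width)
    b1 = rng.uniform(-np.pi, np.pi, width)
    w2 = rng.standard_normal(width) / width
    return np.sin(omega0 * (np.outer(x, w1) + b1)) @ w2

for omega0 in (1.0, 30.0):
    f = random_sine_net(x, omega0)
    spectrum = np.abs(np.fft.rfft(f))
    centroid = (np.arange(spectrum.size) * spectrum).sum() / spectrum.sum()
    print(omega0, round(float(centroid), 1))  # spectral centroid grows with omega0
```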
arXiv Detail & Related papers (2022-11-26T07:41:48Z) - Binaural Rendering of Ambisonic Signals by Neural Networks [28.056334728309423]
Experimental results show that neural networks outperform the conventional method in objective metrics and achieve comparable results in subjective metrics.
Our proposed system achieves an SDR of 7.32 and MOS scores of 3.83, 3.58, 3.87, and 3.58 in the quality, timbre, localization, and immersion dimensions, respectively.
arXiv Detail & Related papers (2022-11-04T07:57:37Z) - Feature Alignment for Approximated Reversibility in Neural Networks [0.0]
We introduce feature alignment, a technique for obtaining approximate reversibility in artificial neural networks.
We show that the technique can be modified for training neural networks locally, saving computational memory resources.
arXiv Detail & Related papers (2021-06-23T17:42:47Z) - PILOT: Introducing Transformers for Probabilistic Sound Event
Localization [107.78964411642401]
This paper introduces a novel transformer-based sound event localization framework, where temporal dependencies in the received multi-channel audio signals are captured via self-attention mechanisms.
The framework is evaluated on three publicly available multi-source sound event localization datasets and compared against state-of-the-art methods in terms of localization error and event detection accuracy.
arXiv Detail & Related papers (2021-06-07T18:29:19Z) - Sampling-free Variational Inference for Neural Networks with
Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z) - LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose a new approach for the regularization of neural networks by the local Rademacher complexity called LocalDrop.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.