On Neural Architectures for Deep Learning-based Source Separation of
Co-Channel OFDM Signals
- URL: http://arxiv.org/abs/2303.06438v1
- Date: Sat, 11 Mar 2023 16:29:13 GMT
- Title: On Neural Architectures for Deep Learning-based Source Separation of
Co-Channel OFDM Signals
- Authors: Gary C.F. Lee and Amir Weiss and Alejandro Lancho and Yury Polyanskiy
and Gregory W. Wornell
- Abstract summary: We study the single-channel source separation problem involving orthogonal frequency-division multiplexing (OFDM) signals.
We propose critical domain-informed modifications to the network parameterization, based on insights from OFDM structures.
- Score: 104.11663769306566
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We study the single-channel source separation problem involving orthogonal
frequency-division multiplexing (OFDM) signals, which are ubiquitous in many
modern-day digital communication systems. Related efforts have been pursued in
monaural source separation, where state-of-the-art neural architectures have
been adopted to train an end-to-end separator for audio signals (as
1-dimensional time series). In this work, through a prototype problem based on
the OFDM source model, we assess -- and question -- the efficacy of using
audio-oriented neural architectures in separating signals based on features
pertinent to communication waveforms. Perhaps surprisingly, we demonstrate that
in some configurations, where perfect separation is theoretically attainable,
these audio-oriented neural architectures perform poorly in separating
co-channel OFDM waveforms. Yet, we propose critical domain-informed
modifications to the network parameterization, based on insights from OFDM
structures, that can confer about 30 dB improvement in performance.
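The prototype problem above involves mixtures of co-channel OFDM waveforms. As a rough illustration of that setup (not the paper's actual source model or parameters), the following sketch synthesizes a baseband OFDM waveform with a cyclic prefix and mixes two independent sources at an assumed signal-to-interference ratio; the QPSK constellation, FFT size of 64, and CP length of 16 are illustrative assumptions.

```python
import numpy as np

def ofdm_waveform(n_symbols=10, n_fft=64, cp_len=16, rng=None):
    """Baseband OFDM: QPSK symbols on each subcarrier, per-symbol IFFT,
    and a cyclic prefix (CP) prepended to each OFDM symbol."""
    if rng is None:
        rng = np.random.default_rng()
    phases = np.pi / 4 + (np.pi / 2) * rng.integers(0, 4, size=(n_symbols, n_fft))
    freq = np.exp(1j * phases)                       # unit-modulus QPSK symbols
    time = np.fft.ifft(freq, axis=1) * np.sqrt(n_fft)
    with_cp = np.concatenate([time[:, -cp_len:], time], axis=1)  # prepend CP
    return with_cp.reshape(-1)

def co_channel_mixture(sir_db=3.0, seed=0):
    """Mix two independent OFDM sources at a target signal-to-interference
    ratio (the 3 dB default is an illustrative assumption)."""
    rng = np.random.default_rng(seed)
    s = ofdm_waveform(rng=rng)                       # signal of interest
    b = ofdm_waveform(rng=rng)                       # co-channel interference
    b *= 10 ** (-sir_db / 20) * np.linalg.norm(s) / np.linalg.norm(b)
    return s + b, s, b

y, s, b = co_channel_mixture()
```

A separator would then be trained to recover `s` from the mixture `y`; the cyclic-prefix and subcarrier structure synthesized here is the kind of domain knowledge the paper's modifications exploit.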
Related papers
- MIMO-DBnet: Multi-channel Input and Multiple Outputs DOA-aware
Beamforming Network for Speech Separation [55.533789120204055]
We propose an end-to-end beamforming network for direction guided speech separation given merely the mixture signal.
Specifically, we design a multi-channel input and multiple outputs architecture to predict the direction-of-arrival based embeddings and beamforming weights for each source.
arXiv Detail & Related papers (2022-12-07T01:52:40Z)
- Data-Driven Blind Synchronization and Interference Rejection for Digital Communication Signals [98.95383921866096]
We study the potential of data-driven deep learning methods for separation of two communication signals from an observation of their mixture.
We show that capturing high-resolution temporal structures (nonstationarities) leads to substantial performance gains.
We propose a domain-informed neural network (NN) design that is able to improve upon both "off-the-shelf" NNs and classical detection and interference rejection methods.
arXiv Detail & Related papers (2022-09-11T14:10:37Z)
- Implicit Neural Spatial Filtering for Multichannel Source Separation in the Waveform Domain [131.74762114632404]
The model is trained end-to-end and performs spatial processing implicitly.
We evaluate the proposed model on a real-world dataset and show that the model matches the performance of an oracle beamformer.
arXiv Detail & Related papers (2022-06-30T17:13:01Z)
- Multi-Channel End-to-End Neural Diarization with Distributed Microphones [53.99406868339701]
We replace Transformer encoders in EEND with two types of encoders that process a multi-channel input.
We also propose a model adaptation method using only single-channel recordings.
arXiv Detail & Related papers (2021-10-10T03:24:03Z)
- Compute and memory efficient universal sound source separation [23.152611264259225]
We provide a family of efficient neural network architectures for general purpose audio source separation.
The backbone structure of this convolutional network is the SUccessive DOwnsampling and Resampling of Multi-Resolution Features (SuDoRM-RF).
Our experiments show that SuDoRM-RF models perform comparably to, and even surpass, several state-of-the-art benchmarks.
arXiv Detail & Related papers (2021-03-03T19:16:53Z)
- Deep Joint Source Channel Coding for Wireless Image Transmission with OFDM [6.799021090790035]
The proposed encoder and decoder use convolutional neural networks (CNN) and directly map the source images to complex-valued baseband samples.
The proposed model-driven machine learning approach eliminates the need for separate source and channel coding.
Our method is shown to be robust against non-linear signal clipping in OFDM for various channel conditions.
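The clipping robustness noted above stems from OFDM's high peak-to-average power ratio, which drives amplifier saturation. As a minimal sketch of this nonlinearity (not the paper's model; the 3 dB clipping ratio over RMS is an illustrative assumption), the following limits sample magnitudes while preserving phase:

```python
import numpy as np

def clip_ofdm(x, clip_ratio_db=3.0):
    """Magnitude-limit complex baseband samples while preserving phase,
    a common model of power-amplifier clipping of OFDM signals.
    The clipping ratio (threshold over RMS) is an illustrative assumption."""
    rms = np.sqrt(np.mean(np.abs(x) ** 2))
    thresh = rms * 10 ** (clip_ratio_db / 20)
    mag = np.maximum(np.abs(x), 1e-12)   # avoid divide-by-zero on silent samples
    return x * np.minimum(1.0, thresh / mag)

# OFDM time-domain samples are approximately complex Gaussian for many
# subcarriers, so a Gaussian surrogate suffices for illustration here.
rng = np.random.default_rng(1)
x = (rng.standard_normal(2048) + 1j * rng.standard_normal(2048)) / np.sqrt(2)
y = clip_ofdm(x)
```

Samples below the threshold pass through unchanged; only peaks are attenuated, which is the distortion the cited receiver must tolerate.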
arXiv Detail & Related papers (2021-01-05T22:27:20Z)
- DBNET: DOA-driven beamforming network for end-to-end farfield sound source separation [20.200763595732912]
We propose a direction-of-arrival-driven beamforming network (DBnet) for end-to-end source separation.
We also propose end-to-end extensions of DBnet which incorporate post masking networks.
The experimental results show that the proposed extended DBnet using a convolutional-recurrent post masking network outperforms state-of-the-art source separation methods.
arXiv Detail & Related papers (2020-10-22T09:52:05Z)
- Multi-Tones' Phase Coding (MTPC) of Interaural Time Difference by Spiking Neural Network [68.43026108936029]
We propose a pure spiking neural network (SNN) based computational model for precise sound localization in the noisy real-world environment.
We implement this algorithm in a real-time robotic system with a microphone array.
The experimental results show a mean azimuth error of 13 degrees, which surpasses the accuracy of other biologically plausible neuromorphic approaches to sound source localization.
arXiv Detail & Related papers (2020-07-07T08:22:56Z)
- Deep Receiver Design for Multi-carrier Waveforms Using CNNs [8.9379057739817]
We propose to use a convolutional neural network (CNN) for joint detection and demodulation of the received signal at the receiver in wireless environments.
We compare our proposed architecture to classical methods and demonstrate that our CNN-based architecture can perform better on different multi-carrier waveforms.
arXiv Detail & Related papers (2020-06-02T10:29:05Z)
- DDSP: Differentiable Digital Signal Processing [13.448630251745163]
We introduce the Differentiable Digital Signal Processing (DDSP) library, which enables direct integration of classic signal processing elements with deep learning methods.
We achieve high-fidelity generation without the need for large autoregressive models or adversarial losses.
DDSP enables an interpretable and modular approach to generative modeling, without sacrificing the benefits of deep learning.
arXiv Detail & Related papers (2020-01-14T06:49:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.