Model-based learning for location-to-channel mapping
- URL: http://arxiv.org/abs/2308.14370v1
- Date: Mon, 28 Aug 2023 07:39:53 GMT
- Title: Model-based learning for location-to-channel mapping
- Authors: Baptiste Chatelier (IETR, MERCE-France, INSA Rennes), Luc Le Magoarou
(IETR, INSA Rennes), Vincent Corlay (MERCE-France), Matthieu Crussière
(IETR, INSA Rennes)
- Abstract summary: This paper presents a frugal, model-based network that separates the low-frequency from the high-frequency components of the target mapping function.
This yields a hypernetwork architecture in which the neural network only learns low-frequency sparse coefficients in a dictionary of high-frequency components.
Simulation results show that the proposed neural network outperforms standard approaches on realistic synthetic data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern communication systems rely on accurate channel estimation to achieve
efficient and reliable transmission of information. As the communication
channel response is highly related to the user's location, one can use a neural
network to map the user's spatial coordinates to the channel coefficients.
However, these coefficients vary rapidly with location, on the scale of the
wavelength. Since classical neural architectures are biased towards learning
low-frequency functions (spectral bias), such a mapping is notably difficult
to learn. To overcome this limitation, this paper presents a frugal,
model-based network that separates the low-frequency from the high-frequency
components of the target mapping function. This yields a hypernetwork
architecture in which the neural network only learns low-frequency sparse
coefficients in a dictionary of high-frequency components. Simulation results
show that the proposed neural network outperforms standard approaches on
realistic synthetic data.
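The following PyTorch sketch illustrates, at a high level, the kind of decomposition the abstract describes: an MLP predicts smooth, location-dependent coefficients, while the rapid (wavelength-scale) variation is confined to a fixed dictionary of location-dependent atoms. The plane-wave dictionary, layer sizes, and names (`LocToChannel`, `coeff_net`, `n_atoms`, `n_subcarriers`) are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class LocToChannel(nn.Module):
    """Hedged sketch: an MLP outputs low-frequency coefficients, and the channel
    is reconstructed as their combination with high-frequency, location-dependent
    dictionary atoms (here: plane waves). No explicit sparsity penalty is
    included in this sketch."""

    def __init__(self, n_atoms=64, n_subcarriers=32, wavelength=0.1):
        super().__init__()
        # Illustrative dictionary: random 2D propagation directions and a small
        # band of normalized subcarrier frequencies (assumptions, not the
        # paper's exact construction).
        self.register_buffer(
            "directions", nn.functional.normalize(torch.randn(n_atoms, 2), dim=-1)
        )
        self.register_buffer("freqs", torch.linspace(0.95, 1.05, n_subcarriers))
        self.k = 2 * torch.pi / wavelength  # wavenumber
        # Small MLP: 2D location -> real/imaginary parts of the coefficients.
        self.coeff_net = nn.Sequential(
            nn.Linear(2, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 2 * n_atoms),
        )

    def atoms(self, loc):
        # Plane-wave phases oscillate on the wavelength scale: this is the
        # high-frequency content the MLP never has to learn.
        phase = self.k * (loc @ self.directions.T)        # (B, n_atoms)
        phase = phase[..., None] * self.freqs             # (B, n_atoms, n_subcarriers)
        return torch.exp(1j * phase)

    def forward(self, loc):
        c = self.coeff_net(loc)                           # (B, 2 * n_atoms)
        c = torch.complex(c[..., 0::2], c[..., 1::2])     # (B, n_atoms), complex
        # Channel estimate per subcarrier: coefficients times dictionary atoms.
        return torch.einsum("ba,bas->bs", c, self.atoms(loc))

# Usage: predict channels for a batch of 2D user locations (in metres).
model = LocToChannel()
h_hat = model(torch.rand(16, 2))                          # (16, 32) complex tensor
```

Because the oscillatory structure is hard-coded in `atoms`, the function the MLP actually has to learn (location to coefficients) is smooth, which sidesteps the spectral bias discussed in the abstract.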
Related papers
- Model-based learning for multi-antenna multi-frequency location-to-channel mapping [6.067275317776295]
The Implicit Neural Representation literature showed that classical neural architectures are biased towards learning low-frequency content (a toy numerical illustration of this bias appears in the sketch after this list).
This paper leverages the model-based machine learning paradigm to derive a problem-specific neural architecture from a propagation channel model.
arXiv Detail & Related papers (2024-06-17T13:09:25Z) - Histogram Layer Time Delay Neural Networks for Passive Sonar
Classification [58.720142291102135]
A novel method combines a time delay neural network and a histogram layer to incorporate statistical contexts for improved feature learning and underwater acoustic target classification.
The proposed method outperforms the baseline model, demonstrating the utility of incorporating statistical contexts for passive sonar target recognition.
arXiv Detail & Related papers (2023-07-25T19:47:26Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree
Spectral Bias of Neural Networks [79.28094304325116]
Despite the capacity of neural nets to learn arbitrary functions, models trained through gradient descent often exhibit a bias towards "simpler" functions.
We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets.
We propose a new scalable functional regularization scheme that helps the neural network learn higher-degree frequencies.
arXiv Detail & Related papers (2023-05-16T20:06:01Z) - Frequency and Scale Perspectives of Feature Extraction [5.081561820537235]
We analyze the sensitivity of neural networks to frequencies and scales.
We find that neural networks have low- and medium-frequency biases but also prefer different frequency bands for different classes.
These observations lead to the hypothesis that neural networks must learn the ability to extract features at various scales and frequencies.
arXiv Detail & Related papers (2023-02-24T06:37:36Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder that uses hash encoding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - Understanding robustness and generalization of artificial neural
networks through Fourier masks [8.94889125739046]
Recent literature suggests that robust networks with good generalization properties tend to be biased towards processing low frequencies in images.
We develop an algorithm that allows us to learn modulatory masks highlighting the essential input frequencies needed for preserving a trained network's performance.
arXiv Detail & Related papers (2022-03-16T17:32:00Z) - Three-Way Deep Neural Network for Radio Frequency Map Generation and
Source Localization [67.93423427193055]
Monitoring wireless spectrum over spatial, temporal, and frequency domains will become a critical feature in beyond-5G and 6G communication technologies.
In this paper, we present a Generative Adversarial Network (GAN) machine learning model to interpolate irregularly distributed measurements across the spatial domain.
arXiv Detail & Related papers (2021-11-23T22:25:10Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation
Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Centimeter-Level Indoor Localization using Channel State Information
with Recurrent Neural Networks [12.193558591962754]
This paper proposes a neural network method for centimeter-level indoor positioning using real CSI data collected from linear antennas.
It uses the amplitude of the channel response or a correlation matrix as the input, which greatly reduces the data size and suppresses noise.
It also exploits the consistency of the user's motion trajectory via a Recurrent Neural Network (RNN) and signal-to-noise ratio (SNR) information, which further improves the estimation accuracy.
arXiv Detail & Related papers (2020-02-04T17:10:18Z)
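As a toy numerical illustration of the spectral bias mentioned in the abstract and in several related papers above, the sketch below fits a synthetic 1D "channel" that oscillates on the wavelength scale, first with a plain coordinate MLP and then by least-squares regression on a fixed dictionary of the same plane-wave atoms. All quantities (wavelength, number of paths, gains, network size, training budget) are arbitrary toy assumptions, not taken from any of the papers.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 1D "channel": a sum of cosine plane waves, oscillating on the wavelength
# scale (lambda = 0.1 m) as a function of position x. Purely illustrative.
wavelength = 0.1
k = 2 * torch.pi / wavelength
angles = torch.rand(8) * 2 * torch.pi   # random path directions (toy assumption)
gains = torch.randn(8)                  # random path gains (toy assumption)

def channel(x):                         # x: (N, 1) positions in metres
    return (gains * torch.cos(k * x * torch.cos(angles))).sum(-1, keepdim=True)

x_train = torch.rand(2000, 1)
h_train = channel(x_train)

# 1) Plain coordinate MLP: spectral bias makes the rapidly oscillating target
#    slow to fit within a moderate training budget.
mlp = nn.Sequential(nn.Linear(1, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = ((mlp(x_train) - h_train) ** 2).mean()
    loss.backward()
    opt.step()
print(f"plain MLP train MSE: {loss.item():.4f}")

# 2) Fixed dictionary of the same plane-wave atoms: only smooth (here constant)
#    coefficients remain to be estimated, so a linear solve suffices. The
#    dictionary is exact by construction, so the residual is essentially zero.
atoms = torch.cos(k * x_train * torch.cos(angles))    # (N, 8)
coef = torch.linalg.lstsq(atoms, h_train).solution    # (8, 1)
print(f"dictionary fit MSE: {((atoms @ coef - h_train) ** 2).mean().item():.6f}")
```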