SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network
- URL: http://arxiv.org/abs/2106.05490v1
- Date: Thu, 10 Jun 2021 04:21:20 GMT
- Title: SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network
- Authors: Ryan Dreifuerst, Robert W. Heath Jr.
- Abstract summary: We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
- Score: 79.04274563889548
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The detection and estimation of sinusoids is a fundamental signal processing
task for many applications related to sensing and communications. While
algorithms have been proposed for this setting, quantization is a critical, but
often ignored modeling effect. In wireless communications, estimation with low
resolution data converters is relevant for reduced power consumption in
wideband receivers. Similarly, low resolution sampling in imaging and spectrum
sensing allows for efficient data collection. In this work, we propose
SignalNet, a neural network architecture that detects the number of sinusoids
and estimates their parameters from quantized in-phase and quadrature samples.
We incorporate signal reconstruction internally as domain knowledge within the
network to enhance learning and surpass traditional algorithms in mean squared
error and Chamfer error. We introduce a worst-case learning threshold for
comparing the results of our network relative to the underlying data
distributions. This threshold provides insight into why neural networks tend to
outperform traditional methods and into the learned relationships between the
input and output distributions. In simulation, we find that our algorithm is
always able to surpass the threshold for three-bit data but often cannot exceed
the threshold for one-bit data. We use the learning threshold to explain, in
the one-bit case, how our estimators learn to minimize the distributional loss,
rather than learn features from the data.
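As a rough illustration of the quantization setting described in the abstract (this is not the authors' code, and the signal parameters are made up), the sketch below quantizes the in-phase and quadrature components of a noisy-free sinusoid at one-bit and three-bit resolution, the two regimes the paper compares:

```python
import numpy as np

def quantize_iq(x, bits):
    """Uniformly quantize the real and imaginary parts of x to 2**bits levels.

    One-bit quantization keeps only the sign of each component; higher
    resolutions use a mid-rise uniform quantizer clipped to [-1, 1].
    """
    if bits == 1:
        return np.sign(x.real) + 1j * np.sign(x.imag)
    levels = 2 ** bits
    step = 2.0 / levels
    q = lambda v: np.clip(np.floor(v / step) + 0.5,
                          -levels / 2 + 0.5, levels / 2 - 0.5) * step
    return q(x.real) + 1j * q(x.imag)

# A single complex sinusoid; amplitude, normalized frequency, and phase
# are illustrative values, not from the paper.
n = np.arange(64)
x = 0.8 * np.exp(1j * (2 * np.pi * 0.19 * n + 0.3))

y1 = quantize_iq(x, bits=1)   # sign information only
y3 = quantize_iq(x, bits=3)   # 8 levels per component

# Three-bit samples track the waveform much more closely than one-bit samples.
print(np.mean(np.abs(x - y1) ** 2) > np.mean(np.abs(x - y3) ** 2))  # prints: True
```

The gap in reconstruction error between the two bit depths mirrors the paper's finding that the learning threshold is consistently surpassed with three-bit data but often not with one-bit data.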
Related papers
- QGait: Toward Accurate Quantization for Gait Recognition with Binarized Input [17.017127559393398]
We propose a differentiable soft quantizer, which better simulates the gradient of the round function during backpropagation.
This enables the network to learn from subtle input perturbations.
We further refine the training strategy to ensure convergence while simulating quantization errors.
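A minimal sketch of the soft-quantizer idea (our own illustration, not the QGait implementation): replace the non-differentiable round with a temperature-controlled sigmoid so that gradients survive backpropagation through the quantization step:

```python
import numpy as np

def soft_round(x, temperature=0.1):
    """Differentiable surrogate for round().

    A sigmoid smooths the jump at each half-integer boundary; as the
    temperature approaches zero this converges to hard rounding.
    """
    frac = x - np.floor(x)
    return np.floor(x) + 1.0 / (1.0 + np.exp(-(frac - 0.5) / temperature))

x = np.array([-1.8, -0.3, 0.2, 0.7])
hard = np.round(x)
soft = soft_round(x, temperature=0.005)  # low temperature: nearly hard rounding

# Unlike round(), the soft quantizer has a nonzero derivative everywhere,
# so subtle input perturbations still produce a usable gradient signal.
eps = 1e-4
grad = (soft_round(0.3 + eps) - soft_round(0.3 - eps)) / (2 * eps)
```

In practice such a surrogate is used only in the backward pass (or annealed toward hard rounding) so that inference still applies the exact quantizer.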
arXiv Detail & Related papers (2024-05-22T17:34:18Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Performance Bounds for Neural Network Estimators: Applications in Fault Detection [2.388501293246858]
We exploit recent results in quantifying the robustness of neural networks to construct and tune a model-based anomaly detector.
In tuning, we specifically provide upper bounds on the rate of false alarms expected under normal operation.
arXiv Detail & Related papers (2021-03-22T19:23:08Z)
- Reduced-Order Neural Network Synthesis with Robustness Guarantees [0.0]
Machine learning algorithms are being adapted to run locally on on-board, potentially hardware-limited devices to improve user privacy, reduce latency and improve energy efficiency.
To address this issue, a method to automatically synthesize reduced-order neural networks (having fewer neurons) approximating the input/output mapping of a larger one is introduced.
Worst-case bounds for this approximation error are obtained, and the approach can be applied to a wide variety of neural network architectures.
arXiv Detail & Related papers (2021-02-18T12:03:57Z)
- A light neural network for modulation detection under impairments [0.0]
We present a neural network architecture able to efficiently detect the modulation scheme in a portion of an I/Q signal.
The number of parameters does not depend on the signal duration, which allows processing streams of data.
We have generated a dataset based on the simulation of impairments that the propagation channel and the demodulator can bring to recorded I/Q signals.
arXiv Detail & Related papers (2020-03-27T07:26:42Z)
- A Compressive Sensing Approach for Federated Learning over Massive MIMO Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)
- Data-Driven Symbol Detection via Model-Based Machine Learning [117.58188185409904]
We review a data-driven framework for symbol detection design which combines machine learning (ML) and model-based algorithms.
In this hybrid approach, well-known channel-model-based algorithms are augmented with ML-based algorithms to remove their channel-model-dependence.
Our results demonstrate that these techniques can yield near-optimal performance of model-based algorithms without knowing the exact channel input-output statistical relationship.
arXiv Detail & Related papers (2020-02-14T06:58:27Z)
- On transfer learning of neural networks using bi-fidelity data for uncertainty propagation [0.0]
We explore the application of transfer learning techniques using training data generated from both high- and low-fidelity models.
In the former approach, a neural network model mapping the inputs to the outputs of interest is trained based on the low-fidelity data.
The high-fidelity data is then used to adapt the parameters of the upper layer(s) of the low-fidelity network, or train a simpler neural network to map the output of the low-fidelity network to that of the high-fidelity model.
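The second adaptation strategy described above can be sketched in a few lines (the models and data here are hypothetical stand-ins, not the paper's): fit a small corrector that maps the low-fidelity model's output to the high-fidelity quantity of interest, using far fewer high-fidelity samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low- and high-fidelity models of the same quantity of interest.
def low_fidelity(x):
    return np.sin(x)            # cheap, biased approximation

def high_fidelity(x):
    return np.sin(x) + 0.1 * x  # expensive "truth"

# Abundant low-fidelity data would train the base network; here we only
# illustrate the adaptation step, which needs just a few high-fidelity samples.
x_hi = rng.uniform(-3, 3, size=10)
A = np.column_stack([low_fidelity(x_hi), x_hi, np.ones_like(x_hi)])
coef, *_ = np.linalg.lstsq(A, high_fidelity(x_hi), rcond=None)

# The corrector maps the low-fidelity output (plus the input itself)
# to a high-fidelity estimate.
def corrected(x):
    return coef[0] * low_fidelity(x) + coef[1] * x + coef[2]

x_test = np.linspace(-3, 3, 50)
err = np.max(np.abs(corrected(x_test) - high_fidelity(x_test)))
```

In the paper this corrector is itself a neural network (or retrained upper layers); a linear least-squares map is used here only to keep the sketch self-contained.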
arXiv Detail & Related papers (2020-02-11T15:56:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.