Federated Edge Learning with Misaligned Over-The-Air Computation
- URL: http://arxiv.org/abs/2102.13604v1
- Date: Fri, 26 Feb 2021 17:19:56 GMT
- Title: Federated Edge Learning with Misaligned Over-The-Air Computation
- Authors: Yulin Shao, Deniz Gunduz, Soung Chang Liew
- Abstract summary: Over-the-air computation (OAC) is a promising technique to realize fast model aggregation in the uplink of federated edge learning.
How to design the maximum likelihood (ML) estimator in the presence of residual channel-gain mismatch and asynchronies is an open problem.
This paper formulates the problem of misaligned OAC for federated edge learning and puts forth a whitened matched filtering and sampling scheme.
- Score: 36.39188653838991
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over-the-air computation (OAC) is a promising technique to realize fast model
aggregation in the uplink of federated edge learning. OAC, however, hinges on
accurate channel-gain precoding and strict synchronization among the edge
devices, which are challenging in practice. As such, how to design the maximum
likelihood (ML) estimator in the presence of residual channel-gain mismatch and
asynchronies is an open problem. To fill this gap, this paper formulates the
problem of misaligned OAC for federated edge learning and puts forth a whitened
matched filtering and sampling scheme to obtain oversampled, but independent,
samples from the misaligned and overlapped signals. Given the whitened samples,
a sum-product ML estimator and an aligned-sample estimator are devised to
estimate the arithmetic sum of the transmitted symbols. In particular, the
computational complexity of our sum-product ML estimator is linear in the
packet length and hence is significantly lower than the conventional ML
estimator. Extensive simulations on the test accuracy versus the average
received energy per symbol to noise power spectral density ratio (EsN0) yield
two main results: 1) In the low EsN0 regime, the aligned-sample estimator can
achieve superior test accuracy provided that the phase misalignment is
non-severe. In contrast, the ML estimator does not work well due to the error
propagation and noise enhancement in the estimation process. 2) In the high
EsN0 regime, the ML estimator attains the optimal learning performance
regardless of the severity of phase misalignment. On the other hand, the
aligned-sample estimator suffers from a test-accuracy loss caused by phase
misalignment.
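To make the setting concrete, below is a minimal Python sketch of misaligned over-the-air aggregation with an aligned-sample-style estimator. It is an illustration under simplifying assumptions (rectangular pulses, real-valued symbols, integer-sample delays, no phase misalignment), not the paper's whitened matched filtering or sum-product ML estimator; all variable names are ours.

```python
import numpy as np

# Toy misaligned over-the-air aggregation (illustrative only; not the
# paper's whitened matched filtering or sum-product ML estimator).
rng = np.random.default_rng(0)
D, M, OS = 4, 64, 8                      # devices, symbols per packet, oversampling
delays = rng.integers(0, OS // 2, D)     # per-device time offset in samples (< T/2)
gains = 1.0 + 0.1 * rng.standard_normal(D)   # residual channel-gain mismatch
x = rng.standard_normal((D, M))          # model-update symbols per device

# Superpose rectangular-pulse packets on the oversampled grid: device d's
# m-th symbol occupies samples [delays[d] + m*OS, delays[d] + (m+1)*OS).
r = np.zeros((M + 1) * OS)
for d in range(D):
    for m in range(M):
        s = delays[d] + m * OS
        r[s : s + OS] += gains[d] * x[d, m]
r += 0.05 * rng.standard_normal(r.size)  # receiver noise

# Aligned-sample estimator: inside the m-th symbol interval, the samples at
# offsets [max(delays), OS) carry every device's m-th symbol simultaneously.
lo = delays.max()
est = np.array([r[m * OS + lo : (m + 1) * OS].mean() for m in range(M)])

target = (gains[:, None] * x).sum(axis=0)    # arithmetic sum OAC aims to recover
print("RMSE:", float(np.sqrt(np.mean((est - target) ** 2))))
```

Averaging only the sample instants where every device's m-th symbol overlaps recovers the arithmetic sum without deconvolving the misaligned tails; the paper's ML estimators instead work on all of the oversampled, whitened samples.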
Related papers
- Fundamental limits of Non-Linear Low-Rank Matrix Estimation [18.455890316339595]
Bayes-optimal performances are characterized by an equivalent Gaussian model with an effective prior.
We show that to reconstruct the signal accurately, one requires a signal-to-noise ratio growing as $N^{\frac{1}{2}(1-1/k_F)}$, where $k_F$ is the first non-zero Fisher information coefficient of the function (a worked instance of this exponent appears after this list).
arXiv Detail & Related papers (2024-03-07T05:26:52Z)
- Quantum metrology in a lossless Mach-Zehnder interferometer using entangled photon inputs [0.0]
We estimate the phase uncertainty in a noiseless Mach-Zehnder interferometer using photon-counting detection.
We first devise an estimation and measurement strategy that yields the lowest phase uncertainty for a single measurement.
arXiv Detail & Related papers (2023-10-03T13:43:02Z)
- Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier.
Most existing methods can only optimize the PAUC approximately, leading to uncontrollable bias.
We present a simpler reformulation of the PAUC problem via distributionally robust optimization.
arXiv Detail & Related papers (2022-10-08T08:26:22Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to non-linear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) comes from applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Differentiable Annealed Importance Sampling and the Perils of Gradient Noise [68.44523807580438]
Annealed importance sampling (AIS) and related algorithms are highly effective tools for marginal likelihood estimation.
Differentiability is a desirable property as it would admit the possibility of optimizing marginal likelihood as an objective.
We propose a differentiable algorithm by abandoning Metropolis-Hastings steps, which further unlocks mini-batch computation.
arXiv Detail & Related papers (2021-07-21T17:10:14Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- One-Bit Compressed Sensing via One-Shot Hard Thresholding [7.594050968868919]
The problem of 1-bit compressed sensing is to estimate a sparse signal from a small number of binary measurements.
We present a novel and concise analysis that moves away from the widely used non-constrained notion of width (a toy one-shot baseline is sketched after this list).
arXiv Detail & Related papers (2020-07-07T17:28:03Z)
- Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances.
We propose localized debiased machine learning (LDML), which avoids this burdensome step.
arXiv Detail & Related papers (2019-12-30T14:42:52Z)
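As a worked instance of the threshold quoted in the first related paper above (nothing beyond substituting values of $k_F$ into the stated exponent):

```latex
% Substituting sample values of k_F into the stated SNR threshold:
\[
  \mathrm{SNR} \sim N^{\frac{1}{2}\left(1 - \frac{1}{k_F}\right)}:
  \qquad
  k_F = 2 \;\Rightarrow\; N^{1/4}, \qquad
  k_F = 3 \;\Rightarrow\; N^{1/3}, \qquad
  k_F \to \infty \;\Rightarrow\; N^{1/2}.
\]
```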
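The one-shot flavor of the one-bit compressed sensing entry above can be illustrated with a generic single-step hard-thresholding baseline: backproject the sign measurements, keep the largest-magnitude coordinates, and renormalize. This is a standard estimator for the problem, not necessarily the cited paper's exact algorithm; the sizes and names are ours.

```python
import numpy as np

# Generic one-step hard-thresholding baseline for 1-bit compressed sensing:
# recover the direction of a sparse unit vector x from y = sign(A @ x).
# (Illustrative baseline; not claimed to be the cited paper's algorithm.)
rng = np.random.default_rng(1)
n, m, s = 200, 1500, 5                  # dimension, binary measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)                  # 1-bit measurements lose the scale anyway

A = rng.standard_normal((m, n))
y = np.sign(A @ x)                      # one bit per measurement

z = A.T @ y / m                         # backprojection: E[z] is proportional to x
support = np.argsort(np.abs(z))[-s:]    # keep the s largest-magnitude entries
x_hat = np.zeros(n)
x_hat[support] = z[support]
x_hat /= np.linalg.norm(x_hat)          # estimate the direction of x

print("correlation <x, x_hat>:", float(x @ x_hat))
```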
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all content) and is not responsible for any consequences arising from its use.