STRONG: Synchronous and asynchronous RObust Network localization, under
Non-Gaussian noise
- URL: http://arxiv.org/abs/2110.00594v1
- Date: Fri, 1 Oct 2021 18:01:28 GMT
- Title: STRONG: Synchronous and asynchronous RObust Network localization, under
Non-Gaussian noise
- Authors: Claudia Soares, João Gomes
- Abstract summary: Real-world network applications must cope with failing nodes, malicious attacks and data classified as outliers.
Our work addresses these concerns in the scope of the sensor network localization algorithms.
A major highlight of our contribution is that we pay no price for provably distributed computation, neither in accuracy nor in communication cost or speed.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world network applications must cope with failing nodes, malicious
attacks, or nodes facing corrupted data, i.e., data classified as outliers. Our work
addresses these concerns in the scope of the sensor network localization
problem where, despite the abundance of technical literature, prior research
seldom considered outlier data. We propose robust, fast, and distributed
network localization algorithms, resilient to high-power noise, but also
precise under regular Gaussian noise. We use a Huber M-estimator, thus
obtaining a robust (but nonconvex) optimization problem. We convexify and
change the problem representation, to allow for distributed robust localization
algorithms: a synchronous distributed method that has optimal convergence rate
and an asynchronous one with proven convergence guarantees. A major highlight
of our contribution is that we pay no price for provably distributed
computation, neither in accuracy nor in communication cost or convergence
speed. Simulations showcase the superior performance of our
algorithms, both in the presence of outliers and under regular Gaussian noise:
our method exceeds the accuracy of alternative approaches, distributed and
centralized, even under heavy additive and multiplicative outlier noise.
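To make the paper's key ingredient concrete, here is a minimal, centralized sketch of Huber-robust range-based localization in Python/NumPy. It runs plain gradient descent on the nonconvex Huber objective; it is not the paper's convexified distributed algorithm, and the anchors, the Huber threshold delta, and the step size are all illustrative assumptions.

```python
import numpy as np

def huber_grad(r, delta):
    """Derivative of the Huber loss: identity near zero, clipped in the tails."""
    return np.clip(r, -delta, delta)

def localize(anchors, dists, delta=0.5, steps=2000, lr=0.01):
    """Gradient descent on sum_i huber(||x - a_i|| - d_i)."""
    x = anchors.mean(axis=0)                 # crude initial guess
    for _ in range(steps):
        diffs = x - anchors                  # shape (m, 2)
        norms = np.linalg.norm(diffs, axis=1)
        resid = norms - dists                # range residuals
        # chain rule: grad of huber(||x - a_i|| - d_i) w.r.t. x
        g = (huber_grad(resid, delta) / np.maximum(norms, 1e-12))[:, None] * diffs
        x = x - lr * g.sum(axis=0)
    return x

rng = np.random.default_rng(0)
anchors = rng.uniform(0.0, 10.0, size=(6, 2))
truth = np.array([4.0, 6.0])
dists = np.linalg.norm(anchors - truth, axis=1) + 0.05 * rng.standard_normal(6)
dists[0] += 5.0                              # one heavily corrupted range (outlier)
print(localize(anchors, dists))              # stays near `truth` despite the outlier
```

The Huber loss is quadratic for small residuals, so regular Gaussian noise is treated like least squares, but it grows only linearly for large residuals, which caps the influence of the corrupted range in the example.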
Related papers
- A quasi-Bayesian sequential approach to deconvolution density estimation [7.10052009802944]
Density deconvolution addresses the estimation of the unknown density function $f$ of a random signal from data.
We consider the problem of density deconvolution in a streaming or online setting where noisy data arrive progressively.
By relying on a quasi-Bayesian sequential approach, we obtain estimates of $f$ that are easy to evaluate.
arXiv Detail & Related papers (2024-08-26T16:40:04Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods (a consensus-ADMM sketch follows this entry).
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
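For context on the ADMM machinery such samplers build on, here is a minimal consensus-ADMM sketch for a toy distributed least-squares problem. This is plain optimization, not the paper's sampling scheme, and the quadratic subproblems, the penalty rho, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, d, rho = 4, 3, 1.0
w_true = rng.standard_normal(d)
# each agent privately holds a least-squares term f_i(x) = 0.5*||A_i x - b_i||^2
A = [rng.standard_normal((20, d)) for _ in range(n_agents)]
b = [Ai @ w_true + 0.1 * rng.standard_normal(20) for Ai in A]

x = [np.zeros(d) for _ in range(n_agents)]   # local primal variables
u = [np.zeros(d) for _ in range(n_agents)]   # scaled dual variables
z = np.zeros(d)                              # shared consensus variable

for _ in range(100):
    for i in range(n_agents):
        # x-update: argmin_x f_i(x) + (rho/2)*||x - z + u_i||^2 (closed form here)
        x[i] = np.linalg.solve(A[i].T @ A[i] + rho * np.eye(d),
                               A[i].T @ b[i] + rho * (z - u[i]))
    z = np.mean([x[i] + u[i] for i in range(n_agents)], axis=0)  # z-update
    for i in range(n_agents):
        u[i] += x[i] - z                     # dual (ascent) update

print(np.round(z, 3), np.round(w_true, 3))   # consensus iterate approaches w_true
```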
- Optimizing the Noise in Self-Supervised Learning: from Importance Sampling to Noise-Contrastive Estimation [80.07065346699005]
It is widely assumed that the optimal noise distribution should be made equal to the data distribution, as in Generative Adversarial Networks (GANs).
We turn to Noise-Contrastive Estimation, which grounds this self-supervised task as an estimation problem of an energy-based model of the data.
We soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to choosing the noise distribution equal to the data's (a minimal NCE example follows this entry).
arXiv Detail & Related papers (2023-01-23T19:57:58Z)
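As a concrete reference for the two noise-contrastive papers here, the sketch below runs plain NCE: the mean of a one-dimensional Gaussian model is estimated by logistic discrimination between data and noise samples. The model family, the noise choice N(0, 2), and the step size are illustrative assumptions, not taken from either paper.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(3.0, 1.0, size=5000)       # samples from the true model N(3, 1)
noise = rng.normal(0.0, 2.0, size=5000)      # chosen noise distribution N(0, 2)

def log_model(u, theta):                     # log density of N(theta, 1)
    return -0.5 * (u - theta) ** 2 - 0.5 * np.log(2 * np.pi)

def log_noise(u):                            # log density of N(0, 2)
    return -0.5 * (u / 2.0) ** 2 - np.log(2.0) - 0.5 * np.log(2 * np.pi)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

theta = 0.0
for _ in range(500):                         # gradient ascent on the NCE objective
    gx = log_model(data, theta) - log_noise(data)    # classifier logit on data
    gy = log_model(noise, theta) - log_noise(noise)  # classifier logit on noise
    # d/dtheta [ mean log sigmoid(gx) + mean log(1 - sigmoid(gy)) ]
    grad = np.mean((1.0 - sigmoid(gx)) * (data - theta)) \
         - np.mean(sigmoid(gy) * (noise - theta))
    theta += 0.1 * grad
print(theta)                                 # close to the true mean 3.0
```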
- The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from this assumption can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z)
- Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning [58.79085525115987]
Local methods are one of the promising approaches to reduce communication time.
We show that the communication complexity is better than that of non-local methods when the heterogeneity of the local datasets is smaller than the smoothness of the local loss.
arXiv Detail & Related papers (2022-02-12T15:12:17Z)
- On Convergence of Federated Averaging Langevin Dynamics [22.013125418713763]
We propose a federated averaging Langevin algorithm (FA-LD) for uncertainty quantification and mean predictions with distributed clients.
We develop theoretical guarantees for FA-LD for strongly log-concave distributions with non-i.i.d. data.
We show convergence results based on different averaging schemes where only partial device updates are available (a toy local-Langevin sketch follows this entry).
arXiv Detail & Related papers (2021-12-09T18:54:29Z)
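The sketch below illustrates the local-Langevin-plus-averaging mechanic behind FA-LD on a one-dimensional Gaussian posterior. It is not the authors' exact algorithm, and it applies no correction for the bias introduced by averaging independently-noised chains, the very effect the paper's theory quantifies; all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
K, n, eta = 5, 200, 1e-3                     # clients, points per client, step size
rounds, local_steps = 200, 10
data = [rng.normal(1.5, 1.0, size=n) for _ in range(K)]   # client data shards

def grad_k(theta, yk):
    """Client k's share of grad log posterior: local N(theta, 1) likelihood
    plus 1/K of the N(0, 10) prior."""
    return np.sum(yk - theta) - (theta / 10.0) / K

theta = np.zeros(K)                          # one Langevin chain per client
samples = []
for _ in range(rounds):
    for k in range(K):
        for _ in range(local_steps):         # local unadjusted Langevin steps
            # scale by K so the *average* drift matches the full posterior gradient
            g = K * grad_k(theta[k], data[k])
            theta[k] += 0.5 * eta * g + np.sqrt(eta) * rng.standard_normal()
    theta[:] = theta.mean()                  # federated averaging of the chains
    samples.append(theta[0])

burn_in = len(samples) // 2
print(np.mean(samples[burn_in:]), np.std(samples[burn_in:]))  # approx. posterior
```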
- Acceleration in Distributed Optimization Under Similarity [72.54787082152278]
We study distributed (strongly convex) optimization problems over a network of agents, with no centralized nodes.
An $\varepsilon$-solution is achieved in $\tilde{\mathcal{O}}\big(\sqrt{\tfrac{\beta/\mu}{1-\rho}}\log(1/\varepsilon)\big)$ communication steps.
This rate matches, for the first time and up to poly-log factors, lower communication complexity bounds for distributed gossip algorithms applied to the class of problems of interest.
arXiv Detail & Related papers (2021-10-24T04:03:00Z)
- On Accelerating Distributed Convex Optimizations [0.0]
This paper studies a distributed multi-agent convex optimization problem.
We show that the proposed algorithm converges linearly, with a better rate of convergence than traditional and adaptive gradient-descent methods.
We demonstrate our algorithm's superior performance compared to prominent distributed algorithms on real-world logistic regression problems (a baseline decentralized-gradient sketch follows this entry).
arXiv Detail & Related papers (2021-08-19T13:19:54Z)
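As a baseline for the kind of method such accelerated schemes improve on, here is a minimal decentralized gradient descent (DGD) sketch over a ring of agents, each holding private least-squares data. The ring topology, gossip weights, and stepsize are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, lr = 6, 3, 0.02
w_true = rng.standard_normal(d)
A = [rng.standard_normal((30, d)) for _ in range(n)]   # private data per agent
b = [Ai @ w_true + 0.05 * rng.standard_normal(30) for Ai in A]

# doubly stochastic gossip matrix for a ring: average with the two neighbours
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

x = np.zeros((n, d))                                   # one iterate per agent
for _ in range(3000):
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) / 30 for i in range(n)])
    x = W @ x - lr * grads         # one gossip-averaging step + local gradient step

print(np.round(x[0], 3), np.round(w_true, 3))          # agents approach w_true
```

With a constant stepsize, DGD reaches only a neighbourhood of the optimum; closing that gap at near-centralized rates is what accelerated distributed methods target.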
- Kernel k-Means, By All Means: Algorithms and Strong Consistency [21.013169939337583]
Kernel $k$-means clustering is a powerful tool for unsupervised learning of non-linear data.
In this paper, we generalize results leveraging a general family of means to combat sub-optimal local solutions.
Our algorithm makes use of majorization-minimization (MM) to better solve this non-linear separation problem (a plain kernel $k$-means baseline is sketched after this entry).
arXiv Detail & Related papers (2020-11-12T16:07:18Z)
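For reference, below is a plain Lloyd-style kernel $k$-means sketch, using the kernel trick to compute feature-space distances; the paper's MM algorithm over a general family of means generalizes this baseline. The RBF kernel and its bandwidth are illustrative assumptions.

```python
import numpy as np

def kernel_kmeans(K, k, iters=50, seed=0):
    """Lloyd-style kernel k-means. Feature-space distance to a cluster mean:
    ||phi(x_i) - mu_c||^2 = K_ii - 2*mean_{j in c} K_ij + mean_{j,l in c} K_jl."""
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(0, k, size=n)
    for _ in range(iters):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            idx = np.where(labels == c)[0]
            if idx.size == 0:
                continue                      # empty cluster: leave at +inf
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# two concentric rings: not linearly separable, separable with an RBF kernel
rng = np.random.default_rng(5)
t = rng.uniform(0.0, 2.0 * np.pi, 200)
r = np.r_[np.ones(100), 3.0 * np.ones(100)] + 0.1 * rng.standard_normal(200)
X = np.c_[r * np.cos(t), r * np.sin(t)]
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Kmat = np.exp(-sq / 2.0)                      # RBF kernel, bandwidth assumed
labels = kernel_kmeans(Kmat, 2)
print(labels[:100].sum(), labels[100:].sum()) # ideally 0 and 100 (or 100 and 0)
```

Depending on the random initialization, this baseline can get trapped in a sub-optimal local solution, which is exactly the failure mode the paper's general family of means is designed to combat.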
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework based on Bayesian Optimization is proposed.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, low false-alarm rate, and recall (a minimal Bayesian-optimization loop is sketched below).
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
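To make the Bayesian Optimization loop concrete, here is a minimal sketch with a scikit-learn Gaussian process surrogate and an expected-improvement acquisition, tuning a single hypothetical detector knob; the objective function is a stand-in, not the paper's ISCX 2012 evaluation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(c):
    """Stand-in for a detector's validation error as a function of one knob."""
    return (np.sin(3.0 * c) + 0.6 * c - 1.0) ** 2

rng = np.random.default_rng(6)
X = rng.uniform(0.0, 3.0, size=(4, 1))       # a few random initial evaluations
y = objective(X).ravel()
grid = np.linspace(0.0, 3.0, 400).reshape(-1, 1)

for _ in range(15):                          # BO loop: fit GP, maximise EI, evaluate
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                                  normalize_y=True)
    gp.fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print(X[np.argmin(y)], y.min())              # best knob setting found
```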
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.