Federated Smoothing ADMM for Localization
- URL: http://arxiv.org/abs/2503.09497v1
- Date: Wed, 12 Mar 2025 16:01:34 GMT
- Title: Federated Smoothing ADMM for Localization
- Authors: Reza Mirzaeifard, Ashkan Moradi, Masahiro Yukawa, Stefan Werner
- Abstract summary: Federated systems are characterized by distributed data, non-convexity, and non-smoothness. We propose a robust algorithm to tackle the scalability and outlier issues inherent in such environments. To validate the reliability of the proposed algorithm, we show that it converges to a stationary point. Numerical simulations highlight its superior performance in convergence speed and outlier resilience compared to existing state-of-the-art localization methods.
- Score: 9.25126455172971
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper addresses the challenge of localization in federated settings, which are characterized by distributed data, non-convexity, and non-smoothness. To tackle the scalability and outlier issues inherent in such environments, we propose a robust algorithm that employs an $\ell_1$-norm formulation within a novel federated ADMM framework. This approach addresses the problem by integrating an iterative smooth approximation for the total variation consensus term and employing a Moreau envelope approximation for the convex function that appears in a subtracted form. This transformation ensures that the problem is smooth and weakly convex in each iteration, which results in enhanced computational efficiency and improved estimation accuracy. The proposed algorithm supports asynchronous updates and multiple client updates per iteration, which ensures its adaptability to real-world federated systems. To validate the reliability of the proposed algorithm, we show that the method converges to a stationary point, and numerical simulations highlight its superior performance in convergence speed and outlier resilience compared to existing state-of-the-art localization methods.
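As a concrete illustration of the mechanics described in the abstract, the sketch below simulates a synchronous consensus-ADMM loop with an annealed smooth surrogate for the $\ell_1$ range loss. It is a minimal NumPy sketch under stated assumptions, not the authors' algorithm: the helper names (`smooth_abs_grad`, `local_update`), the Huber-like surrogate, the step sizes, and the annealing schedule are all illustrative, and the Moreau-envelope treatment of the subtracted convex term (recall $g_\gamma(x) = \min_y g(y) + \tfrac{1}{2\gamma}\lVert x - y\rVert^2$) as well as asynchronous, multiple-client updates are omitted.

```python
import numpy as np

def smooth_abs_grad(r, mu):
    """Gradient of a Huber-like smooth surrogate for |r| (smoothing mu > 0)."""
    return np.clip(r / mu, -1.0, 1.0)

def local_update(x, z, lam, anchors, dists, mu, rho, steps=50, lr=1e-2):
    """Client step: gradient descent on the smoothed l1 range loss plus the
    augmented-Lagrangian coupling to the consensus variable z."""
    for _ in range(steps):
        diff = x - anchors                                  # (n_anchors, 2)
        rng_est = np.linalg.norm(diff, axis=1)              # estimated ranges
        res = rng_est - dists                               # range residuals
        dirs = diff / rng_est[:, None]                      # unit directions
        grad = dirs.T @ smooth_abs_grad(res, mu)            # smoothed l1 part
        grad += lam + rho * (x - z)                         # ADMM coupling
        x = x - lr * grad
    return x

rng = np.random.default_rng(0)
true_pos = np.array([2.0, 3.0])
K = 5                                                       # clients
anchors = [rng.uniform(-5.0, 5.0, size=(4, 2)) for _ in range(K)]
dists = []
for A in anchors:                                           # noisy ranges
    d = np.linalg.norm(A - true_pos, axis=1) + 0.05 * rng.standard_normal(4)
    d[0] += 3.0                                             # one outlier each
    dists.append(d)

x = [np.zeros(2) for _ in range(K)]                         # local estimates
lam = [np.zeros(2) for _ in range(K)]                       # dual variables
z = np.zeros(2)                                             # consensus point
rho, mu = 1.0, 0.5
for _ in range(30):
    for k in range(K):                                      # client updates
        x[k] = local_update(x[k], z, lam[k], anchors[k], dists[k], mu, rho)
    z = np.mean([x[k] + lam[k] / rho for k in range(K)], axis=0)
    for k in range(K):                                      # dual ascent
        lam[k] += rho * (x[k] - z)
    mu = max(0.9 * mu, 1e-3)                                # anneal smoothing
print("consensus estimate:", z, "true position:", true_pos)
```

Annealing $\mu$ toward zero mirrors the iterative smoothing idea: each round minimizes a smooth, weakly convex surrogate whose minimizer approaches that of the original non-smooth problem, while the outlier-corrupted ranges motivate the $\ell_1$-type loss.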
Related papers
- Stochastic Optimization with Optimal Importance Sampling [49.484190237840714]
We propose an iterative algorithm that jointly updates the decision variable and the IS distribution without requiring time-scale separation between the two.
Our method achieves the lowest possible variance and guarantees global convergence under convexity of the objective and mild assumptions on the IS distribution family.
arXiv Detail & Related papers (2025-04-04T16:10:18Z) - Communication-Efficient Stochastic Distributed Learning [3.2923780772605595]
We address distributed learning problems, both non-convex and convex, over undirected networks.
In particular, we design a novel algorithm based on the distributed Alternating Direction Method of Multipliers (ADMM) to address the challenges of high communication costs.
arXiv Detail & Related papers (2025-01-23T10:05:23Z) - A unified consensus-based parallel ADMM algorithm for high-dimensional
regression with combined regularizations [3.280169909938912]
The parallel alternating direction method of multipliers (ADMM) is widely recognized for its effectiveness in handling large-scale distributed datasets.
A financial example serves to demonstrate the reliability, stability, and scalability of the proposed algorithms.
arXiv Detail & Related papers (2023-11-21T03:30:38Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes conditional stochastic optimization algorithms for distributed federated learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Robust Networked Federated Learning for Localization [7.332862402432447]
This paper addresses the problem of localization, which is inherently non-smooth and non-convex, in a federated setting where the data is distributed across a multitude of devices.
We propose a method that adopts an $L_1$-norm robust formulation within a distributed subgradient framework, explicitly designed to handle these obstacles.
arXiv Detail & Related papers (2023-08-31T13:54:37Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - Mobilizing Personalized Federated Learning in Infrastructure-Less and
Heterogeneous Environments via Random Walk Stochastic ADMM [0.14597673707346284]
This paper explores the challenges of implementing Federated Learning (FL) in practical scenarios featuring isolated nodes with data heterogeneity.
To overcome these challenges, we propose a novel mobilizing personalized FL approach, which aims to facilitate mobility and resilience.
We develop a novel optimization algorithm called Random Walk Stochastic Alternating Direction Method of Multipliers (RWSADMM).
arXiv Detail & Related papers (2023-04-25T03:00:18Z) - A robust method for reliability updating with equality information using
sequential adaptive importance sampling [8.254850675268957]
Reliability updating refers to a problem that integrates the Bayesian updating technique with structural reliability analysis.
This paper proposes an innovative method called RU-SAIS, which combines elements of sequential importance sampling and K-means clustering.
Results show that RU-SAIS achieves a more accurate and robust estimator of the posterior failure probability than the existing methods.
arXiv Detail & Related papers (2023-03-08T12:55:40Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - Tackling the Objective Inconsistency Problem in Heterogeneous Federated
Optimization [93.78811018928583]
This paper provides a framework to analyze the convergence of federated heterogeneous optimization algorithms.
We propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
arXiv Detail & Related papers (2020-07-15T05:01:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.