Accelerated Methods with Complexity Separation Under Data Similarity for Federated Learning Problems
- URL: http://arxiv.org/abs/2601.08614v1
- Date: Tue, 13 Jan 2026 14:59:22 GMT
- Title: Accelerated Methods with Complexity Separation Under Data Similarity for Federated Learning Problems
- Authors: Dmitry Bylinkin, Sergey Skorik, Dmitriy Bystrov, Leonid Berezin, Aram Avetisyan, Aleksandr Beznosikov
- Abstract summary: Heterogeneity in data distribution poses a challenge in many modern federated learning tasks. We formalize it as an optimization problem involving a computationally heavy composite under data similarity. An optimal algorithm is proposed for the convex case.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heterogeneity within data distribution poses a challenge in many modern federated learning tasks. We formalize it as an optimization problem involving a computationally heavy composite under data similarity. By employing different sets of assumptions, we present several approaches to develop communication-efficient methods. An optimal algorithm is proposed for the convex case. The constructed theory is validated through a series of experiments across various problems.
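The composite problem under data similarity mentioned in the abstract is commonly written as follows; this is a sketch of the standard formulation from the distributed-optimization literature (symbols $M$, $f_m$, $r$, $\delta$ are assumptions for illustration, and the paper's exact assumptions may differ):

```latex
% Finite-sum composite problem over M devices:
\min_{x \in \mathbb{R}^d} \; f(x) := \frac{1}{M} \sum_{m=1}^{M} f_m(x) + r(x),
% where f_m is the smooth local loss on device m and r is the
% (possibly computationally heavy) composite term.
%
% \delta-similarity of local data: for all x and all devices m,
\left\| \nabla^2 f_m(x) - \nabla^2 f(x) \right\| \le \delta .
```

Under such a similarity condition, communication complexity can typically be made to depend on $\delta$ rather than on the global smoothness constant, which is the usual motivation for communication-efficient methods in this setting.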
Related papers
- Similarity-based fuzzy clustering of scientific articles: potentials and challenges from mathematical and computational perspectives
Fuzzy clustering plays a vital role in analyzing publication data. This problem can be formulated as a constrained optimization model, where the goal is to minimize the discrepancy between the similarity observed from the data and the similarity derived from a predicted distribution. We analyze the potentials and challenges of the approach from both mathematical and computational perspectives.
arXiv Detail & Related papers (2025-06-04T15:10:31Z) - Communication-Efficient Stochastic Distributed Learning
We address distributed learning problems, both nonconvex and convex, over undirected networks. In particular, we design a novel algorithm based on the distributed Alternating Direction Method of Multipliers (ADMM) to address the challenge of high communication costs.
arXiv Detail & Related papers (2025-01-23T10:05:23Z) - Maximum likelihood inference for high-dimensional problems with multiaffine variable relations
In this paper, we consider inference problems where the variables are related by multiaffine expressions.
We propose a novel Alternating and Iteratively-Reweighted Least Squares (AIRLS) algorithm, and prove its convergence for problems with Generalized Normal Distributions.
arXiv Detail & Related papers (2024-09-05T13:07:31Z) - A Hierarchical Federated Learning Approach for the Internet of Things
We present a novel federated learning solution, QHetFed, suitable for large-scale Internet of Things deployments. We show that QHetFed consistently achieves high learning accuracy and significantly outperforms other hierarchical algorithms.
arXiv Detail & Related papers (2024-03-03T15:40:24Z) - A Comparative Analysis of Distributed Linear Solvers under Data Heterogeneity
We consider the problem of solving a large-scale system of linear equations in a distributed or federated manner by a taskmaster and a set of machines. We compare two well-known classes of algorithms used to solve this problem: projection-based methods and optimization-based methods.
arXiv Detail & Related papers (2023-04-20T20:48:00Z) - Linearization Algorithms for Fully Composite Optimization
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth components.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Faster Adaptive Federated Learning
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Compression and Data Similarity: Combination of Two Techniques for Communication-Efficient Solving of Distributed Variational Inequalities
In this paper we consider a combination of two popular approaches: compression and data similarity.
We show that this synergy can be more effective than each of the approaches separately in solving distributed smooth strongly monotone variational inequalities.
arXiv Detail & Related papers (2022-06-19T16:38:56Z) - Partial Counterfactual Identification from Observational and Experimental Data
We develop effective Monte Carlo algorithms to approximate the optimal bounds from an arbitrary combination of observational and experimental data.
Our algorithms are validated extensively on synthetic and real-world datasets.
arXiv Detail & Related papers (2021-10-12T02:21:30Z) - Algorithms for Solving Nonlinear Binary Optimization Problems in Robust Causal Inference
We propose greedy algorithms to solve the robust causal inference test instances from observational data with continuous outcomes.
By leveraging the structure of the feasibility formulation, we develop greedy schemes that are efficient in solving robust test problems.
arXiv Detail & Related papers (2020-12-22T16:12:11Z) - Isometric Multi-Shape Matching
Finding correspondences between shapes is a fundamental problem in computer vision and graphics.
While isometries are often studied in shape correspondence problems, they have not been considered explicitly in the multi-matching setting.
We present a suitable optimisation algorithm for solving our formulation and provide a convergence and complexity analysis.
arXiv Detail & Related papers (2020-12-04T15:58:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.