Single-Round Scalable Analytic Federated Learning
- URL: http://arxiv.org/abs/2512.03336v1
- Date: Wed, 03 Dec 2025 01:00:37 GMT
- Title: Single-Round Scalable Analytic Federated Learning
- Authors: Alan T. L. Bacellar, Mustafa Munir, Felipe M. G. França, Priscila M. V. Lima, Radu Marculescu, Lizy K. John
- Abstract summary: SAFLe is a framework that achieves scalable non-linear expressivity by introducing a structured head of bucketed features and sparse, grouped embeddings. We prove this non-linear architecture is mathematically equivalent to a high-dimensional linear regression. Empirically, SAFLe establishes a new state-of-the-art for analytic FL, significantly outperforming both linear AFL and multi-round DeepAFL in accuracy across all benchmarks.
- Score: 20.7218411245201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is plagued by two key challenges: high communication overhead and performance collapse on heterogeneous (non-IID) data. Analytic FL (AFL) provides a single-round, data distribution invariant solution, but is limited to linear models. Subsequent non-linear approaches, like DeepAFL, regain accuracy but sacrifice the single-round benefit. In this work, we break this trade-off. We propose SAFLe, a framework that achieves scalable non-linear expressivity by introducing a structured head of bucketed features and sparse, grouped embeddings. We prove this non-linear architecture is mathematically equivalent to a high-dimensional linear regression. This key equivalence allows SAFLe to be solved with AFL's single-shot, invariant aggregation law. Empirically, SAFLe establishes a new state-of-the-art for analytic FL, significantly outperforming both linear AFL and multi-round DeepAFL in accuracy across all benchmarks, demonstrating a highly efficient and scalable solution for federated vision.
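The claimed equivalence can be illustrated with a minimal sketch (the names, dimensions, and feature construction below are assumptions for illustration, not the paper's actual architecture): discretizing each raw feature into buckets and summing one learned weight per (feature, bucket) pair is linear in the one-hot encoding of the bucket indices, so the "non-linear" head admits a closed-form least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, B = 200, 5, 8          # samples, raw features, buckets per feature

X = rng.uniform(0.0, 1.0, size=(n, d))
buckets = np.minimum((X * B).astype(int), B - 1)   # bucket index per feature

# One-hot encode: each sample becomes a sparse (d * B)-dimensional vector
# with exactly one active entry per feature.
Phi = np.zeros((n, d * B))
rows = np.arange(n)[:, None]
Phi[rows, np.arange(d) * B + buckets] = 1.0

# A bucketed head with weights W predicts Phi @ W: the sum of one learned
# scalar per (feature, bucket) pair, i.e. plain linear regression over Phi.
y = rng.normal(size=n)
W, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ W
```

Because the model is linear in `Phi`, it can be solved with the same closed-form machinery as linear AFL while still being a non-linear function of the raw inputs `X`.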
Related papers
- DeepAFL: Deep Analytic Federated Learning [32.19650212973813]
Federated Learning (FL) is a popular distributed learning paradigm for breaking down data silos. Traditional FL approaches largely rely on gradient-based updates. We propose our Deep Analytic Federated Learning approach, named DeepAFL.
arXiv Detail & Related papers (2026-02-28T09:58:18Z)
- FedQS: Optimizing Gradient and Model Aggregation for Semi-Asynchronous Federated Learning [8.906501632865908]
Federated learning (FL) enables collaborative model training across multiple parties without sharing raw data. This paper presents FedQS, the first framework to theoretically analyze and address the disparities between gradient and model aggregation strategies in semi-asynchronous FL (SAFL). Our work bridges the gap between aggregation strategies in SAFL, offering a unified solution for stable, accurate, and efficient federated learning.
arXiv Detail & Related papers (2025-10-09T01:32:19Z)
- Analytic Personalized Federated Meta-Learning [15.1961498951975]
Analytic Federated Learning (AFL) is an enhanced gradient-free federated learning (FL) paradigm designed to accelerate training by updating the global model in a single step with closed-form least-squares (LS) solutions. We propose the FedACnnwise framework, in which a layerwise training method is designed by modeling each layer as an LS problem. It generates a personalized model for each client by analytically solving a local objective that bridges the gap between the global model and the individual data distributions.
arXiv Detail & Related papers (2025-02-10T11:27:54Z)
- Sketched Adaptive Federated Deep Learning: A Sharp Convergence Analysis [7.303912285452846]
We introduce specific sketched adaptive federated learning (SAFL) algorithms with guarantees on communication cost depending only logarithmically (instead of linearly) on the ambient dimension. Our theoretical claims are supported by empirical studies on vision and language tasks, in both fine-tuning and training-from-scratch settings. Surprisingly, the proposed SAFL methods are competitive with state-of-the-art communication-efficient federated learning algorithms based on error feedback.
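The core compression idea can be illustrated with a hedged sketch (the count-sketch construction and all parameters below are assumptions for illustration, not the paper's exact algorithm): hash each gradient coordinate into one of `w` bins with a shared random sign, transmit only the `w` bin totals, and recover an unbiased per-coordinate estimate by reading back the signed bin.

```python
import numpy as np

rng = np.random.default_rng(2)
d, w = 10_000, 256           # ambient dimension vs. sketch width (w << d)

h = rng.integers(0, w, size=d)        # shared hash: bucket per coordinate
s = rng.choice([-1.0, 1.0], size=d)   # shared random signs

g = rng.normal(size=d)                # a client's local gradient

# Compress: accumulate signed coordinates into w bins.
sketch = np.zeros(w)
np.add.at(sketch, h, s * g)           # O(w) values sent instead of O(d)

# Decompress on the server: unbiased estimate of each coordinate.
g_hat = s * sketch[h]
```

Communication per round scales with the sketch width rather than the model dimension, which is the source of the logarithmic (in `d`) cost the abstract refers to.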
arXiv Detail & Related papers (2024-11-11T07:51:22Z)
- AFL: A Single-Round Analytic Approach for Federated Learning with Pre-trained Models [34.15482252496494]
We introduce analytic federated learning (AFL), a new training paradigm that brings analytical (i.e., closed-form) solutions to federated learning (FL) with pre-trained models. Our AFL draws inspiration from analytic learning -- a gradient-free technique that trains neural networks with analytical solutions in one epoch.
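The single-round aggregation can be sketched as follows (a minimal ridge-regression illustration under assumed names and shapes, not the paper's full algorithm): each client sends only its Gram statistics, the server sums them and solves one regularized least-squares problem. Because summation is order- and partition-invariant, non-IID splits recover exactly the centralized solution.

```python
import numpy as np

rng = np.random.default_rng(1)
d, lam = 4, 1e-3             # feature dimension, ridge regularizer

def client_stats(X, y):
    """Each client communicates only X^T X and X^T y -- never raw data."""
    return X.T @ X, X.T @ y

# Three clients with very different (non-IID) sample counts.
clients = [(rng.normal(size=(m, d)), rng.normal(size=m)) for m in (10, 50, 3)]

G = lam * np.eye(d)          # regularized Gram accumulator
b = np.zeros(d)
for X, y in clients:
    Gk, bk = client_stats(X, y)
    G += Gk
    b += bk

w_federated = np.linalg.solve(G, b)

# Centralized baseline: identical weights, after one round of communication.
Xc = np.vstack([X for X, _ in clients])
yc = np.concatenate([y for _, y in clients])
w_central = np.linalg.solve(lam * np.eye(d) + Xc.T @ Xc, Xc.T @ yc)
```

The exact match between `w_federated` and `w_central` is what the abstract calls data-distribution invariance: how the data is partitioned across clients cannot change the solution.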
arXiv Detail & Related papers (2024-05-25T13:58:38Z)
- Federated Learning Resilient to Byzantine Attacks and Data Heterogeneity [59.17297282373628]
This paper addresses federated learning (FL) in the context of malicious attacks on data. We introduce a novel Robust Average Gradient Algorithm (RAGA), which uses the geometric median for aggregation and provides convergence analysis for both strongly-convex and non-convex loss functions.
arXiv Detail & Related papers (2024-03-20T08:15:08Z) - DFedADMM: Dual Constraints Controlled Model Inconsistency for
Decentralized Federated Learning [52.83811558753284]
Decentralized federated learning (DFL) discards the central server and establishes a decentralized communication network.
Existing DFL methods still suffer from two major challenges: local inconsistency and local overfitting.
arXiv Detail & Related papers (2023-08-16T11:22:36Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - FeDXL: Provable Federated Learning for Deep X-Risk Optimization [105.17383135458897]
We tackle a novel federated learning (FL) problem for optimizing a family of X-risks, to which no existing algorithms are applicable.
The challenges for designing an FL algorithm for X-risks lie in the non-decomposability of the objective over multiple machines and the interdependency between different machines.
arXiv Detail & Related papers (2022-10-26T00:23:36Z)
- Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting [60.98700344526674]
Low-complexity models such as linear function representation play a pivotal role in enabling sample-efficient reinforcement learning.
In this paper, we investigate a new sampling protocol, which draws samples in an online/exploratory fashion but allows one to backtrack and revisit previous states in a controlled and infrequent manner.
We develop an algorithm tailored to this setting, achieving a sample complexity that scales polynomially with the feature dimension, the horizon, and the inverse sub-optimality gap, but not the size of the state/action space.
arXiv Detail & Related papers (2021-05-17T17:22:07Z)
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- Multi-Objective Matrix Normalization for Fine-grained Visual Recognition [153.49014114484424]
Bilinear pooling achieves great success in fine-grained visual recognition (FGVC).
Recent methods have shown that the matrix power normalization can stabilize the second-order information in bilinear features.
We propose an efficient Multi-Objective Matrix Normalization (MOMN) method that can simultaneously normalize a bilinear representation.
arXiv Detail & Related papers (2020-03-30T08:40:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.