Federated Learning for Sparse Principal Component Analysis
- URL: http://arxiv.org/abs/2311.08677v1
- Date: Wed, 15 Nov 2023 03:55:28 GMT
- Title: Federated Learning for Sparse Principal Component Analysis
- Authors: Sin Cheng Ciou, Pin Jui Chen, Elvin Y. Tseng and Yuh-Jye Lee
- Abstract summary: Federated learning is a decentralized approach where model training occurs on the client side, preserving privacy by keeping data localized.
We apply this framework to Sparse Principal Component Analysis (SPCA) in this work.
SPCA aims to attain sparse component loadings while maximizing data variance for improved interpretability.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the rapidly evolving realm of machine learning, algorithm effectiveness
often faces limitations due to data quality and availability. Traditional
approaches grapple with data sharing due to legal and privacy concerns. The
federated learning framework addresses this challenge. Federated learning is a
decentralized approach where model training occurs on the client side, preserving
privacy by keeping data localized. Instead of sending raw data to a central
server, only model updates are exchanged, enhancing data security. We apply
this framework to Sparse Principal Component Analysis (SPCA) in this work. SPCA
aims to attain sparse component loadings while maximizing data variance for
improved interpretability. Besides the L1 norm regularization term in
conventional SPCA, we add a smoothing function to facilitate gradient-based
optimization methods. Moreover, to improve computational efficiency, we
introduce a least squares approximation to the original SPCA. This enables
analytic solutions in the optimization process, leading to substantial
computational improvements. Within the federated framework, we formulate SPCA
as a consensus optimization problem, which can be solved using the Alternating
Direction Method of Multipliers (ADMM). Our extensive experiments involve both
IID and non-IID random features across various data owners. Results on
synthetic and public datasets affirm the efficacy of our federated SPCA
approach.
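The abstract does not spell out the smoothed objective or the consensus formulation. The following is a minimal sketch of one plausible reading, assuming a variance-maximization form of SPCA restricted to a single component, a standard sqrt(v^2 + eps) smoothing of the absolute value in the L1 penalty, and per-client local copies tied to a global consensus variable (lambda, epsilon, K, and d are illustrative symbols, not taken from the paper):
```latex
% Consensus form of smoothed SPCA over K clients (a sketch): client k holds
% data X_k and a local loading v_k, constrained to agree with a global z;
% sqrt(v^2 + eps) smooths the nonsmooth |v| so gradient solvers apply.
\min_{\{v_k\},\, z}\ \sum_{k=1}^{K} \left[
    -\, v_k^{\top} X_k^{\top} X_k\, v_k
    \;+\; \lambda \sum_{j=1}^{d} \sqrt{v_{k,j}^{2} + \epsilon}
\right]
\quad \text{subject to}\quad v_k = z,\quad k = 1, \dots, K .
```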
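Along the same lines, here is a minimal runnable sketch of how the consensus problem above could be solved with ADMM in a federated setting: clients perform gradient-based local updates on the smoothed objective and exchange only loading vectors and duals, never raw data. All names (smoothed_l1_grad, local_update, federated_spca) and hyperparameters are hypothetical, and the local gradient loop stands in for the paper's analytic least-squares updates.
```python
import numpy as np

def smoothed_l1_grad(v, eps=1e-4):
    # Gradient of the smooth surrogate sum_j sqrt(v_j^2 + eps) for ||v||_1.
    return v / np.sqrt(v ** 2 + eps)

def local_update(X, v, z, u, lam=0.1, rho=1.0, lr=0.01, steps=50):
    # Client-side v-step: projected gradient descent on
    #   -v^T C v + lam * smoothed ||v||_1 + (rho/2) ||v - z + u||^2,
    # where C is the local sample covariance. Raw data X never leaves here.
    C = X.T @ X / len(X)
    for _ in range(steps):
        grad = -2.0 * C @ v + lam * smoothed_l1_grad(v) + rho * (v - z + u)
        v = v - lr * grad
        v /= max(np.linalg.norm(v), 1e-12)  # heuristic unit-norm projection
    return v

def federated_spca(clients, d, rounds=100, rho=1.0):
    # Server loop: only loading vectors and duals are exchanged, never data.
    z = np.random.randn(d)
    z /= np.linalg.norm(z)
    vs = [z.copy() for _ in clients]
    us = [np.zeros(d) for _ in clients]
    for _ in range(rounds):
        vs = [local_update(X, v, z, u, rho=rho)
              for X, v, u in zip(clients, vs, us)]
        z = np.mean([v + u for v, u in zip(vs, us)], axis=0)  # consensus step
        z /= max(np.linalg.norm(z), 1e-12)
        us = [u + v - z for v, u in zip(vs, us)]  # dual ascent
    return z

# Usage: three clients whose locally held features have different scales,
# a crude stand-in for the non-IID settings in the paper's experiments.
rng = np.random.default_rng(0)
clients = [rng.normal(size=(200, 20)) * (1.0 + k) for k in range(3)]
loading = federated_spca(clients, d=20)
```
The unit-norm projections are a heuristic for the nonconvex norm constraint of PCA; exact ADMM treatments of that constraint, and the paper's own updates, may differ.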
Related papers
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
The federated recommender system (FedRS) addresses both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z) - Aiding Global Convergence in Federated Learning via Local Perturbation and Mutual Similarity Information [6.767885381740953]
Federated learning has emerged as a distributed optimization paradigm.
We propose a novel modified framework wherein each client locally performs a perturbed gradient step.
We show that our algorithm speeds up convergence by a margin of up to 30 global rounds compared with FedAvg.
arXiv Detail & Related papers (2024-10-07T23:14:05Z) - FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning [57.38427653043984]
Federated learning (FL) has emerged as a prominent approach for collaborative training of machine learning models across distributed clients.
We introduce FedCAda, an innovative federated client adaptive algorithm designed to tackle this challenge.
We demonstrate that FedCAda outperforms the state-of-the-art methods in terms of adaptability, convergence, stability, and overall performance.
arXiv Detail & Related papers (2024-05-20T06:12:33Z) - Indirectly Parameterized Concrete Autoencoders [40.35109085799772]
Recent developments in neural network-based embedded feature selection show promising results across a wide range of applications.
arXiv Detail & Related papers (2024-03-01T14:41:51Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Serverless Federated AUPRC Optimization for Multi-Party Collaborative
Imbalanced Data Mining [119.89373423433804]
Area Under the Precision-Recall Curve (AUPRC) was introduced as an effective metric.
Serverless multi-party collaborative training can cut down the communications cost by avoiding the server node bottleneck.
We propose a new ServerLess biAsed sTochastic gradiEnt (SLATE) algorithm to directly optimize the AUPRC.
arXiv Detail & Related papers (2023-08-06T06:51:32Z) - Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Sample-based and Feature-based Federated Learning via Mini-batch SSCA [18.11773963976481]
This paper investigates sample-based and feature-based federated optimization.
We show that the proposed algorithms can preserve data privacy through the model aggregation mechanism.
We also show that the proposed algorithms converge to Karush-Kuhn-Tucker points of the respective federated optimization problems.
arXiv Detail & Related papers (2021-04-13T08:23:46Z) - Improving Federated Relational Data Modeling via Basis Alignment and
Weight Penalty [18.096788806121754]
Federated learning (FL) has attracted increasing attention in recent years.
We present a modified version of the graph neural network algorithm that performs federated modeling over Knowledge Graphs (KGs).
We propose a novel optimization algorithm, named FedAlign, with 1) optimal transportation (OT) for on-client personalization and 2) a weight constraint to speed up convergence.
Empirical results show that our proposed method outperforms state-of-the-art FL methods, such as FedAvg and FedProx, with better convergence.
arXiv Detail & Related papers (2020-11-23T12:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.