A Federated Data-Driven Evolutionary Algorithm
- URL: http://arxiv.org/abs/2102.08288v1
- Date: Tue, 16 Feb 2021 17:18:54 GMT
- Title: A Federated Data-Driven Evolutionary Algorithm
- Authors: Jinjin Xu, Yaochu Jin, Wenli Du, Sai Gu
- Abstract summary: Existing data-driven evolutionary optimization algorithms require that all data are centrally stored.
This paper proposes a federated data-driven evolutionary optimization framework that is able to perform data-driven optimization when the data is distributed on multiple devices.
- Score: 10.609815608017065
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data-driven evolutionary optimization has witnessed great success in solving
complex real-world optimization problems. However, existing data-driven
optimization algorithms require that all data are centrally stored, which is
not always practical and may be vulnerable to privacy leakage and security
threats if the data must be collected from different devices. To address the
above issue, this paper proposes a federated data-driven evolutionary
optimization framework that is able to perform data-driven optimization when
the data is distributed on multiple devices. On the basis of federated
learning, a sorted model aggregation method is developed for aggregating local
surrogates based on radial-basis-function networks. In addition, a federated
surrogate management strategy is suggested by designing an acquisition function
that takes into account the information of both the global and local surrogate
models. Empirical studies on a set of widely used benchmark functions in the
presence of various data distributions demonstrate the effectiveness of the
proposed framework.
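The abstract names two concrete mechanisms: sorted model aggregation of radial-basis-function (RBF) surrogates, and an acquisition function that mixes global and local surrogate information. The sketch below illustrates one plausible reading of both in plain NumPy; the sorting key (center norm), the LCB-style weight `alpha`, and all function names are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def rbf_features(X, centers, sigma):
    """Gaussian RBF design matrix: phi[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_local_rbf(X, y, n_centers, sigma, rng):
    """One client's surrogate: centers sampled from local data, output
    weights fitted by least squares (a common RBF-network recipe)."""
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    w, *_ = np.linalg.lstsq(rbf_features(X, centers, sigma), y, rcond=None)
    return centers, w

def sorted_aggregate(models):
    """Sorted model aggregation (illustrative): order each client's hidden
    units by a shared key (here, center norm) so comparable units are
    averaged together, then average centers and output weights."""
    cs, ws = [], []
    for centers, w in models:
        order = np.argsort(np.linalg.norm(centers, axis=1))
        cs.append(centers[order])
        ws.append(w[order])
    return np.mean(cs, axis=0), np.mean(ws, axis=0)

def federated_lcb(x, global_model, local_models, sigma, alpha=2.0):
    """Acquisition (illustrative): global-surrogate prediction minus a bonus
    for disagreement among local surrogates, used as an uncertainty proxy."""
    gc, gw = global_model
    mu = (rbf_features(x[None], gc, sigma) @ gw).item()
    preds = [(rbf_features(x[None], c, sigma) @ w).item() for c, w in local_models]
    return mu - alpha * np.std(preds)

# Toy run: four clients, each holding private samples of the same objective.
rng = np.random.default_rng(0)
f = lambda X: (X ** 2).sum(-1)
clients = [rng.uniform(-5, 5, (40, 2)) for _ in range(4)]
models = [fit_local_rbf(X, f(X), n_centers=10, sigma=2.0, rng=rng) for X in clients]
global_model = sorted_aggregate(models)
candidates = rng.uniform(-5, 5, (200, 2))  # stand-in for evolutionary offspring
best = min(candidates, key=lambda x: federated_lcb(x, global_model, models, sigma=2.0))
```

In a full loop, `best` would be evaluated on the true expensive objective and the clients' surrogates retrained; that outer evolutionary cycle is omitted here.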
Related papers
- Faster Convergence on Heterogeneous Federated Edge Learning: An Adaptive Clustered Data Sharing Approach [27.86468387141422]
Federated Edge Learning (FEEL) emerges as a pioneering distributed machine learning paradigm for 6G hyper-connectivity.
Current FEEL algorithms struggle with non-independent and non-identically distributed (non-IID) data, leading to elevated communication costs and compromised model accuracy.
We introduce a clustered data sharing framework, mitigating data heterogeneity by selectively sharing partial data from cluster heads to trusted associates.
Experiments show that the proposed framework facilitates FEEL on non-IID datasets with a faster convergence rate and higher model accuracy in a communication-limited environment.
arXiv Detail & Related papers (2024-06-14T07:22:39Z)
- Implicitly Guided Design with PropEn: Match your Data to Follow the Gradient [52.2669490431145]
PropEn is inspired by 'matching', which enables implicit guidance without training a discriminator.
We show that training with a matched dataset approximates the gradient of the property of interest while remaining within the data distribution.
arXiv Detail & Related papers (2024-05-28T11:30:19Z)
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate a loss function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Differentially Private Distributed Convex Optimization [0.0]
In distributed optimization, multiple agents cooperate to minimize a global objective function, expressed as a sum of local objectives.
Locally stored data are not shared with other agents; however, the information exchanged during optimization may still leak sensitive data, which could limit the practical usage of DO in privacy-sensitive applications.
We propose a privacy-preserving DO algorithm for constrained convex optimization models.
arXiv Detail & Related papers (2023-02-28T12:07:27Z)
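The entry above stays at a high level; a common building block behind differentially private distributed optimization is for each agent to clip and noise whatever it shares (here, a local gradient) before anyone else uses it. The following is a generic sketch of that pattern under illustrative parameters, not the paper's specific algorithm or its privacy accounting.

```python
import numpy as np

def dp_distributed_gd(local_grads, x0, steps=200, lr=0.05,
                      clip=2.0, noise_std=0.1, seed=0):
    """Each agent reports a clipped, Gaussian-noised local gradient;
    the server averages the reports and takes a descent step."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        reports = []
        for grad in local_grads:
            g = grad(x)
            g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))    # bound sensitivity
            reports.append(g + rng.normal(0.0, noise_std, g.shape))  # perturb the message
        x -= lr * np.mean(reports, axis=0)
    return x

# Toy usage: three agents privately hold f_i(x) = ||x - a_i||^2 / 2,
# whose sum is minimized at the average of the anchors a_i.
anchors = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([-1.0, 1.0])]
grads = [lambda x, a=a: x - a for a in anchors]
print(dp_distributed_gd(grads, x0=np.zeros(2)))  # close to [0., 1.], up to noise
```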
- Data augmentation through multivariate scenario forecasting in Data Centers using Generative Adversarial Networks [0.18416014644193063]
The main challenge in achieving a global energy efficiency strategy based on Artificial Intelligence is that we need massive amounts of data to feed the algorithms.
This paper proposes a time-series data augmentation methodology based on synthetic scenario forecasting within the Data Center.
Our research will help to optimize the energy consumed in Data Centers, although the proposed methodology can be employed in any similar time-series-like problem.
arXiv Detail & Related papers (2022-01-12T15:09:10Z)
- A Federated Data-Driven Evolutionary Algorithm for Expensive Multi/Many-objective Optimization [11.92436948211501]
This paper proposes a federated data-driven evolutionary multi-objective/many-objective optimization algorithm.
We leverage federated learning for surrogate construction so that multiple clients collaboratively train a radial-basis-function network as the global surrogate.
A new federated acquisition function is proposed for the central server to approximate the objective values using the global surrogate and estimate the uncertainty level of the approximated objective values.
arXiv Detail & Related papers (2021-06-22T22:33:24Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
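The setting described above, where at every iteration a random subset of agents performs local updates, can be made concrete with a partial-participation FedAvg-style loop (the same "computation then aggregation" pattern noted in the FedPD entry). This is a static-target sketch: the paper additionally lets the true minimizer drift over time, and all quantities below (participation fraction, local step count, least-squares model) are illustrative assumptions.

```python
import numpy as np

def dynamic_fedavg(local_data, dim, rounds=100, frac=0.3,
                   local_steps=5, lr=0.1, seed=0):
    """Each round, a random subset of agents runs a few local gradient
    steps from the current global model (computation), and the server
    averages the returned models (aggregation)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    n_agents = len(local_data)
    for _ in range(rounds):
        active = rng.choice(n_agents, size=max(1, int(frac * n_agents)), replace=False)
        updates = []
        for i in active:
            X, y = local_data[i]
            w_i = w.copy()
            for _ in range(local_steps):
                w_i -= lr * (X.T @ (X @ w_i - y)) / len(y)  # local least-squares gradient
            updates.append(w_i)
        w = np.mean(updates, axis=0)
    return w

# Toy usage: ten agents observe noisy samples of the same linear model.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
data = []
for _ in range(10):
    X = rng.normal(size=(50, 2))
    data.append((X, X @ true_w + 0.1 * rng.normal(size=50)))
print(dynamic_fedavg(data, dim=2))  # approaches [2., -1.]
```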
This list is automatically generated from the titles and abstracts of the papers on this site.