A Global-Local Approximation Framework for Large-Scale Gaussian Process
Modeling
- URL: http://arxiv.org/abs/2305.10158v1
- Date: Wed, 17 May 2023 12:19:59 GMT
- Title: A Global-Local Approximation Framework for Large-Scale Gaussian Process
Modeling
- Authors: Akhil Vakayil and Roshan Joseph
- Abstract summary: We propose a novel framework for large-scale Gaussian process (GP) modeling.
We employ a combined global-local approach in building the approximation.
The performance of our framework, which we refer to as TwinGP, is on par with or better than that of state-of-the-art GP modeling methods.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this work, we propose a novel framework for large-scale Gaussian
process (GP) modeling. In contrast to the purely global and purely local
approximations proposed in the literature to address the computational
bottleneck of exact GP modeling, we employ a combined global-local approach in
building the approximation. Our framework uses a subset-of-data approach where
the subset is the union of a set of global points designed to capture the
global trend in the data and a set of local points specific to a given testing
location to capture the local trend around that location. The correlation
function is likewise modeled as a combination of a global and a local kernel.
The performance of our framework, which we refer to as TwinGP, is on par with
or better than that of state-of-the-art GP modeling methods, at a fraction of
their computational cost.
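The prediction scheme described in the abstract can be sketched in a few lines. The following is a minimal illustration only: the kernel choices (two squared-exponential kernels with different lengthscales), the neighbor count, and all function names are assumptions made for the sake of a runnable example, not the authors' actual TwinGP implementation (which, per the abstract, also designs the global points to capture the global trend).

```python
import numpy as np

def rbf(X1, X2, lengthscale):
    """Squared-exponential kernel between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def global_local_gp_predict(X, y, x_star, X_global, y_global,
                            n_local=20, ls_global=2.0, ls_local=0.3,
                            jitter=1e-6):
    """GP posterior mean at x_star from a union of a fixed global subset
    and the n_local nearest training points, under a sum of a smooth
    (global) and a short-lengthscale (local) kernel."""
    # Local subset: nearest neighbors of the testing location.
    idx = np.argsort(((X - x_star) ** 2).sum(-1))[:n_local]
    Xs = np.vstack([X_global, X[idx]])        # global-local union
    ys = np.concatenate([y_global, y[idx]])

    def k(A, B):  # combined correlation: global kernel + local kernel
        return rbf(A, B, ls_global) + rbf(A, B, ls_local)

    K = k(Xs, Xs) + jitter * np.eye(len(Xs))  # jitter for stability
    k_star = k(x_star[None, :], Xs)
    return (k_star @ np.linalg.solve(K, ys)).item()
```

Note the cost: each prediction factorizes only the small union subset (here 30 global + 20 local points) rather than the full n-by-n kernel matrix, which is the point of the subset-of-data construction.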
Related papers
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- FedSoup: Improving Generalization and Personalization in Federated Learning via Selective Model Interpolation [32.36334319329364]
Cross-silo federated learning (FL) enables the development of machine learning models on datasets distributed across data centers.
Recent research has found that current FL algorithms face a trade-off between local and global performance when confronted with distribution shifts.
We propose a novel federated model soup method to optimize the trade-off between local and global performance.
arXiv Detail & Related papers (2023-07-20T00:07:29Z)
- $\texttt{FedBC}$: Calibrating Global and Local Models via Federated Learning Beyond Consensus [66.62731854746856]
In federated learning (FL), the objective of collaboratively learning a global model through aggregation of model updates across devices tends to oppose the goal of personalization via local information.
In this work, we calibrate this tradeoff in a quantitative manner through a multi-criterion-based optimization.
We demonstrate that $\texttt{FedBC}$ balances the global and local model test accuracy metrics across a suite of datasets.
arXiv Detail & Related papers (2022-06-22T02:42:04Z)
- Federated and Generalized Person Re-identification through Domain and Feature Hallucinating [88.77196261300699]
We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID).
We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
Our method achieves the state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
arXiv Detail & Related papers (2022-03-05T09:15:13Z)
- An Entropy-guided Reinforced Partial Convolutional Network for Zero-Shot Learning [77.72330187258498]
We propose a novel Entropy-guided Reinforced Partial Convolutional Network (ERPCNet).
ERPCNet extracts and aggregates localities based on semantic relevance and visual correlations without human-annotated regions.
It not only discovers global-cooperative localities dynamically but also converges faster for policy gradient optimization.
arXiv Detail & Related papers (2021-11-03T11:13:13Z)
- Global Aggregation then Local Distribution for Scene Parsing [99.1095068574454]
We show that our approach can be modularized as an end-to-end trainable block and easily plugged into existing semantic segmentation networks.
Our approach allows us to set a new state of the art on major semantic segmentation benchmarks including Cityscapes, ADE20K, Pascal Context, Camvid and COCO-stuff.
arXiv Detail & Related papers (2021-07-28T03:46:57Z)
- Combined Global and Local Search for Optimization with Gaussian Process Models [1.1602089225841632]
We introduce the Additive Global and Local GP (AGLGP) model in the optimization framework.
AGLGP is rooted in the inducing-points-based GP sparse approximations and is combined with independent local models in different regions.
It first divides the whole design space into disjoint local regions and identifies a promising region with the global model.
Next, a local model in the selected region is fit to guide detailed search within this region.
The algorithm then switches back to the global step when a good local solution is found.
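The alternating global/local loop summarized above can be illustrated with a deliberately simplified sketch. Everything below is an assumption for illustration only: a toy 1-D objective, a plain GP posterior mean standing in for both AGLGP's inducing-point global model and its independent local models, and a crude "predicted minimum" criterion in place of the paper's actual selection rules.

```python
import numpy as np

def f(x):
    """Toy 1-D objective to minimize (illustrative only)."""
    return np.sin(3 * x) + 0.5 * x**2

def gp_mean(Xtr, ytr, Xte, ls, jitter=1e-6):
    """Plain GP posterior mean with an RBF kernel; a stand-in for the
    sparse global model and the independent local models."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)
    K = k(Xtr, Xtr) + jitter * np.eye(len(Xtr))
    return k(Xte, Xtr) @ np.linalg.solve(K, ytr)

rng = np.random.default_rng(0)
regions = [(-2.0, 0.0), (0.0, 2.0)]   # disjoint local regions
X = rng.uniform(-2, 2, 10)            # initial design
y = f(X)
for _ in range(5):
    # Global step: the global model identifies the most promising region.
    grid = np.linspace(-2, 2, 400)
    mu = gp_mean(X, y, grid, ls=1.0)
    lo, hi = min(regions,
                 key=lambda r: mu[(grid >= r[0]) & (grid < r[1])].min())
    # Local step: a local model guides detailed search inside that region.
    local_grid = np.linspace(lo, hi, 200)
    mask = (X >= lo) & (X <= hi)
    mu_loc = gp_mean(X[mask], y[mask], local_grid, ls=0.2)
    x_new = local_grid[np.argmin(mu_loc)]
    X, y = np.append(X, x_new), np.append(y, f(x_new))
best = X[np.argmin(y)]
```

The design point being illustrated is the division of labor: the long-lengthscale global model is queried only to rank regions, while the short-lengthscale local model is fit on the few points inside the chosen region to propose the next evaluation.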
arXiv Detail & Related papers (2021-07-07T13:40:37Z)
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
- Inter-domain Deep Gaussian Processes [45.28237107466283]
We propose an extension of inter-domain shallow GPs that combines the advantages of inter-domain and deep Gaussian processes (DGPs).
We demonstrate how to leverage existing approximate inference methods to perform simple and scalable approximate inference using inter-domain features in DGPs.
arXiv Detail & Related papers (2020-11-01T04:03:35Z)
- Locally induced Gaussian processes for large-scale simulation experiments [0.0]
We show how placement of inducing points and their multitude can be thwarted by pathologies.
Our proposed methodology hybridizes global inducing point and data subset-based local GP approximation.
We show that local inducing points extend their global and data-subset component parts on the accuracy--computational efficiency frontier.
arXiv Detail & Related papers (2020-08-28T21:37:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.