Principles and Algorithms for Forecasting Groups of Time Series:
Locality and Globality
- URL: http://arxiv.org/abs/2008.00444v3
- Date: Fri, 26 Mar 2021 23:34:48 GMT
- Title: Principles and Algorithms for Forecasting Groups of Time Series:
Locality and Globality
- Authors: Pablo Montero-Manso and Rob J Hyndman
- Abstract summary: We formalize the setting of forecasting a set of time series with local and global methods.
Global models can succeed in a wider range of problems than previously thought.
Purposely naive algorithms derived from these principles result in superior accuracy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Forecasting groups of time series is of increasing practical importance, e.g.
forecasting the demand for multiple products offered by a retailer or server
loads within a data center. The local approach to this problem considers each
time series separately and fits a function or model to each series. The global
approach fits a single function to all series. For groups of similar time
series, global methods outperform the more established local methods. However,
recent results show good performance of global models even in heterogeneous
datasets. This suggests a more general applicability of global methods,
potentially leading to more accurate tools and new scenarios to study.
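To make the two approaches concrete, here is a minimal sketch (our illustration, not code from the paper) that fits autoregressive models both ways on synthetic data; the data and all names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T, p = 50, 200, 4                             # series count, length, lag order (assumed)
Y = rng.standard_normal((K, T)).cumsum(axis=1)   # synthetic random-walk series

def lag_matrix(y, p):
    """Return (X, t): lagged regressors with an intercept, and targets y[p:]."""
    X = np.column_stack([y[p - k - 1: len(y) - k - 1] for k in range(p)])
    return np.column_stack([X, np.ones(len(X))]), y[p:]

# Local approach: a separate AR(p) fit for every series, K*(p+1) parameters in total.
local_fits = [np.linalg.lstsq(*lag_matrix(y, p), rcond=None)[0] for y in Y]

# Global approach: one AR(p) fit on all series pooled, p+1 parameters in total.
X_all = np.vstack([lag_matrix(y, p)[0] for y in Y])
t_all = np.concatenate([lag_matrix(y, p)[1] for y in Y])
global_fit = np.linalg.lstsq(X_all, t_all, rcond=None)[0]

print(f"local: {K * (p + 1)} parameters, global: {p + 1}")
```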
Formalizing the setting of forecasting a set of time series with local and
global methods, we provide the following contributions:
1) Global methods are not more restrictive than local methods: both can
produce the same forecasts without any assumptions about similarity of the
series. Global models can therefore succeed in a wider range of problems than
previously thought (see the first sketch after this list).
2) Basic generalization bounds for local and global algorithms. The
complexity of local methods grows with the size of the set, while it remains
constant for global methods. In large datasets, a global algorithm can afford
to be quite complex and still benefit from better generalization. These bounds
serve to clarify and support recent experimental results in the field, and to
guide the design of new algorithms. For the class of autoregressive models,
this implies that global models can have much larger memory than local
methods (see the parameter-count sketch after this list).
3) In an extensive empirical study, purposely naive algorithms derived from
these principles, such as global linear models or deep networks, result in
superior accuracy.
In particular, global linear models can provide competitive accuracy with two
orders of magnitude fewer parameters than local methods.
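As a concrete illustration of contribution 1, the sketch below (an illustrative construction of ours, not the paper's proof or code; all names and sizes are assumptions) builds a single global model that conditions on the series index through one-hot interactions; pooled least squares then recovers every local AR fit exactly, with no assumption of similarity across series.

```python
# A single global function that conditions on the series index can
# reproduce any collection of local forecasts (illustrative, assumed setup).
import numpy as np

rng = np.random.default_rng(1)
K, T, p = 5, 60, 3                               # small assumed sizes for the demo
Y = rng.standard_normal((K, T)).cumsum(axis=1)

def lag_matrix(y, p):
    """Lagged regressors with an intercept, and the targets y[p:]."""
    X = np.column_stack([y[p - k - 1: len(y) - k - 1] for k in range(p)])
    return np.column_stack([X, np.ones(len(X))]), y[p:]

# Local fits: one AR(p) coefficient vector per series.
local = np.array([np.linalg.lstsq(*lag_matrix(y, p), rcond=None)[0] for y in Y])

# Global fit: interact every regressor with a one-hot series indicator,
# which makes the pooled design block-diagonal, one block per series.
rows, targets = [], []
for i, y in enumerate(Y):
    X, t = lag_matrix(y, p)
    Z = np.zeros((len(X), K * (p + 1)))
    Z[:, i * (p + 1):(i + 1) * (p + 1)] = X
    rows.append(Z)
    targets.append(t)
g = np.linalg.lstsq(np.vstack(rows), np.concatenate(targets), rcond=None)[0]

# The one global model contains every local model: the forecasts coincide.
assert np.allclose(g.reshape(K, p + 1), local, atol=1e-6)
```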
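The complexity argument in contribution 2 can also be made concrete with back-of-the-envelope parameter counts (the numbers below are assumed, purely for intuition): local complexity grows linearly with the number of series, so a single global autoregression can afford a far longer memory and still use far fewer parameters.

```python
K = 100_000                  # number of series in a large dataset (assumed)
p = 4                        # typical per-series AR order (assumed)
P = 1_000                    # much longer memory for one global AR (assumed)

local_params = K * (p + 1)   # grows with the size of the set: 500,000
global_params = P + 1        # constant in the number of series: 1,001
print(local_params // global_params)   # roughly 500x fewer parameters
```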
Related papers
- A Global-Local Approximation Framework for Large-Scale Gaussian Process
  Modeling (arXiv, 2023-05-17)
  We propose a novel framework for large-scale Gaussian process (GP) modeling.
  We employ a combined global-local approach in building the approximation.
  The performance of our framework, which we refer to as TwinGP, is on par with or better than state-of-the-art GP modeling methods.
- $\texttt{FedBC}$: Calibrating Global and Local Models via Federated Learning
  Beyond Consensus (arXiv, 2022-06-22)
  In federated learning (FL), the objective of collaboratively learning a global model through aggregation of model updates across devices tends to oppose the goal of personalization via local information.
  In this work, we calibrate this tradeoff quantitatively through a multi-criterion optimization.
  We demonstrate that FedBC balances global and local model test accuracy across a suite of datasets.
- Federated and Generalized Person Re-identification through Domain and Feature
  Hallucinating (arXiv, 2022-03-05)
  We study the problem of federated domain generalization (FedDG) for person re-identification (re-ID).
  We propose a novel method, called "Domain and Feature Hallucinating (DFH)", to produce diverse features for learning generalized local and global models.
  Our method achieves state-of-the-art performance for FedDG on four large-scale re-ID benchmarks.
- Global Aggregation then Local Distribution for Scene Parsing (arXiv, 2021-07-28)
  We show that our approach can be modularized as an end-to-end trainable block and easily plugged into existing semantic segmentation networks.
  Our approach allows us to establish a new state of the art on major semantic segmentation benchmarks, including Cityscapes, ADE20K, Pascal Context, CamVid and COCO-Stuff.
- Combined Global and Local Search for Optimization with Gaussian Process
  Models (arXiv, 2021-07-07)
  We introduce the Additive Global and Local GP (AGLGP) model in the optimization framework.
  AGLGP is rooted in inducing-point-based sparse GP approximations and is combined with independent local models in different regions.
  It first divides the whole design space into disjoint local regions and identifies a promising region with the global model.
  Next, a local model in the selected region is fit to guide detailed search within this region.
  The algorithm then switches back to the global step when a good local solution is found.
- Clustered Federated Learning via Generalized Total Variation Minimization (arXiv, 2021-05-26)
  We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
  Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
  Our main algorithmic contribution is a fully decentralized federated learning algorithm.
- NeuSE: A Neural Snapshot Ensemble Method for Collaborative Filtering (arXiv, 2021-04-15)
  In collaborative filtering (CF), the optimal models are usually learned by globally minimizing the empirical risks over all the observed data.
  In this paper, we show that the proposed method can significantly improve accuracy (up to 15.9%) when applied to existing collaborative filtering methods.
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to
  Non-IID Data (arXiv, 2020-05-22)
  Federated Learning (FL) has become a popular paradigm for learning from distributed data.
  To effectively utilize data at different devices without moving them to the cloud, algorithms such as Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
- Optimal Local Explainer Aggregation for Interpretable Prediction (arXiv, 2020-03-20)
  A key challenge for decision makers incorporating black-box machine-learned models is understanding the predictions these models provide.
  One proposed method is training surrogate explainer models which approximate the more complex model.
  We propose a novel local explainer algorithm based on information parameters.
- Think Locally, Act Globally: Federated Learning with Local and Global
  Representations (arXiv, 2020-01-06)
  Federated learning is a method of training models on private data distributed over multiple devices.
  We propose a new federated learning algorithm that jointly learns compact local representations on each device.
  We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.