AutoAI-TS: AutoAI for Time Series Forecasting
- URL: http://arxiv.org/abs/2102.12347v1
- Date: Wed, 24 Feb 2021 15:30:54 GMT
- Title: AutoAI-TS: AutoAI for Time Series Forecasting
- Authors: Syed Yousaf Shah, Dhaval Patel, Long Vu, Xuan-Hong Dang, Bei Chen,
Peter Kirchner, Horst Samulowitz, David Wood, Gregory Bramble, Wesley M.
Gifford, Giridhar Ganapavarapu, Roman Vaculin and Petros Zerfos
- Abstract summary: We present AutoAI for Time Series Forecasting (AutoAI-TS) that provides users with a zero configuration (zero-conf) system.
AutoAI-TS automatically performs all the data preparation, model creation, parameter optimization, training and model selection for users.
It then evaluates and ranks pipelines using the proposed T-Daub mechanism to choose the best pipeline.
- Score: 14.078195334596494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A large number of time series forecasting models, including
traditional statistical models, machine learning models and, more recently,
deep learning models, have been proposed in the literature. However, choosing
the right model, along with good parameter values, that performs well on given
data is still challenging. Automatically providing users with a good set of
models for a given dataset saves both the time and effort of trial-and-error
approaches across the wide variety of available models and their parameter
settings. We present AutoAI for Time Series Forecasting (AutoAI-TS), which
provides users with a zero-configuration (zero-conf) system to efficiently
train, optimize and choose the best forecasting model among various classes of
models for the given dataset.
With its flexible zero-conf design, AutoAI-TS automatically performs all the
data preparation, model creation, parameter optimization, training and model
selection for users and provides a trained model that is ready to use. For
given data, AutoAI-TS utilizes a wide variety of models including classical
statistical models, Machine Learning (ML) models, statistical-ML hybrid models
and deep learning models along with various transformations to create
forecasting pipelines. It then evaluates and ranks pipelines using the proposed
T-Daub mechanism to choose the best pipeline. The paper describes in detail all
the technical aspects of AutoAI-TS, along with extensive benchmarking on a
variety of real-world datasets for various use cases. Benchmark results show
that AutoAI-TS, with no manual configuration from the user, automatically
trains and selects pipelines that on average outperform existing
state-of-the-art time series forecasting toolkits.
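The abstract's core loop — create candidate forecasting pipelines, train them, evaluate on held-out data, rank, and return the best — can be sketched in miniature. This is a hypothetical illustration of that zero-conf selection idea only; the pipeline names, the MAE-on-holdout scoring, and the ranking scheme here are illustrative assumptions, not AutoAI-TS's actual API or its T-Daub data-allocation mechanism.

```python
from typing import Callable, Dict, List, Sequence, Tuple

# Three toy candidate "pipelines" standing in for the statistical, ML and
# hybrid model classes the paper draws from. Each maps (history, horizon)
# to a list of forecasts.

def naive_last(history: Sequence[float], horizon: int) -> List[float]:
    # Repeat the last observed value.
    return [history[-1]] * horizon

def mean_forecast(history: Sequence[float], horizon: int) -> List[float]:
    # Forecast the historical mean.
    m = sum(history) / len(history)
    return [m] * horizon

def drift_forecast(history: Sequence[float], horizon: int) -> List[float]:
    # Extrapolate the average per-step change.
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return [history[-1] + slope * (h + 1) for h in range(horizon)]

def mae(actual: Sequence[float], forecast: Sequence[float]) -> float:
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rank_pipelines(
    series: Sequence[float],
    horizon: int,
    pipelines: Dict[str, Callable[[Sequence[float], int], List[float]]],
) -> List[Tuple[str, float]]:
    """Holdout-evaluate every candidate and rank by error, best first."""
    train, test = series[:-horizon], series[-horizon:]
    scored = [(name, mae(test, fn(train, horizon)))
              for name, fn in pipelines.items()]
    return sorted(scored, key=lambda pair: pair[1])

series = [float(x) for x in range(1, 21)]  # simple upward trend
ranking = rank_pipelines(series, horizon=4,
                         pipelines={"naive": naive_last,
                                    "mean": mean_forecast,
                                    "drift": drift_forecast})
best_name, best_err = ranking[0]  # drift wins on a linear trend
```

The real system differs in scale and strategy: AutoAI-TS evaluates many pipeline classes with transformations and parameter optimization, and T-Daub ranks them without exhaustively training every candidate on all the data.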
Related papers
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z)
- AutoFT: Learning an Objective for Robust Fine-Tuning [60.641186718253735]
Foundation models encode rich representations that can be adapted to downstream tasks by fine-tuning.
Current approaches to robust fine-tuning use hand-crafted regularization techniques.
We propose AutoFT, a data-driven approach for robust fine-tuning.
arXiv Detail & Related papers (2024-01-18T18:58:49Z)
- AutoXPCR: Automated Multi-Objective Model Selection for Time Series Forecasting [1.0515439489916734]
We propose AutoXPCR - a novel method for automated and explainable multi-objective model selection.
Our approach leverages meta-learning to estimate any model's performance along PCR criteria, which encompass (P)redictive error, (C)omplexity, and (R)esource demand.
Our method clearly outperforms other model selection approaches - on average, it only requires 20% of computation costs for recommending models with 90% of the best-possible quality.
arXiv Detail & Related papers (2023-12-20T14:04:57Z)
- auto-sktime: Automated Time Series Forecasting [18.640815949661903]
We introduce auto-sktime, a novel framework for automated time series forecasting.
The proposed framework uses the power of automated machine learning (AutoML) techniques to automate the creation of the entire forecasting pipeline.
Experimental results on 64 diverse real-world time series datasets demonstrate the effectiveness and efficiency of the framework.
arXiv Detail & Related papers (2023-12-13T21:34:30Z)
- Unified Long-Term Time-Series Forecasting Benchmark [0.6526824510982802]
We present a comprehensive dataset designed explicitly for long-term time-series forecasting.
We incorporate a collection of datasets obtained from diverse, dynamic systems and real-life records.
To determine the most effective model in diverse scenarios, we conduct an extensive benchmarking analysis using classical and state-of-the-art models.
Our findings reveal intriguing performance comparisons among these models, highlighting the dataset-dependent nature of model effectiveness.
arXiv Detail & Related papers (2023-09-27T18:59:00Z)
- Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How [62.467716468917224]
We propose a methodology that jointly searches for the optimal pretrained model and the hyperparameters for finetuning it.
Our method transfers knowledge about the performance of many pretrained models on a series of datasets.
We empirically demonstrate that our resulting approach can quickly select an accurate pretrained model for a new dataset.
arXiv Detail & Related papers (2023-06-06T16:15:26Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- It's the Best Only When It Fits You Most: Finding Related Models for Serving Based on Dynamic Locality Sensitive Hashing [1.581913948762905]
Preparation of training data is often a bottleneck in the lifecycle of deploying a deep learning model for production or research.
This paper proposes an end-to-end process of searching related models for serving based on the similarity of the target dataset and the training datasets of the available models.
arXiv Detail & Related papers (2020-10-13T22:52:13Z)
- Automatic deep learning for trend prediction in time series data [0.0]
Deep Neural Network (DNN) algorithms have been explored for predicting trends in time series data.
In many real world applications, time series data are captured from dynamic systems.
We show how a recent AutoML tool can be effectively used to automate the model development process.
arXiv Detail & Related papers (2020-09-17T19:47:05Z)
- Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation [97.42894942391575]
We propose FAST-DAD to distill arbitrarily complex ensemble predictors into individual models like boosted trees, random forests, and deep networks.
Our individual distilled models are over 10x faster and more accurate than ensemble predictors produced by AutoML tools like H2O/AutoSklearn.
arXiv Detail & Related papers (2020-06-25T09:57:47Z)
- AutoFIS: Automatic Feature Interaction Selection in Factorization Models for Click-Through Rate Prediction [75.16836697734995]
We propose a two-stage algorithm called Automatic Feature Interaction Selection (AutoFIS).
AutoFIS can automatically identify important feature interactions for factorization models with computational cost just equivalent to training the target model to convergence.
AutoFIS has been deployed onto the training platform of Huawei App Store recommendation service.
arXiv Detail & Related papers (2020-03-25T06:53:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.