Blending gradient boosted trees and neural networks for point and
probabilistic forecasting of hierarchical time series
- URL: http://arxiv.org/abs/2310.13029v1
- Date: Thu, 19 Oct 2023 09:42:02 GMT
- Title: Blending gradient boosted trees and neural networks for point and
probabilistic forecasting of hierarchical time series
- Authors: Ioannis Nasios, Konstantinos Vogklis
- Abstract summary: We describe a blending methodology of machine learning models that belong to gradient boosted trees and neural networks families.
These principles were successfully applied in the recent M5 Competition on both Accuracy and Uncertainty tracks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper we tackle the problem of point and probabilistic forecasting by
describing a blending methodology of machine learning models that belong to
gradient boosted trees and neural networks families. These principles were
successfully applied in the recent M5 Competition on both Accuracy and
Uncertainty tracks. The key points of our methodology are: a) transform the
task into regression on sales for a single day, b) information-rich feature
engineering, c) create a diverse set of state-of-the-art machine learning
models, and d) carefully construct validation sets for model tuning. We argue
that the diversity of the machine learning models, along with the careful
selection of validation examples, were the most important ingredients for the
effectiveness of our approach. Although the forecasting data had an inherent
hierarchical structure (12 levels), none of our proposed solutions exploited
that hierarchy. Using the proposed methodology, our team ranked within the
gold medal range in both the Accuracy and the Uncertainty tracks. Inference
code, along with already trained models, is available at
https://github.com/IoannisNasios/M5_Uncertainty_3rd_place
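As a concrete illustration of the recipe above, the following minimal Python sketch trains a gradient boosted tree model and a neural network on the same single-day sales regression task, blends them for the point forecast, and uses quantile objectives for the probabilistic side. All features, targets, weights and hyperparameters are synthetic placeholders, not the authors' settings; the actual pipeline is in the repository linked above.

```python
import numpy as np
import lightgbm as lgb
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# (a) the task recast as tabular regression: one row = one (series, day) pair
X = rng.normal(size=(5000, 12))            # engineered features: lags, prices, calendar, ...
y = 2.0 * X[:, 0] + rng.normal(size=5000)  # stand-in for daily unit sales

# (d) time-based validation: tune on the most recent days, never on random rows
split = 4000                               # rows assumed ordered chronologically
X_tr, y_tr, X_val, y_val = X[:split], y[:split], X[split:], y[split:]

# (c) two model families trained on the same task ...
gbt = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05)
gbt.fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
mlp.fit(X_tr, y_tr)

# ... blended for the point forecast (Accuracy track); equal weights shown
w = 0.5
point_forecast = w * gbt.predict(X_val) + (1 - w) * mlp.predict(X_val)
val_mae = np.abs(point_forecast - y_val).mean()  # the blend weight w would be tuned on this

# Quantile objectives give probabilistic forecasts (Uncertainty track)
quantile_forecasts = {}
for q in (0.25, 0.5, 0.75):
    m = lgb.LGBMRegressor(objective="quantile", alpha=q, n_estimators=300)
    m.fit(X_tr, y_tr)
    quantile_forecasts[q] = m.predict(X_val)
```

In practice the blend weight, like everything else here, would be chosen on the time-based validation set rather than fixed in advance.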
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the effect of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that existing machine unlearning techniques do not hold up in challenging evaluation settings.
arXiv Detail & Related papers (2024-10-30T17:20:10Z)
- MUSO: Achieving Exact Machine Unlearning in Over-Parameterized Regimes [19.664090734076712]
Machine unlearning (MU) makes a well-trained model behave as if it had never been trained on specific data.
We propose an alternating optimization algorithm that unifies the tasks of unlearning and relabeling.
The algorithm's effectiveness, confirmed through numerical experiments, highlights its superior performance in unlearning across various scenarios.
arXiv Detail & Related papers (2024-10-11T06:17:17Z)
- LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks [52.46420522934253]
We introduce LoRA-Ensemble, a parameter-efficient deep ensemble method for self-attention networks.
By employing a single pre-trained self-attention network with weights shared across all members, we train member-specific low-rank matrices for the attention projections (see the sketch after this entry).
Our method exhibits superior calibration compared to explicit ensembles and achieves similar or better accuracy across various prediction tasks and datasets.
arXiv Detail & Related papers (2024-05-23T11:10:32Z)
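To make the LoRA-Ensemble mechanism in the entry above concrete, here is a hedged PyTorch toy: one frozen, shared linear projection plus a trainable low-rank update per ensemble member. Class and parameter names are illustrative assumptions, not the authors' API.

```python
import torch
import torch.nn as nn

class LoRAEnsembleLinear(nn.Module):
    # Hypothetical module: a frozen shared projection with per-member
    # low-rank updates, standing in for one attention projection.
    def __init__(self, base: nn.Linear, n_members: int, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # shared pre-trained weights stay frozen
        out_f, in_f = base.weight.shape
        # LoRA-style init: A starts at zero so every member begins at the base model
        self.A = nn.Parameter(torch.zeros(n_members, out_f, rank))
        self.B = nn.Parameter(0.01 * torch.randn(n_members, rank, in_f))

    def forward(self, x: torch.Tensor, member: int) -> torch.Tensor:
        delta = self.A[member] @ self.B[member]  # (out_f, in_f) low-rank weight update
        return self.base(x) + x @ delta.T

proj = LoRAEnsembleLinear(nn.Linear(32, 32), n_members=8)
x = torch.randn(4, 32)
# Averaging member outputs gives the ensemble prediction; their spread is
# one way to read off predictive uncertainty.
preds = torch.stack([proj(x, m) for m in range(8)])
mean, std = preds.mean(0), preds.std(0)
```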
- ScatterUQ: Interactive Uncertainty Visualizations for Multiclass Deep Learning Problems [0.0]
ScatterUQ is an interactive system that provides targeted visualizations to allow users to better understand model performance in context-driven uncertainty settings.
We demonstrate the effectiveness of ScatterUQ in explaining model uncertainty for multiclass image classification with a distance-aware neural network trained on Fashion-MNIST.
Our results indicate that the ScatterUQ system should scale to arbitrary, multiclass datasets.
arXiv Detail & Related papers (2023-08-08T21:17:03Z)
- Fairness Uncertainty Quantification: How certain are you that the model is fair? [13.209748908186606]
In modern machine learning, Stochastic Gradient Descent (SGD) type algorithms are almost always used for training, implying that the learned model, and consequently its fairness properties, are random.
In this work we provide Confidence Intervals (CIs) for test unfairness when a group-fairness-aware linear binary classifier, specifically one aware of Disparate Impact (DI) or Disparate Mistreatment (DM), is trained using online SGD-type algorithms.
arXiv Detail & Related papers (2023-04-27T04:07:58Z)
- Bayesian graph convolutional neural networks via tempered MCMC [0.41998444721319217]
Deep learning models, such as convolutional neural networks, have long been applied to image and multi-media tasks.
More recently, there has been more attention to unstructured data that can be represented via graphs.
These types of data are often found in health and medicine, social networks, and research data repositories.
arXiv Detail & Related papers (2021-04-17T04:03:25Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Causality-aware counterfactual confounding adjustment for feature representations learned by deep models [14.554818659491644]
Causal modeling has been recognized as a potential solution to many challenging problems in machine learning (ML).
We describe how a recently proposed counterfactual approach can still be used to deconfound the feature representations learned by deep neural network (DNN) models.
arXiv Detail & Related papers (2020-04-20T17:37:36Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product Belief Propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass (see the sketch after this entry).
We scale training with a novel loss function and centroid updating scheme, and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
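A rough sketch of the single-pass, distance-based rejection idea in the entry above: embeddings are scored against per-class centroids with an RBF kernel, and a low maximum score flags an input as out-of-distribution. The encoder, centroids, sigma and threshold here are all assumptions for illustration.

```python
import torch

def rbf_scores(z: torch.Tensor, centroids: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    d2 = torch.cdist(z, centroids) ** 2        # squared distance to each class centroid
    return torch.exp(-d2 / (2 * sigma ** 2))   # (N, C) kernel scores in (0, 1]

z = torch.randn(10, 64)          # stand-in embeddings from a deterministic encoder
centroids = torch.randn(5, 64)   # one learned centroid per class
scores = rbf_scores(z, centroids)
confidence, pred = scores.max(dim=1)  # a single forward pass yields class and confidence
reject = confidence < 0.5             # threshold flags out-of-distribution inputs
```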
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample in order to assign optimal weights to unlabeled queries (see the sketch after this entry).
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
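As a loose illustration of the prototype update in the entry above, the sketch below refines each class prototype with a confidence-weighted mean of unlabeled query embeddings. The softmax-over-distances confidence is a fixed stand-in for the meta-learned one, and all shapes and names are assumptions.

```python
import torch
import torch.nn.functional as F

def update_prototypes(protos: torch.Tensor, queries: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """protos: (C, D) class prototypes; queries: (Q, D) unlabeled embeddings."""
    dists = torch.cdist(queries, protos)           # (Q, C) Euclidean distances
    conf = F.softmax(-dists / temperature, dim=1)  # soft assignments act as confidence weights
    weighted = conf.T @ queries                    # (C, D) confidence-weighted query mass
    norm = conf.sum(dim=0, keepdim=True).T         # (C, 1) total weight per class
    # Each refined prototype mixes the old prototype with the weighted query mean.
    return (protos + weighted) / (1.0 + norm)

protos = torch.randn(5, 64)      # 5-way episode, 64-dim embeddings
queries = torch.randn(20, 64)    # unlabeled query set
refined = update_prototypes(protos, queries)
```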
This list is automatically generated from the titles and abstracts of the papers on this site.