Machine Learning Approach and Extreme Value Theory to Correlated
Stochastic Time Series with Application to Tree Ring Data
- URL: http://arxiv.org/abs/2301.11488v1
- Date: Fri, 27 Jan 2023 01:44:43 GMT
- Title: Machine Learning Approach and Extreme Value Theory to Correlated
Stochastic Time Series with Application to Tree Ring Data
- Authors: Omar Alzeley, Sadiah Aljeddani
- Abstract summary: Tree ring growth has been used in a range of applications, for example, dating historic buildings and studying past environments.
The purpose of this paper is to use ML algorithms and Extreme Value Theory in order to analyse a set of tree ring widths data from nine trees growing in Nottinghamshire.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The main goal of machine learning (ML) is to study and improve mathematical
models which can be trained with data provided by the environment to infer the
future and to make decisions without necessarily having complete knowledge of
all influencing elements. In this work, we describe how ML can be a powerful
tool in studying climate modelling. Tree ring growth has been used in a range
of applications, for example, dating historic buildings and studying past
environments. Each year, as a tree grows, it adds a new layer of wood beneath
its bark, so after many years of growth the sequence of tree ring widths forms
a time series. The purpose of this paper is to use ML algorithms and Extreme
Value Theory to analyse a set of tree ring width data from nine trees growing
in Nottinghamshire. Initially, we explore the data through a variety of
descriptive statistical approaches. Transforming the data at this stage is
important for detecting any problems in the modelling algorithms. We then use
algorithm tuning and ensemble methods to improve the k-nearest neighbours
(KNN) algorithm. A comparison between the method developed in this study and
other methods is carried out, and the extreme values of the dataset are
investigated further. The results of the analysis show that, among the ML
algorithms, the Random Forest method gives the most accurate results on the
tree ring width data from the nine Nottinghamshire trees, with the lowest Root
Mean Square Error value. We also notice that as the assumed ARMA model order
increases, the probability of selecting the true model also increases. In
terms of Extreme Value Theory, the Weibull distribution is a good choice for
modelling tree ring data.
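The pipeline described in the abstract can be sketched in Python. This is a minimal, hypothetical illustration, not the authors' code: the synthetic AR(1)-style series stands in for the Nottinghamshire ring-width data (which is not reproduced here), and the specific hyperparameter grids, lag count, and 25-year block size are assumptions for the sake of the example.

```python
# Hypothetical sketch of the abstract's pipeline: tune KNN, compare it with a
# Random Forest by RMSE, and fit a Weibull distribution to ring-width block
# maxima. The simulated series below is a stand-in for the real data.
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(42)

# Simulate a correlated ring-width series (AR(1) with a positive mean level).
n = 500
widths = np.empty(n)
widths[0] = 1.0
for t in range(1, n):
    widths[t] = 0.2 + 0.7 * widths[t - 1] + rng.normal(0.0, 0.1)

# Frame one-step-ahead forecasting as supervised learning on lagged widths.
lags = 3
X = np.column_stack([widths[i:n - lags + i] for i in range(lags)])
y = widths[lags:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, shuffle=False)

# Algorithm tuning for KNN via grid search (grid is an assumption).
knn = GridSearchCV(
    KNeighborsRegressor(),
    {"n_neighbors": [3, 5, 7, 11], "weights": ["uniform", "distance"]},
    cv=5,
).fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

rmse_knn = mean_squared_error(y_te, knn.predict(X_te)) ** 0.5
rmse_rf = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
print(f"KNN RMSE: {rmse_knn:.4f}  RF RMSE: {rmse_rf:.4f}")

# Extreme Value Theory step: fit a Weibull distribution to block maxima
# (here, the widest ring in each 25-year block; block length is assumed).
block_maxima = widths.reshape(-1, 25).max(axis=1)
shape, loc, scale = stats.weibull_min.fit(block_maxima, floc=0)
print(f"Weibull shape={shape:.3f}, scale={scale:.3f}")
```

Comparing the two models on the same held-out tail of the series mirrors the paper's RMSE-based comparison; the Weibull fit with `floc=0` reflects the common choice of a two-parameter Weibull for strictly positive widths.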
Related papers
- Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs [72.89652710634051]
Knowledge graphs (KGs) complement Large Language Models (LLMs) by providing reliable, structured, domain-specific, and up-to-date external knowledge.
We introduce Tree-of-Traversals, a novel zero-shot reasoning algorithm that enables augmentation of black-box LLMs with one or more KGs.
arXiv Detail & Related papers (2024-07-31T06:01:24Z) - SETAR-Tree: A Novel and Accurate Tree Algorithm for Global Time Series
Forecasting [7.206754802573034]
In this paper, we explore the close connections between TAR models and regression trees.
We introduce a new forecasting-specific tree algorithm that trains global Pooled Regression (PR) models in the leaves.
In our evaluation, the proposed tree and forest models are able to achieve significantly higher accuracy than a set of state-of-the-art tree-based algorithms.
arXiv Detail & Related papers (2022-11-16T04:30:42Z) - An Approximation Method for Fitted Random Forests [0.0]
We study methods that approximate each fitted tree in the Random Forests model using the multinomial allocation of the data points to the leafs.
Specifically, we begin by studying whether fitting a multinomial logistic regression helps reduce the size while preserving the prediction quality.
arXiv Detail & Related papers (2022-07-05T17:28:52Z) - Hierarchical Shrinkage: improving the accuracy and interpretability of
tree-based methods [10.289846887751079]
We introduce Hierarchical Shrinkage (HS), a post-hoc algorithm that does not modify the tree structure.
HS substantially increases the predictive performance of decision trees, even when used in conjunction with other regularization techniques.
All code and models are released in a full-fledged package available on Github.
arXiv Detail & Related papers (2022-02-02T02:43:23Z) - Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation so can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z) - Active-LATHE: An Active Learning Algorithm for Boosting the Error
Exponent for Learning Homogeneous Ising Trees [75.93186954061943]
We design and analyze an algorithm that boosts the error exponent by at least 40% when $\rho$ is at least $0.8$.
Our analysis hinges on judiciously exploiting the minute but detectable statistical variation of the samples to allocate more data to parts of the graph.
arXiv Detail & Related papers (2021-10-27T10:45:21Z) - Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and
Personalized Federated Learning [56.17603785248675]
Model-agnostic meta-learning (MAML) has become a popular research area.
Existing MAML algorithms rely on the 'episode' idea by sampling a few tasks and data points to update the meta-model at each iteration.
This paper proposes memory-based algorithms for MAML that converge with vanishing error.
arXiv Detail & Related papers (2021-06-09T08:47:58Z) - Chow-Liu++: Optimal Prediction-Centric Learning of Tree Ising Models [30.62276301751904]
We introduce a new algorithm that combines elements of the Chow-Liu algorithm with tree metric reconstruction methods.
Our algorithm is robust to model misspecification and adversarial corruptions.
arXiv Detail & Related papers (2021-06-07T21:09:29Z) - Dive into Decision Trees and Forests: A Theoretical Demonstration [0.0]
Decision trees use the strategy of "divide-and-conquer" to divide a complex problem on the dependency between input features and labels into smaller ones.
Recent advances have greatly improved their performance in computational advertising, recommender system, information retrieval, etc.
arXiv Detail & Related papers (2021-01-20T16:47:59Z) - Growing Deep Forests Efficiently with Soft Routing and Learned
Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a., soft routing, rather than hard binary decisions.
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve better or comparable performance than [1],[3].
arXiv Detail & Related papers (2020-12-29T18:05:05Z) - MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.