An autoencoder wavelet based deep neural network with attention
mechanism for multistep prediction of plant growth
- URL: http://arxiv.org/abs/2012.04041v1
- Date: Mon, 7 Dec 2020 20:30:39 GMT
- Title: An autoencoder wavelet based deep neural network with attention
mechanism for multistep prediction of plant growth
- Authors: Bashar Alhnaity, Stefanos Kollias, Georgios Leontidis, Shouyong Jiang,
Bert Schamp, Simon Pearson
- Abstract summary: This paper presents a novel approach for predicting plant growth in agriculture, focusing on prediction of plant Stem Diameter Variations (SDV).
Wavelet decomposition is applied to the original data, so as to facilitate model fitting and reduce noise.
An encoder-decoder framework is developed using Long Short Term Memory (LSTM) and used for appropriate feature extraction from the data.
A recurrent neural network including LSTM and an attention mechanism is proposed for modelling long-term dependencies in the time series data.
- Score: 4.077787659104315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-step prediction is considered of major significance for time series
analysis in many real life problems. Existing methods mainly focus on
one-step-ahead forecasting, since multiple step forecasting generally fails due
to accumulation of prediction errors. This paper presents a novel approach for
predicting plant growth in agriculture, focusing on prediction of plant Stem
Diameter Variations (SDV). The proposed approach consists of three main steps.
At first, wavelet decomposition is applied to the original data, so as to
facilitate model fitting and reduce noise. Then an encoder-decoder
framework is developed using Long Short Term Memory (LSTM) and used for
appropriate feature extraction from the data. Finally, a recurrent neural
network including LSTM and an attention mechanism is proposed for modelling
long-term dependencies in the time series data. Experimental results are
presented which illustrate the good performance of the proposed approach and
that it significantly outperforms the existing models, in terms of error
criteria such as RMSE, MAE and MAPE.
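The first step of the pipeline above, wavelet-based denoising, can be sketched in miniature. The paper does not specify the wavelet family, decomposition depth, or thresholding rule, so the single-level Haar transform and soft threshold below are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of wavelet-based denoising as in step one of the proposed
# approach. Assumptions (not from the paper): Haar wavelet, one decomposition
# level, soft-thresholding of the detail coefficients.

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient lists; len(x) must be even.
    """
    s = 2 ** 0.5
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: perfectly reconstructs the original signal."""
    s = 2 ** 0.5
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / s)
        x.append((a - d) / s)
    return x

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; small (noisy) details vanish."""
    return [max(abs(c) - t, 0.0) * (1.0 if c > 0 else -1.0) for c in coeffs]

def denoise(x, t=0.5):
    """Decompose, threshold the detail coefficients, reconstruct."""
    approx, detail = haar_dwt(x)
    return haar_idwt(approx, soft_threshold(detail, t))
```

In a full pipeline the denoised series, rather than the raw SDV measurements, would then be fed to the LSTM encoder-decoder for feature extraction.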
Related papers
- MGCP: A Multi-Grained Correlation based Prediction Network for Multivariate Time Series [54.91026286579748]
We propose a Multi-Grained Correlations-based Prediction Network.
It simultaneously considers correlations at three levels to enhance prediction performance.
It employs adversarial training with an attention-mechanism-based predictor and a conditional discriminator to optimize prediction results at the coarse-grained level.
arXiv Detail & Related papers (2024-05-30T03:32:44Z) - Embedded feature selection in LSTM networks with multi-objective
evolutionary ensemble learning for time series forecasting [49.1574468325115]
We present a novel feature selection method embedded in Long Short-Term Memory networks.
Our approach optimizes the weights and biases of the LSTM in a partitioned manner.
Experimental evaluations on air quality time series data from Italy and southeast Spain demonstrate that our method substantially improves the generalization ability of conventional LSTMs.
arXiv Detail & Related papers (2023-12-29T08:42:10Z) - Perceiver-based CDF Modeling for Time Series Forecasting [25.26713741799865]
We propose a new architecture, called perceiver-CDF, for modeling cumulative distribution functions (CDF) of time series data.
Our approach combines the perceiver architecture with a copula-based attention mechanism tailored for multimodal time series prediction.
Experiments on unimodal and multimodal benchmarks consistently demonstrate a 20% improvement over state-of-the-art methods.
arXiv Detail & Related papers (2023-10-03T01:13:17Z) - Long-term drought prediction using deep neural networks based on geospatial weather data [75.38539438000072]
High-quality drought forecasting up to a year in advance is critical for agriculture planning and insurance.
We tackle drought forecasting by introducing a systematic end-to-end approach.
A key finding is the exceptional performance of the Transformer-based EarthFormer model in making accurate short-term (up to six months) forecasts.
arXiv Detail & Related papers (2023-09-12T13:28:06Z) - Generative Time Series Forecasting with Diffusion, Denoise, and
Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
It is common that real-world time series are recorded over a short time period, which results in a significant gap between deep models and the limited, noisy data.
We propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z) - An Attention Free Long Short-Term Memory for Time Series Forecasting [0.0]
We focus on time series forecasting with an attention-free mechanism, a more efficient framework, and propose a new architecture for time series prediction.
The proposed architecture is built from attention-free LSTM layers and outperforms linear models for conditional variance prediction.
arXiv Detail & Related papers (2022-09-20T08:23:49Z) - Probabilistic AutoRegressive Neural Networks for Accurate Long-range
Forecasting [6.295157260756792]
We introduce the Probabilistic AutoRegressive Neural Networks (PARNN).
PARNN is capable of handling complex time series data exhibiting non-stationarity, nonlinearity, non-seasonality, long-range dependence, and chaotic patterns.
We evaluate the performance of PARNN against standard statistical, machine learning, and deep learning models, including Transformers, NBeats, and DeepAR.
arXiv Detail & Related papers (2022-04-01T17:57:36Z) - Meta-Forecasting by combining Global DeepRepresentations with Local
Adaptation [12.747008878068314]
We introduce a novel forecasting method called Meta Global-Local Auto-Regression (Meta-GLAR).
It adapts to each time series by learning in closed-form the mapping from the representations produced by a recurrent neural network (RNN) to one-step-ahead forecasts.
Our method is competitive with the state-of-the-art in out-of-sample forecasting accuracy reported in earlier work.
arXiv Detail & Related papers (2021-11-05T11:45:02Z) - Feature-weighted Stacking for Nonseasonal Time Series Forecasts: A Case
Study of the COVID-19 Epidemic Curves [0.0]
We investigate ensembling techniques in forecasting and examine their potential for use in nonseasonal time-series.
We propose using late data fusion, using a stacked ensemble of two forecasting models and two meta-features that prove their predictive power during a preliminary forecasting stage.
arXiv Detail & Related papers (2021-08-19T14:44:46Z) - Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce an importance-guided stochastic gradient descent (IGSGD) method to train models to perform inference on inputs containing missing values, without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z) - Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
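The attention mechanism that recurs across these papers (the headline paper's attention module over LSTM states, and THP's self-attention) reduces, in its simplest form, to scaled dot-product attention. The sketch below is a generic illustration, not any specific paper's implementation:

```python
# Generic scaled dot-product attention over a sequence of hidden states.
# This is a textbook illustration; the papers above use learned projections
# and multi-head variants on top of this core operation.
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(query, keys, values):
    """Weight each value by the similarity of its key to the query.

    query: list[float]; keys, values: list[list[float]].
    Returns (context_vector, attention_weights).
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return context, weights
```

In a forecasting model, `keys` and `values` would typically be the encoder's hidden states over past time steps, and `query` the decoder state at the step being predicted, letting the model attend selectively to relevant history.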
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.