KENN: Enhancing Deep Neural Networks by Leveraging Knowledge for Time
Series Forecasting
- URL: http://arxiv.org/abs/2202.03903v2
- Date: Wed, 9 Feb 2022 11:31:34 GMT
- Title: KENN: Enhancing Deep Neural Networks by Leveraging Knowledge for Time
Series Forecasting
- Authors: Muhammad Ali Chattha, Ludger van Elst, Muhammad Imran Malik, Andreas
Dengel, Sheraz Ahmed
- Abstract summary: We propose a novel knowledge fusion architecture, Knowledge Enhanced Neural Network (KENN), for time series forecasting.
We show that KENN not only reduces the data dependency of the overall framework but also improves performance, producing predictions that are better than those of purely knowledge-driven or purely data-driven approaches.
- Score: 6.652753636450873
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: End-to-end data-driven machine learning methods often have exorbitant
requirements in terms of the quality and quantity of training data, which are
often impractical to fulfill in real-world applications. This is especially
true in the time series domain, where problems like disaster prediction,
anomaly detection, and demand prediction often do not have a large amount of
historical data. Moreover, relying purely on past examples for training can be
sub-optimal, since doing so ignores one very important source of information,
namely domain knowledge, which has its own distinct advantages. In this paper,
we propose a novel knowledge fusion architecture, Knowledge Enhanced Neural
Network (KENN), for time series forecasting that specifically aims to combine
the strengths of the knowledge and data domains while mitigating their
individual weaknesses. We show that KENN not only reduces the data dependency
of the overall framework but also improves performance by producing predictions
that are better than those produced by purely knowledge-driven or purely
data-driven approaches. We also compare KENN with state-of-the-art forecasting
methods and show that predictions produced by KENN are significantly better
even when trained on only 50% of the data.
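The abstract describes the fusion only at a conceptual level. Below is a minimal sketch of how such a knowledge/data fusion can be wired for one-step forecasting, assuming a seasonal-naive rule as the knowledge branch and a learned gate for blending; the class name, the rule, and the gating scheme are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class KnowledgeFusionForecaster(nn.Module):
    """Illustrative knowledge/data fusion for one-step forecasting.

    A rule-based "knowledge" forecast (here: seasonal naive, i.e. repeat
    the value from one season ago) is refined by a learned correction and
    blended through a gate that is itself learned from data. A sketch of
    the fusion idea only, not the architecture from the paper.
    """

    def __init__(self, window: int, season: int, hidden: int = 32):
        super().__init__()
        self.season = season
        self.residual = nn.Sequential(   # data-driven correction
            nn.Linear(window, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        self.gate = nn.Sequential(       # learned fusion weight in [0, 1]
            nn.Linear(window, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid()
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window) of past observations
        knowledge = x[:, -self.season].unsqueeze(1)  # seasonal-naive forecast
        data = knowledge + self.residual(x)          # NN refines the rule
        alpha = self.gate(x)                         # trust in the data branch
        return alpha * data + (1 - alpha) * knowledge

model = KnowledgeFusionForecaster(window=24, season=12)
y_hat = model(torch.randn(8, 24))  # (8, 1) fused forecasts
```

With little training data the gate can stay near the knowledge branch; as data grows, the data-driven branch can take over, which mirrors the reduced data dependency the abstract claims.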
Related papers
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation [2.517043342442487]
Deep generative learning uses certain ML models to learn the underlying distribution of existing data and generate synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigate whether this augmentation leads to improved accuracy in the predictions of a deep neural network (DNN) model trained on the augmented data.
arXiv Detail & Related papers (2024-10-24T18:15:48Z)
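As a rough illustration of the VAE-based augmentation idea: train a small VAE on the available feature vectors, then draw synthetic samples by decoding draws from the prior. Layer sizes, the Gaussian decoder, and all names are assumptions, not the study's configuration.

```python
import torch
import torch.nn as nn

class TabularVAE(nn.Module):
    """Minimal VAE for augmenting a small feature dataset (illustrative)."""

    def __init__(self, dim: int, latent: int = 8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU())
        self.mu, self.logvar = nn.Linear(64, latent), nn.Linear(64, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Linear(64, dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # reconstruction error + KL divergence to the standard normal prior
    rec = ((recon - x) ** 2).sum(dim=1).mean()
    kld = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()
    return rec + kld

# After training, synthetic samples come from decoding prior draws:
# synthetic = vae.dec(torch.randn(n_new, latent))
```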
- F-FOMAML: GNN-Enhanced Meta-Learning for Peak Period Demand Forecasting with Proxy Data [65.6499834212641]
We formulate the demand prediction as a meta-learning problem and develop the Feature-based First-Order Model-Agnostic Meta-Learning (F-FOMAML) algorithm.
By considering domain similarities through task-specific metadata, our model improves generalization, with the excess risk decreasing as the number of training tasks increases.
Compared to existing state-of-the-art models, our method demonstrates a notable improvement in demand prediction accuracy, reducing the Mean Absolute Error by 26.24% on an internal vending machine dataset and by 1.04% on the publicly accessible JD.com dataset.
arXiv Detail & Related papers (2024-06-23T21:28:50Z)
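For orientation, the sketch below shows the generic first-order MAML (FOMAML) update that F-FOMAML builds on, applied to a plain linear forecaster; the GNN feature encoder and task-metadata conditioning from the paper are omitted, and the names and learning rates are assumptions.

```python
import torch

def fomaml_step(w, tasks, inner_lr=1e-2, outer_lr=1e-3):
    """One first-order MAML (FOMAML) outer update for a linear forecaster.

    Each task is ((x_support, y_support), (x_query, y_query)). The outer
    gradient is evaluated at the adapted weights (first-order) and applied
    directly to the shared initialization `w`.
    """
    meta_grad = torch.zeros_like(w)
    for (xs, ys), (xq, yq) in tasks:
        # Inner step: adapt a copy of the parameters on the support set.
        w_fast = w.detach().clone().requires_grad_(True)
        support_loss = ((xs @ w_fast - ys) ** 2).mean()
        (g,) = torch.autograd.grad(support_loss, w_fast)
        w_adapted = (w_fast - inner_lr * g).detach().requires_grad_(True)
        # Outer gradient on the query set, at the adapted weights.
        query_loss = ((xq @ w_adapted - yq) ** 2).mean()
        (g_outer,) = torch.autograd.grad(query_loss, w_adapted)
        meta_grad += g_outer
    return w - outer_lr * meta_grad / len(tasks)

# Usage sketch: w = torch.randn(16, 1); tasks = [((Xs, Ys), (Xq, Yq)), ...]
# for _ in range(n_meta_epochs): w = fomaml_step(w, tasks)
```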
- Bayesian Neural Networks with Domain Knowledge Priors [52.80929437592308]
We propose a framework for integrating general forms of domain knowledge into a BNN prior.
We show that BNNs using our proposed domain knowledge priors outperform those with standard priors.
arXiv Detail & Related papers (2024-02-20T22:34:53Z)
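One simple way to realize such a knowledge-informed prior, sketched under assumptions (the non-negativity constraint, the penalty scale, and the MAP-style use are illustrative, not the paper's construction):

```python
import torch

def knowledge_log_prior(params, model, x_ref, scale=10.0):
    """Hypothetical domain-knowledge prior term for a (B)NN.

    Combines a standard Gaussian weight prior with a term that down-weights
    parameter settings whose predictions violate a known constraint (here:
    forecasts must be non-negative on reference inputs x_ref).
    """
    gaussian = -0.5 * sum((p ** 2).sum() for p in params)  # N(0, 1) prior
    violation = torch.relu(-model(x_ref)).sum()            # negativity = violation
    return gaussian - scale * violation

# In MAP training (or inside a variational objective), subtract this
# log-prior from the data loss so knowledge-violating weights are penalized:
# loss = nll(model(x), y) - knowledge_log_prior(list(model.parameters()),
#                                               model, x_ref)
```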
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe instead that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage this insight in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
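A hedged sketch of how the reversion-to-constant observation could feed a risk-sensitive decision rule; approximating the constant by the training-label mean, the tolerance, and the fallback policy are all assumptions.

```python
import torch

def cautious_forecast(pred, train_mean, fallback, tol=0.1):
    """Hypothetical risk-sensitive rule built on reversion-to-constant.

    If the model output sits close to the constant it reverts to under
    distribution shift (approximated here by the training-label mean),
    treat the input as likely OOD and substitute a conservative fallback.
    """
    near_constant = (pred - train_mean).abs() < tol
    return torch.where(near_constant, fallback, pred)

# Usage sketch:
# y_hat = cautious_forecast(model(x), y_train.mean(),
#                           fallback=y_train.median(), tol=0.05)
```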
- Augmented Bilinear Network for Incremental Multi-Stock Time-Series Classification [83.23129279407271]
We propose a method to efficiently retain the knowledge available in a neural network pre-trained on a set of securities.
In our method, the prior knowledge encoded in a pre-trained neural network is maintained by keeping existing connections fixed.
This knowledge is adjusted for the new securities by a set of augmented connections, which are optimized using the new data.
arXiv Detail & Related papers (2022-07-23T18:54:10Z)
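A minimal sketch of the retain-and-augment idea, assuming an additive trainable branch on top of a frozen pre-trained linear layer (the paper's exact bilinear layer is not reproduced here):

```python
import torch
import torch.nn as nn

class AugmentedLinear(nn.Module):
    """Frozen pre-trained connections plus trainable augmented ones."""

    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.base = pretrained
        for p in self.base.parameters():
            p.requires_grad_(False)          # keep prior knowledge fixed
        self.aug = nn.Linear(pretrained.in_features,
                             pretrained.out_features, bias=False)
        nn.init.zeros_(self.aug.weight)      # start from the old behavior

    def forward(self, x):
        return self.base(x) + self.aug(x)    # only self.aug receives gradients
```

Zero-initializing the augmented branch means the network initially reproduces the pre-trained predictions exactly, and the new securities only perturb it as far as their data warrants.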
- Improving Neural Networks for Time Series Forecasting using Data Augmentation and AutoML [0.0]
This paper presents an easy to implement data augmentation method to significantly improve the performance of neural networks.
It shows that data augmentation, when paired with Automated Machine Learning techniques such as Neural Architecture Search, can help to find the best neural architecture for a given time series.
arXiv Detail & Related papers (2021-03-02T19:20:49Z)
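For concreteness, a small sketch of common time-series augmentations of the kind such a method might apply (jittering and magnitude scaling; the specific transforms and noise levels are assumptions, not the paper's list):

```python
import numpy as np

def augment_series(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Jittering (additive noise) plus magnitude scaling of one window."""
    jitter = rng.normal(0.0, 0.03 * x.std(), size=x.shape)  # small noise
    scale = rng.normal(1.0, 0.1)                            # global rescale
    return scale * (x + jitter)

rng = np.random.default_rng(0)
window = np.sin(np.linspace(0, 6.28, 48))                   # toy series window
augmented = [augment_series(window, rng) for _ in range(10)]  # 10 new windows
```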
- Domain Knowledge Empowered Structured Neural Net for End-to-End Event Temporal Relation Extraction [44.95973272921582]
We propose a framework that enhances a deep neural network with distributional constraints constructed from probabilistic domain knowledge.
We solve the constrained inference problem via Lagrangian Relaxation and apply it to end-to-end event temporal relation extraction tasks.
arXiv Detail & Related papers (2020-09-15T22:20:27Z)
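A generic Lagrangian-relaxation step of the kind described might look like the sketch below; the distributional-violation term, the multiplier update rule, and the step size are assumptions.

```python
import torch

def constrained_loss(task_loss, violation, lam, lam_lr=0.05):
    """Lagrangian-relaxation step for a distributional constraint.

    The constraint (e.g. "predicted relation frequencies should match
    corpus statistics") is expressed as a non-negative violation term;
    its multiplier `lam` is updated by dual ascent while the network
    minimizes the relaxed objective.
    """
    loss = task_loss + lam * violation
    with torch.no_grad():
        lam = torch.clamp(lam + lam_lr * violation, min=0.0)  # dual ascent
    return loss, lam

# Training-loop sketch:
# lam = torch.tensor(0.0)
# violation = (pred_freq - prior_freq).abs().sum()  # distributional gap
# loss, lam = constrained_loss(ce_loss, violation, lam)
# loss.backward()
```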
- Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of improving the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z)
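In the spirit of this logic-based KENN's clause enhancers, a simplified sketch: given logits for the predicates of a clause, nudge the literal most likely to satisfy it upward. The exact enhancement function here is an assumption, not the paper's formulation.

```python
import torch

def clause_enhancement(z, clause_signs, weight=1.0):
    """Soft boost for one propositional clause (simplified sketch).

    z: pre-activations (logits) for the clause's predicates.
    clause_signs: +1 for positive literals, -1 for negated ones; e.g.
    A -> B is the clause (not A) or B, signs = [-1, +1]. A clause is
    satisfied if any literal holds, so the boost concentrates on the
    literal that is already most likely to be true.
    """
    signs = torch.as_tensor(clause_signs, dtype=z.dtype)
    literal_logits = signs * z                   # logit of each literal
    boost = weight * torch.softmax(literal_logits, dim=-1)
    return z + signs * boost                     # nudge toward satisfaction

logits = torch.tensor([2.0, -1.0])               # A likely, B unlikely
enhanced = clause_enhancement(logits, [-1, +1])  # raises B, lowers A slightly
```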
- A Survey on Knowledge integration techniques with Artificial Neural Networks for seq-2-seq/time series models [0.456877715768796]
Deep neural networks have enabled the exploration of uncharted areas in several domains.
However, they under-perform when data is insufficient, of poor quality, or does not cover the domain broadly.
This paper explores techniques to integrate expert knowledge into deep neural networks for sequence-to-sequence and time series models.
arXiv Detail & Related papers (2020-08-13T15:40:38Z)
- Temporal Calibrated Regularization for Robust Noisy Label Learning [60.90967240168525]
Deep neural networks (DNNs) exhibit great success on many tasks with the help of large-scale well annotated datasets.
However, labeling large-scale data can be very costly and error-prone, making it difficult to guarantee annotation quality.
We propose Temporal Calibrated Regularization (TCR), which uses the original labels together with the model's predictions from the previous epoch.
arXiv Detail & Related papers (2020-07-01T04:48:49Z)
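A hedged sketch of that label/previous-prediction mixing idea (the mixing rule and coefficient are assumptions, not the exact TCR formulation):

```python
import torch
import torch.nn.functional as F

def tcr_style_loss(logits, labels, prev_probs, beta=0.7, num_classes=10):
    """Illustrative temporal-calibration-style loss for noisy labels.

    Blends the (possibly noisy) one-hot labels with the model's own
    softmax outputs cached from the previous epoch, then applies a soft
    cross-entropy against the blended target.
    """
    one_hot = F.one_hot(labels, num_classes).float()
    target = beta * one_hot + (1 - beta) * prev_probs  # self-calibrated target
    log_probs = F.log_softmax(logits, dim=-1)
    return -(target * log_probs).sum(dim=-1).mean()    # soft cross-entropy

# prev_probs are per-sample softmax outputs cached after the previous epoch:
# prev_probs[idx] = F.softmax(logits.detach(), dim=-1)
```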