A Survey on Knowledge integration techniques with Artificial Neural
Networks for seq-2-seq/time series models
- URL: http://arxiv.org/abs/2008.05972v1
- Date: Thu, 13 Aug 2020 15:40:38 GMT
- Title: A Survey on Knowledge integration techniques with Artificial Neural
Networks for seq-2-seq/time series models
- Authors: Pramod Vadiraja and Muhammad Ali Chattha
- Abstract summary: Deep neural networks have enabled the exploration of uncharted areas in several domains.
But at times they under-perform due to insufficient data, poor data quality, or data that does not cover the domain broadly.
This paper explores techniques to integrate expert knowledge into deep neural networks for sequence-to-sequence and time series models.
- Score: 0.456877715768796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, with the advent of massive computational power and the
availability of huge amounts of data, deep neural networks have enabled the
exploration of uncharted areas in several domains. At times, however, they
under-perform due to insufficient data, poor data quality, data that does not
cover the domain broadly, etc. Knowledge-based systems leverage expert
knowledge to make decisions and take suitable actions. Such systems retain
interpretability in the decision-making process. This paper focuses on
exploring techniques to integrate expert knowledge into deep neural networks
for sequence-to-sequence and time series models to improve their performance
and interpretability.
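One common form of the integration the abstract describes is feature-level knowledge injection: encode an expert rule as an extra input feature and concatenate it with lagged observations before they reach the learned model. The sketch below is purely illustrative and not taken from the survey; all names and the toy seasonality rule are hypothetical.

```python
# Minimal sketch of feature-level knowledge injection for a time series
# model: a domain heuristic (a weekly "weekend" flag) is concatenated with
# raw lagged values to form the model input. Illustrative only.

def expert_features(t):
    """Domain knowledge encoded as a feature: a weekly seasonality flag."""
    return [1.0 if t % 7 in (5, 6) else 0.0]  # "weekend" indicator

def build_input(series, t, n_lags=3):
    """Concatenate raw lagged observations with expert-knowledge features."""
    lags = series[t - n_lags:t]
    return list(lags) + expert_features(t)

series = [float(i % 7) for i in range(20)]  # toy series with a weekly pattern
x = build_input(series, t=10)
print(len(x))  # 3 lags + 1 expert feature -> 4
```

The downstream forecaster (recurrent network, seq2seq model, etc.) then consumes the augmented input, so the expert knowledge shapes its predictions without changing the architecture itself.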
Related papers
- Graph Neural Network for spatiotemporal data: methods and applications [7.612070518526342]
Graph neural networks (GNNs) have emerged as a powerful tool for understanding data with interdependencies.
This article provides an overview of the technologies and applications of GNNs in the spatiotemporal domain.
arXiv Detail & Related papers (2023-05-30T02:27:17Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z) - A Time Series Approach to Explainability for Neural Nets with
Applications to Risk-Management and Fraud Detection [0.0]
Trust in technology is enabled by understanding the rationale behind the predictions made.
For cross-sectional data, classical XAI approaches can yield valuable insights into the models' inner workings.
We propose a novel XAI technique for deep learning methods which preserves and exploits the natural time ordering of the data.
arXiv Detail & Related papers (2022-12-06T12:04:01Z) - Neural Architecture Search for Dense Prediction Tasks in Computer Vision [74.9839082859151]
Deep learning has led to a rising demand for neural network architecture engineering.
Neural architecture search (NAS) aims to design neural network architectures automatically, in a data-driven manner rather than manually.
NAS has become applicable to a much wider range of problems in computer vision.
arXiv Detail & Related papers (2022-02-15T08:06:50Z)
- KENN: Enhancing Deep Neural Networks by Leveraging Knowledge for Time
Series Forecasting [6.652753636450873]
We propose a novel knowledge fusion architecture, Knowledge Enhanced Neural Network (KENN), for time series forecasting.
We show that KENN not only reduces the data dependency of the overall framework but also improves performance, producing predictions better than those of purely knowledge-driven or purely data-driven approaches.
arXiv Detail & Related papers (2022-02-08T14:47:47Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Blending Knowledge in Deep Recurrent Networks for Adverse Event
Prediction at Hospital Discharge [15.174501264797309]
We introduce a learning architecture that fuses a representation of patient data computed by a self-attention based recurrent neural network, with clinically relevant features.
We conduct extensive experiments on a large claims dataset and show that the blended method outperforms the standard machine learning approaches.
arXiv Detail & Related papers (2021-04-09T14:07:45Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Discovering long term dependencies in noisy time series data using deep
learning [0.0]
Time series modelling is essential for solving tasks such as predictive maintenance, quality control and optimisation.
Deep learning is widely used to solve such problems.
In this paper, we develop a framework for capturing and explaining temporal dependencies in time series data using deep neural networks.
arXiv Detail & Related papers (2020-11-15T15:10:57Z)
- NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural
Network Synthesis [53.106414896248246]
We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applying this technique in an iterative manner allows analysts to converge to the best performing neural network architecture for a given application.
arXiv Detail & Related papers (2020-09-28T01:48:45Z)
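Several of the listed papers (KENN, Model-Based Deep Learning, Blending Knowledge in Deep Recurrent Networks) share a common fusion pattern: combine a knowledge-driven prediction with a data-driven component. The residual-fusion sketch below is a hypothetical illustration of that pattern, not any paper's actual method; both components are toy stand-ins.

```python
# Minimal residual-fusion sketch: an expert-rule forecast is corrected by a
# data-driven term, blended with a weight alpha. Both functions are toy
# stand-ins for the knowledge-based and learned components, respectively.

def knowledge_forecast(history):
    """Expert rule: persistence -- predict the last observed value."""
    return history[-1]

def data_driven_correction(history):
    """Stand-in for a learned model: here, the average recent trend."""
    diffs = [b - a for a, b in zip(history[:-1], history[1:])]
    return sum(diffs) / len(diffs)

def fused_forecast(history, alpha=0.5):
    """Blend the knowledge-driven forecast with the data-driven correction."""
    return knowledge_forecast(history) + alpha * data_driven_correction(history)

history = [1.0, 2.0, 3.0, 4.0]
print(fused_forecast(history))  # 4.0 + 0.5 * 1.0 -> 4.5
```

In the actual papers, the data-driven component is a trained neural network and the blending weight (or gating function) is itself learned; the sketch only conveys the overall structure of the fusion.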
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.