Adaptive Explainable Continual Learning Framework for Regression
Problems with Focus on Power Forecasts
- URL: http://arxiv.org/abs/2108.10781v1
- Date: Tue, 24 Aug 2021 14:59:10 GMT
- Title: Adaptive Explainable Continual Learning Framework for Regression
Problems with Focus on Power Forecasts
- Authors: Yujiang He
- Abstract summary: Two continual learning scenarios are proposed to describe the potential challenges in this context.
Deep neural networks have to learn new tasks while overcoming forgetting of the knowledge obtained from old tasks as the amount of data in applications keeps increasing.
Research topics include, but are not limited to, developing continual deep learning algorithms, strategies for non-stationarity detection in data streams, and explainable and visualizable artificial intelligence.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compared with traditional deep learning techniques, continual learning
enables deep neural networks to learn continually and adaptively. Deep neural
networks have to learn new tasks and overcome forgetting of the knowledge
obtained from old tasks as the amount of data keeps increasing in applications. In
this article, two continual learning scenarios will be proposed to describe the
potential challenges in this context. In addition, based on our previous work
regarding the CLeaR framework, which is short for continual learning for
regression tasks, the work will be further developed to enable models to extend
themselves and learn from data successively. Research topics include, but are
not limited to, developing continual deep learning algorithms, strategies for
non-stationarity detection in data streams, and explainable and visualizable
artificial intelligence. Moreover, the framework- and algorithm-related
hyperparameters should be dynamically updated in applications. Forecasting
experiments will be conducted based on power generation and consumption data
collected from real-world applications. A series of comprehensive evaluation
metrics and visualization tools can help analyze the experimental results. The
proposed framework is expected to be generally applied to other constantly
changing scenarios.
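The workflow sketched in this abstract can be pictured as a loop that monitors prediction error on streaming data, buffers samples flagged as novel, and retrains once enough novelty has accumulated, while also adapting its own threshold hyperparameter. The following is a minimal sketch under these assumptions; the linear model, threshold rule, buffer size, and update step are illustrative choices, not the framework's actual design.
```python
import numpy as np

class CLeaRStyleLearner:
    """Hypothetical sketch: a continual linear regressor with novelty buffering."""

    def __init__(self, n_features, tau=1.0, buffer_size=32, lr=0.01):
        self.w = np.zeros(n_features)   # linear model weights
        self.tau = tau                  # novelty threshold on absolute error
        self.buffer = []                # (x, y) pairs flagged as novel
        self.buffer_size = buffer_size
        self.lr = lr

    def predict(self, x):
        return self.w @ x

    def observe(self, x, y):
        # Errors above tau suggest non-stationarity; buffer those samples.
        if abs(self.predict(x) - y) > self.tau:
            self.buffer.append((x, y))
        if len(self.buffer) >= self.buffer_size:
            self._update()

    def _update(self):
        # Plain SGD over buffered novelties; a full framework would also
        # guard against forgetting (e.g. with a regularization term).
        for x, y in self.buffer:
            self.w -= self.lr * 2 * (self.w @ x - y) * x
        # Shrinking tau is one way to dynamically update a framework
        # hyperparameter, as the abstract suggests.
        self.tau = max(0.1, 0.95 * self.tau)
        self.buffer.clear()

# Usage on a synthetic stream with abrupt concept drift at t = 1000:
rng = np.random.default_rng(0)
model = CLeaRStyleLearner(n_features=3)
true_w = np.array([1.0, -2.0, 0.5])
for t in range(2000):
    if t == 1000:
        true_w = np.array([-1.0, 0.5, 2.0])
    x = rng.normal(size=3)
    model.observe(x, true_w @ x)
```
The abrupt coefficient switch at t = 1000 simulates a non-stationarity in the stream; the resulting error spike refills the novelty buffer and triggers retraining.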
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; and (iii) open up novel perspectives.
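To make point (i) concrete in a toy setting: reading learning "over time" as a gradient-flow ODE dw/dt = -grad L(w), one can integrate it with a hand-rolled explicit Euler step instead of an external solver. The sketch below does this for a least-squares loss; it is a generic illustration under that assumption, not the paper's actual Hamiltonian formulation.
```python
import numpy as np

# Toy gradient-flow ODE  dw/dt = -grad L(w)  for the least-squares loss
# L(w) = 0.5 * ||X w - y||^2, integrated with explicit Euler steps.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
w_true = np.array([2.0, -1.0, 0.5, 3.0])
y = X @ w_true

w = np.zeros(4)
dt = 0.01                      # Euler step size
for _ in range(500):
    grad = X.T @ (X @ w - y)   # gradient of the least-squares loss
    w = w - dt * grad          # one explicit Euler step, no external solver

print(np.allclose(w, w_true, atol=1e-2))  # True: the flow has converged
```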
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- When Meta-Learning Meets Online and Continual Learning: A Survey [39.53836535326121]
Meta-learning is a data-driven approach to optimizing the learning algorithm.
Continual learning and online learning both involve incrementally updating a model with streaming data.
This paper organizes various problem settings using consistent terminology and formal descriptions.
arXiv Detail & Related papers (2023-11-09T09:49:50Z)
- Advancing continual lifelong learning in neural information retrieval: definition, dataset, framework, and empirical evaluation [3.2340528215722553]
A systematic task formulation of continual neural information retrieval is presented.
A comprehensive continual neural information retrieval framework is proposed.
Empirical evaluations illustrate that the proposed framework can successfully prevent catastrophic forgetting in neural information retrieval.
arXiv Detail & Related papers (2023-08-16T14:01:25Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
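As a minimal, non-neural baseline of the same pipeline, one can identify a linear ARX model from input-output data by least squares and use it for one-step-ahead prediction; the networks surveyed replace this linear map with a learned one. The system and its coefficients below are invented for illustration.
```python
import numpy as np

# Minimal system-identification sketch: fit a linear ARX(1) model
#   y[t] = a * y[t-1] + b * u[t-1]
# from input-output data by least squares, then predict one step ahead.
rng = np.random.default_rng(2)
T = 500
u = rng.normal(size=T)                 # input signal
y = np.zeros(T)
for t in range(1, T):                  # simulate the "true" system
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.01 * rng.normal()

# Regression matrix of past observations -> least-squares fit
Phi = np.column_stack([y[:-1], u[:-1]])
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]

# One-step-ahead prediction from previous observations
y_pred = a_hat * y[:-1] + b_hat * u[:-1]
rmse = np.sqrt(np.mean((y_pred - y[1:]) ** 2))
print(f"a={a_hat:.3f}, b={b_hat:.3f}, RMSE={rmse:.4f}")
```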
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Learning and Retrieval from Prior Data for Skill-based Imitation Learning [47.59794569496233]
We develop a skill-based imitation learning framework that extracts temporally extended sensorimotor skills from prior data.
We identify several key design choices that significantly improve performance on novel tasks.
arXiv Detail & Related papers (2022-10-20T17:34:59Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts of up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- Design of Explainability Module with Experts in the Loop for Visualization and Dynamic Adjustment of Continual Learning [5.039779583329608]
Continual learning can enable neural networks to evolve by learning new tasks sequentially in task-changing scenarios.
Novelties arriving from the data stream in applications can contain anomalies that are meaningless for continual learning.
We propose the conceptual design of an explainability module with experts in the loop, based on techniques such as dimension reduction, visualization, and evaluation strategies.
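One plausible realization of the dimension-reduction step in such a module is to project a model's hidden activations to 2D so that an expert can visually compare old-task and new-task data. The sketch below uses plain PCA on made-up activation matrices; the module's actual techniques may differ.
```python
import numpy as np

def pca_2d(features):
    """Project high-dimensional features to 2D with plain PCA.

    Generic sketch of the dimension-reduction step an expert-in-the-loop
    module might use; the paper's actual techniques may differ.
    """
    centered = features - features.mean(axis=0)
    # Right singular vectors give the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

rng = np.random.default_rng(3)
old_task = rng.normal(loc=0.0, size=(200, 64))   # e.g. hidden activations
new_task = rng.normal(loc=2.0, size=(200, 64))   # shifted: possible novelty
proj = pca_2d(np.vstack([old_task, new_task]))
# An expert could scatter-plot `proj` and judge whether the new cluster
# is a meaningful novelty or a meaningless anomaly.
print(proj.shape)  # (400, 2)
```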
arXiv Detail & Related papers (2022-02-14T15:00:22Z)
- CLeaR: An Adaptive Continual Learning Framework for Regression Tasks [2.043835539102076]
Catastrophic forgetting means that a trained neural network model gradually forgets previously learned tasks when it is retrained on new tasks.
Numerous continual learning algorithms are very successful in incremental learning of classification tasks.
This article proposes a new methodological framework that can forecast targets and update itself by means of continual learning.
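A common way to let a model update itself without catastrophic forgetting is an EWC-style quadratic penalty that discourages moving weights that mattered for old tasks. Whether CLeaR uses exactly this regularizer is not stated here, so the following is only a generic sketch for a linear regressor.
```python
import numpy as np

# Generic anti-forgetting sketch: when fitting a new task, penalize
# movement of weights that were important on the old task,
#   loss = ||X_new w - y_new||^2 / n + lam * sum_i F_i (w_i - w_old_i)^2.
rng = np.random.default_rng(4)
X_old = rng.normal(size=(200, 5))
y_old = X_old @ np.array([1.0, 0.0, 2.0, 0.0, -1.0])
X_new = rng.normal(size=(200, 5))
y_new = X_new @ np.array([1.0, 3.0, 2.0, 0.0, -1.0])

w_old = np.linalg.lstsq(X_old, y_old, rcond=None)[0]   # old-task optimum
F = np.mean(X_old ** 2, axis=0)   # diagonal Fisher approx. for regression

lam, lr, w = 10.0, 0.05, w_old.copy()
for _ in range(2000):
    grad = 2 * X_new.T @ (X_new @ w - y_new) / len(y_new)  # new-task loss
    grad += 2 * lam * F * (w - w_old)   # pull important weights to w_old
    w -= lr * grad
print(np.round(w, 2))   # a compromise between the old and new solutions
```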
arXiv Detail & Related papers (2021-01-04T12:41:45Z)
- Continual Learning for Natural Language Generation in Task-oriented Dialog Systems [72.92029584113676]
Natural language generation (NLG) is an essential component of task-oriented dialog systems.
We study NLG in a "continual learning" setting to expand its knowledge to new domains or functionalities incrementally.
The major challenge towards this goal is catastrophic forgetting, meaning that a continually trained model tends to forget the knowledge it has learned before.
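A standard generic mitigation for such forgetting is experience replay: retain a small buffer of earlier examples and mix them into each new training batch. The sketch below shows only this generic mechanism with placeholder data and a hypothetical train_step; it is not the method proposed in the paper.
```python
import random

# Generic experience-replay sketch: mix stored old-domain examples into
# every new-domain batch so earlier knowledge keeps being rehearsed.
replay_buffer = [("old_input_%d" % i, "old_target_%d" % i) for i in range(100)]
new_domain = [("new_input_%d" % i, "new_target_%d" % i) for i in range(1000)]

random.seed(0)
batch_size, replay_ratio = 32, 0.25          # 25% of each batch is rehearsal
n_replay = int(batch_size * replay_ratio)
n_new = batch_size - n_replay

for step in range(0, len(new_domain), n_new):
    batch = new_domain[step:step + n_new] + random.sample(replay_buffer, n_replay)
    # train_step(model, batch)  # hypothetical training call
```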
arXiv Detail & Related papers (2020-10-02T10:32:29Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address the open problems in this area, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)