Learning Fast and Slow for Online Time Series Forecasting
- URL: http://arxiv.org/abs/2202.11672v1
- Date: Wed, 23 Feb 2022 18:23:07 GMT
- Title: Learning Fast and Slow for Online Time Series Forecasting
- Authors: Quang Pham, Chenghao Liu, Doyen Sahoo, Steven C.H. Hoi
- Abstract summary: Fast and Slow learning Networks (FSNet) is a holistic framework for online time-series forecasting.
FSNet balances fast adaptation to recent changes and retrieving similar old knowledge.
Our code will be made publicly available.
- Score: 76.50127663309604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The fast adaptation capability of deep neural networks in non-stationary
environments is critical for online time series forecasting. Successful
solutions require handling changes to new and recurring patterns. However,
training deep neural forecasters on the fly is notoriously challenging because
of their limited ability to adapt to non-stationary environments and the
catastrophic forgetting of old knowledge. In this work, inspired by the
Complementary Learning Systems (CLS) theory, we propose Fast and Slow learning
Networks (FSNet), a holistic framework for online time-series forecasting to
simultaneously deal with abruptly changing and repeating patterns. Particularly,
FSNet improves the slowly-learned backbone by dynamically balancing fast
adaptation to recent changes and retrieving similar old knowledge. FSNet
achieves this mechanism via an interaction between two complementary components:
an adapter that monitors each layer's contribution to the loss, and an
associative memory that supports remembering, updating, and recalling repeating
events. Extensive experiments on real and synthetic datasets validate FSNet's
efficacy and robustness to both new and recurring patterns. Our code will be
made publicly available.
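To make the adapter/memory interaction described in the abstract concrete, below is a minimal PyTorch sketch of a single "fast-and-slow" layer: the adapter maps an exponential moving average (EMA) of the layer's gradient to calibration coefficients, and an associative memory is consulted when the short- and long-term gradient EMAs diverge (a crude proxy for a recurring pattern). This is an illustrative approximation of the mechanism, not the authors' released implementation; the class name, shapes, and thresholds (FastSlowLinear, mem_slots, tau, etc.) are assumptions.

```python
# Hedged sketch of an FSNet-style layer: a slowly learned backbone layer,
# an adapter driven by gradient EMAs, and an associative memory of past
# adaptation coefficients. Names and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FastSlowLinear(nn.Module):
    def __init__(self, d_in, d_out, mem_slots=32,
                 gamma_fast=0.9, gamma_slow=0.99, tau=0.7):
        super().__init__()
        self.layer = nn.Linear(d_in, d_out)            # slowly learned backbone layer
        n_grad = d_out * d_in + d_out                  # flattened gradient size
        self.adapter = nn.Linear(n_grad, 2 * d_out)    # gradient EMA -> (scale, shift) coefficients
        self.register_buffer("g_fast", torch.zeros(n_grad))   # short-term gradient EMA
        self.register_buffer("g_slow", torch.zeros(n_grad))   # long-term gradient EMA
        self.register_buffer("memory", torch.zeros(mem_slots, 2 * d_out))  # associative memory
        self.gamma_fast, self.gamma_slow, self.tau = gamma_fast, gamma_slow, tau

    @torch.no_grad()
    def observe_gradient(self):
        """Update the gradient EMAs after loss.backward() on the current sample."""
        if self.layer.weight.grad is None:
            return
        g = torch.cat([self.layer.weight.grad.flatten(), self.layer.bias.grad.flatten()])
        self.g_fast.mul_(self.gamma_fast).add_(g, alpha=1 - self.gamma_fast)
        self.g_slow.mul_(self.gamma_slow).add_(g, alpha=1 - self.gamma_slow)

    def forward(self, x):
        coeff = self.adapter(self.g_fast)              # fast adaptation from recent gradients
        # If the short- and long-term EMAs disagree, blend in the closest stored
        # coefficients, standing in for "recalling" a previously seen regime.
        sim = F.cosine_similarity(self.g_fast, self.g_slow, dim=0)
        if sim < self.tau:
            scores = F.softmax(self.memory @ coeff / coeff.numel() ** 0.5, dim=0)
            recalled = scores @ self.memory
            coeff = 0.5 * (coeff + recalled)
            with torch.no_grad():                      # write the blended coefficients back
                slot = scores.argmax()
                self.memory[slot] = 0.9 * self.memory[slot] + 0.1 * coeff.detach()
        scale, shift = coeff.chunk(2)
        out = self.layer(x) * (1.0 + torch.tanh(scale))  # per-channel calibration of features
        return out + torch.tanh(shift)                   # small additive feature shift
```

In an online loop, one would call the layer on the newest window, compute the forecasting loss, backpropagate, call observe_gradient(), and step an optimizer over both the backbone and the adapter parameters.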
Related papers
- Adaptive control of recurrent neural networks using conceptors [1.9686770963118383]
Recurrent Neural Networks excel at predicting and generating complex high-dimensional temporal patterns.
In a Machine Learning setting, the network's parameters are adapted during a training phase to match the requirements of a given task/problem.
We demonstrate how keeping parts of the network adaptive even after the training enhances its functionality and robustness.
arXiv Detail & Related papers (2024-05-12T09:58:03Z) - Liquid Neural Network-based Adaptive Learning vs. Incremental Learning for Link Load Prediction amid Concept Drift due to Network Failures [37.66676003679306]
Adapting to concept drift is a challenging task in machine learning.
In communication networks, such an issue emerges when performing traffic forecasting following a failure event.
We propose an approach that exploits adaptive learning algorithms, namely, liquid neural networks, which are capable of self-adaptation to abrupt changes in data patterns without requiring any retraining.
arXiv Detail & Related papers (2024-04-08T08:47:46Z) - Rapid Network Adaptation: Learning to Adapt Neural Networks Using
Test-Time Feedback [12.946419909506883]
We create a closed-loop system that makes use of a test-time feedback signal to adapt a network on the fly.
We show that this loop can be effectively implemented using a learning-based function, which realizes an amortized optimizer for the network.
This leads to an adaptation method, named Rapid Network Adaptation (RNA), that is notably more flexible and orders of magnitude faster than the baselines.
arXiv Detail & Related papers (2023-09-27T16:20:39Z) - Recursive Least-Squares Estimator-Aided Online Learning for Visual
Tracking [58.14267480293575]
We propose a simple yet effective online learning approach for few-shot online adaptation without requiring offline training.
It allows an in-built memory retention mechanism for the model to remember the knowledge about the object seen before.
We evaluate our approach based on two networks in the online learning families for tracking, i.e., multi-layer perceptrons in RT-MDNet and convolutional neural networks in DiMP.
arXiv Detail & Related papers (2021-12-28T06:51:18Z) - Online learning of windmill time series using Long Short-term Cognitive
Networks [58.675240242609064]
The amount of data generated on windmill farms makes online learning the most viable strategy to follow.
We use Long Short-term Cognitive Networks (LSTCNs) to forecast windmill time series in online settings.
Our approach reported the lowest forecasting errors with respect to a simple RNN, a Long Short-term Memory, a Gated Recurrent Unit, and a Hidden Markov Model.
arXiv Detail & Related papers (2021-07-01T13:13:24Z) - Revisiting the double-well problem by deep learning with a hybrid
network [7.308730248177914]
We propose a novel hybrid network which integrates two different kinds of neural networks: LSTM and ResNet.
Such a hybrid network can be applied for solving cooperative dynamics in a system with fast spatial or temporal modulations.
arXiv Detail & Related papers (2021-04-25T07:51:43Z) - Phase Retrieval using Expectation Consistent Signal Recovery Algorithm
based on Hypernetwork [73.94896986868146]
Phase retrieval is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up a new possibility for robust and fast PR.
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z) - Sparse Meta Networks for Sequential Adaptation and its Application to
Adaptive Language Modelling [7.859988850911321]
We introduce Sparse Meta Networks -- a meta-learning approach to learn online sequential adaptation algorithms for deep neural networks.
We augment a deep neural network with a layer-specific fast-weight memory.
We demonstrate strong performance on a variety of sequential adaptation scenarios.
arXiv Detail & Related papers (2020-09-03T17:06:52Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Large-Scale Gradient-Free Deep Learning with Recursive Local
Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.