Online Learning with Radial Basis Function Networks
- URL: http://arxiv.org/abs/2103.08414v1
- Date: Mon, 15 Mar 2021 14:39:40 GMT
- Title: Online Learning with Radial Basis Function Networks
- Authors: Gabriel Borrageiro, Nick Firoozye and Paolo Barucca
- Abstract summary: We consider the sequential and continual learning sub-genres of online learning.
We find that the online learning techniques outperform the offline learning ones.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the benefits of feature selection, nonlinear modelling and
online learning with forecasting in financial time series. We consider the
sequential and continual learning sub-genres of online learning. Through
empirical experimentation, which involves long term forecasting in daily
sampled cross-asset futures, and short term forecasting in minutely sampled
cash currency pairs, we find that the online learning techniques outperform the
offline learning ones. We also find that, within the subset of models we use,
sequential learning in time with online Ridge regression provides the best
next-step-ahead forecasts, and continual learning with an online radial basis
function network provides the best multi-step-ahead forecasts. We combine the
benefits of both in a precision-weighted ensemble of the forecast errors and
find superior forecast performance overall.
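The abstract pairs sequential learning (online Ridge regression) with continual learning (an online radial basis function network) and blends the two forecasts in a precision-weighted ensemble. The sketch below is a minimal, illustrative reconstruction of that pipeline, not the authors' implementation: the recursive ridge update, the randomly placed RBF centres, the exponentially weighted error variances and all hyper-parameters are assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

class OnlineRidge:
    """Recursive (RLS-style) ridge regression: one update per observation."""
    def __init__(self, dim, lam=1.0):
        self.w = np.zeros(dim)
        self.P = np.eye(dim) / lam          # inverse of the regularised Gram matrix
    def predict(self, x):
        return float(self.w @ x)
    def update(self, x, y):
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)             # gain vector
        self.w += k * (y - self.w @ x)      # correct towards the new target
        self.P -= np.outer(k, Px)           # rank-one downdate of the inverse

class OnlineRBF:
    """Fixed random centres with an online ridge read-out: a simple continual learner."""
    def __init__(self, dim, n_centres=20, gamma=1.0, lam=1.0):
        self.C = rng.normal(size=(n_centres, dim))   # assumed: random centre placement
        self.gamma = gamma
        self.head = OnlineRidge(n_centres, lam)
    def _features(self, x):
        return np.exp(-self.gamma * ((self.C - x) ** 2).sum(axis=1))
    def predict(self, x):
        return self.head.predict(self._features(x))
    def update(self, x, y):
        self.head.update(self._features(x), y)

def precision_weights(err_vars, eps=1e-12):
    """Weight each model by the inverse of its running squared-error variance."""
    prec = 1.0 / (np.asarray(err_vars) + eps)
    return prec / prec.sum()

# Test-then-train loop on a toy random-walk series (illustrative only).
T, dim, decay = 500, 5, 0.99
series = np.cumsum(rng.normal(size=T + dim))
models = [OnlineRidge(dim), OnlineRBF(dim)]
err_var = np.ones(len(models))
ensemble_sse = 0.0

for t in range(dim, T + dim):
    x, y = series[t - dim:t], series[t]
    preds = np.array([m.predict(x) for m in models])
    combined = float(precision_weights(err_var) @ preds)   # ensemble forecast
    ensemble_sse += (y - combined) ** 2
    err_var = decay * err_var + (1 - decay) * (y - preds) ** 2
    for m in models:
        m.update(x, y)

print(f"ensemble MSE over the stream: {ensemble_sse / T:.4f}")
```

Each model's ensemble weight is the inverse of its running squared-error variance, so whichever forecaster is currently more precise dominates the combination.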
Related papers
- AALF: Almost Always Linear Forecasting [3.336367986372977]
We argue that simple models are good enough most of the time, and forecasting performance can be improved by choosing a Deep Learning method only for certain predictions.
An empirical study on various real-world datasets shows that our selection methodology outperforms state-of-the-art online model selection methods in most cases.
arXiv Detail & Related papers (2024-09-16T10:13:09Z)
- Online Distributional Regression [0.0]
Large-scale streaming data are common in modern machine learning applications.
Many fields, such as supply chain management, weather and meteorology, have pivoted towards using probabilistic forecasts.
We present a methodology for online estimation of regularized, linear distributional models.
arXiv Detail & Related papers (2024-06-26T16:04:49Z)
- Online Classification with Predictions [20.291598040396302]
We study online classification when the learner has access to predictions about future examples.
We show that if the learner is always guaranteed to observe data where future examples are easily predictable, then online learning can be as easy as transductive online learning.
arXiv Detail & Related papers (2024-05-22T23:45:33Z)
- Random Representations Outperform Online Continually Learned Representations [68.42776779425978]
We show that existing online continually trained deep networks produce inferior representations compared to a simple pre-defined random transform.
Our method, called RanDumb, significantly outperforms state-of-the-art continually learned representations across all online continual learning benchmarks.
Our study reveals the significant limitations of representation learning, particularly in low-exemplar and online continual learning scenarios.
arXiv Detail & Related papers (2024-02-13T22:07:29Z)
- An Adaptive Approach for Probabilistic Wind Power Forecasting Based on Meta-Learning [7.422947032954223]
This paper studies an adaptive approach for probabilistic wind power forecasting (WPF) including offline and online learning procedures.
In the offline learning stage, a base forecast model is trained via inner and outer loop updates of meta-learning.
In the online learning stage, the base forecast model is applied to online forecasting combined with incremental learning techniques.
arXiv Detail & Related papers (2023-08-15T18:28:22Z)
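The entry above describes a two-stage scheme: a base forecast model is meta-trained offline with inner- and outer-loop updates, then adapted online with incremental updates. The fragment below is only a generic, first-order sketch of that idea (a Reptile-style outer loop around a linear inner learner); the task sampler, model form, loss and step sizes are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_task(n=32, dim=4):
    """Hypothetical task sampler: a noisy random linear forecasting task."""
    w_true = rng.normal(size=dim)
    X = rng.normal(size=(n, dim))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def inner_adapt(w, X, y, lr=0.05, steps=5):
    """Inner loop: a few gradient steps of squared-error loss on one task."""
    for _ in range(steps):
        w = w - lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# Offline stage: the outer loop pulls the shared initialisation towards
# the task-adapted weights (a Reptile-style meta-update).
dim, meta_lr = 4, 0.1
w_meta = np.zeros(dim)
for _ in range(200):
    X, y = make_task(dim=dim)
    w_meta += meta_lr * (inner_adapt(w_meta, X, y) - w_meta)

# Online stage: start from the meta-learned weights and keep adapting
# incrementally as new observations arrive.
w_online = w_meta.copy()
for x_t, y_t in zip(*make_task(dim=dim)):
    y_hat = w_online @ x_t                        # forecast first
    w_online -= 0.05 * 2 * (y_hat - y_t) * x_t    # then take one incremental step
```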
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism [65.46524775457928]
Offline reinforcement learning seeks to utilize offline/historical data to optimize sequential decision-making strategies.
We study the statistical limits of offline reinforcement learning with linear model representations.
arXiv Detail & Related papers (2022-03-11T09:00:12Z)
- Recursive Least-Squares Estimator-Aided Online Learning for Visual Tracking [58.14267480293575]
We propose a simple yet effective online learning approach for few-shot online adaptation without requiring offline training.
It allows an in-built memory retention mechanism for the model to remember the knowledge about the object seen before.
We evaluate our approach based on two networks in the online learning families for tracking, i.e., multi-layer perceptrons in RT-MDNet and convolutional neural networks in DiMP.
arXiv Detail & Related papers (2021-12-28T06:51:18Z)
- Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
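The evaluation protocol described above, where each incoming batch is scored before it is added to the training set, is commonly known as prequential (test-then-train) evaluation. Below is a minimal sketch of such a loop; the synthetic drifting stream, the tiny linear model and the squared-error metric are illustrative assumptions rather than the benchmark's actual API.

```python
import numpy as np

rng = np.random.default_rng(2)

def batch_stream(n_batches=50, batch_size=16, dim=8):
    """Stand-in for a real data stream with gradual distribution shift."""
    w_drift = rng.normal(size=dim)
    for _ in range(n_batches):
        w_drift += 0.05 * rng.normal(size=dim)    # slow, natural-looking drift
        X = rng.normal(size=(batch_size, dim))
        y = X @ w_drift + 0.1 * rng.normal(size=batch_size)
        yield X, y

class TinyOnlineRegressor:
    """Minimal online linear model so the loop stays self-contained."""
    def __init__(self, dim, lr=0.01):
        self.w, self.lr = np.zeros(dim), lr
    def predict(self, X):
        return X @ self.w
    def partial_fit(self, X, y):
        self.w -= self.lr * 2 * X.T @ (X @ self.w - y) / len(y)

# Prequential loop: every incoming batch is scored *before* it is trained on.
model, losses = TinyOnlineRegressor(dim=8), []
for X, y in batch_stream():
    y_hat = model.predict(X)                      # 1) test on the incoming batch
    losses.append(float(np.mean((y - y_hat) ** 2)))
    model.partial_fit(X, y)                       # 2) then add it to the training set

print(f"mean prequential MSE: {np.mean(losses):.3f}")
```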
- Online learning of windmill time series using Long Short-term Cognitive Networks [58.675240242609064]
The amount of data generated on windmill farms makes online learning the most viable strategy to follow.
We use Long Short-term Cognitive Networks (LSTCNs) to forecast windmill time series in online settings.
Our approach reported the lowest forecasting errors compared with a simple RNN, a Long Short-term Memory, a Gated Recurrent Unit, and a Hidden Markov Model.
arXiv Detail & Related papers (2021-07-01T13:13:24Z)
- POLA: Online Time Series Prediction by Adaptive Learning Rates [4.105553918089042]
We propose POLA to automatically regulate the learning rate of recurrent neural network models to adapt to changing time series patterns across time.
POLA demonstrates overall comparable or better predictive performance over other online prediction methods.
arXiv Detail & Related papers (2021-02-17T17:56:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.