Performance Analysis of Fractional Learning Algorithms
- URL: http://arxiv.org/abs/2110.05201v1
- Date: Mon, 11 Oct 2021 12:06:44 GMT
- Title: Performance Analysis of Fractional Learning Algorithms
- Authors: Abdul Wahab, Shujaat Khan, Imran Naseem, Jong Chul Ye
- Abstract summary: It is unclear whether the proclaimed superiority of fractional learning algorithms over conventional ones is well-grounded or a myth, as their performance has never been extensively analyzed.
In this article, a rigorous analysis of fractional variants of the least mean squares and steepest descent algorithms is performed.
Critical schematic kinks are identified; their origins and consequences for the performance of the learning algorithms are discussed, and simple, readily applicable remedies are proposed.
- Score: 32.21539962359158
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fractional learning algorithms have recently been trending in signal
processing and adaptive filtering. However, it is unclear whether the proclaimed
superiority over conventional algorithms is well-grounded or a myth, as their
performance has never been extensively analyzed. In this article, a rigorous
analysis of fractional variants of the least mean squares and steepest descent
algorithms is performed. Some critical schematic kinks in fractional learning
algorithms are identified. Their origins and consequences for the performance of
the learning algorithms are discussed, and simple, readily applicable remedies
are proposed. Appropriate numerical experiments are conducted to examine the
convergence and efficiency of the fractional learning algorithms in stochastic
environments.
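As a concrete reference point, here is a minimal numpy sketch of one common form of the fractional LMS (FLMS) update discussed in this line of work. The exact update rule varies across the FLMS literature; the step sizes `mu` and `mu_f`, the fractional order `nu`, and the `|w|**(1 - nu)` convention (which keeps the fractional term real for negative taps, one of the kinks such analyses flag) are illustrative assumptions, not the paper's definitive formulation.

```python
import numpy as np
from math import gamma

def fractional_lms(x, d, n_taps=4, mu=0.01, mu_f=0.01, nu=0.5, seed=0):
    """One common form of the fractional LMS (FLMS) update (sketch only)."""
    rng = np.random.default_rng(seed)
    w = 0.01 * rng.standard_normal(n_taps)
    errs = np.empty(len(x) - n_taps)
    for k in range(len(errs)):
        xk = x[k:k + n_taps][::-1]        # regressor, most recent sample first
        e = d[k + n_taps - 1] - w @ xk    # a priori estimation error
        # Conventional LMS term plus a fractional-order correction derived from
        # a Caputo-type fractional derivative of the squared-error cost.
        # Using |w| (not w) keeps w**(1 - nu) real for negative taps -- one of
        # the "schematic kinks" this kind of analysis examines.
        frac = np.abs(w) ** (1.0 - nu) / gamma(2.0 - nu)
        w = w + mu * e * xk + mu_f * e * xk * frac
        errs[k] = e
    return w, errs

# Toy system identification: recover a 4-tap FIR filter from noisy data.
h = np.array([0.8, -0.5, 0.3, 0.1])
x_in = np.random.default_rng(1).standard_normal(5000)
d_out = (np.convolve(x_in, h)[:len(x_in)]
         + 0.01 * np.random.default_rng(2).standard_normal(len(x_in)))
w_hat, _ = fractional_lms(x_in, d_out)
print(w_hat)   # approaches h as the filter converges
```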
Related papers
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient
and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
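The multi-step inverse kinematics objective itself is easy to sketch. The PyTorch fragment below is a loose illustration under assumed shapes (flat observations, a discrete action space): from the embeddings of an observation and an observation k steps later, a per-horizon head predicts the first action taken. MusIK's systematic-exploration component and its guarantees are not reproduced; every name and dimension here is illustrative.

```python
import torch
import torch.nn as nn

# Assumed shapes: flat observation vectors and a discrete action space.
OBS_DIM, N_ACTIONS, EMB_DIM, MAX_K = 16, 4, 32, 5

encoder = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, EMB_DIM))
# One action classifier per horizon k, each reading a pair of embeddings.
heads = nn.ModuleList(nn.Linear(2 * EMB_DIM, N_ACTIONS) for _ in range(MAX_K))

def multi_step_ik_loss(obs, actions):
    """From the embeddings of obs[t] and obs[t+k], predict the FIRST action
    a[t], for every horizon k = 1..MAX_K (multi-step inverse kinematics)."""
    z = encoder(obs)                                 # (T, EMB_DIM)
    loss = 0.0
    for k in range(1, MAX_K + 1):
        pairs = torch.cat([z[:-k], z[k:]], dim=-1)   # (T-k, 2*EMB_DIM)
        loss = loss + nn.functional.cross_entropy(heads[k - 1](pairs),
                                                  actions[:-k])
    return loss / MAX_K

# Dummy trajectory to exercise the objective.
obs = torch.randn(100, OBS_DIM)
actions = torch.randint(0, N_ACTIONS, (100,))
multi_step_ik_loss(obs, actions).backward()
```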
arXiv Detail & Related papers (2023-04-12T14:51:47Z) - Dual Algorithmic Reasoning [9.701208207491879]
We propose to learn algorithms by exploiting the duality of the underlying algorithmic problem.
We demonstrate that simultaneously learning the dual formulation of such optimisation problems improves algorithmic learning.
We then validate the real-world utility of our dual algorithmic reasoner by deploying it on a challenging brain vessel classification task.
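Structurally, the idea amounts to one shared representation feeding two supervised heads, one per view of the problem. The actual reasoner is a graph network trained on algorithmic executions of a primal problem and its dual (e.g., max-flow and min-cut); the toy sketch below only shows the two-headed loss over a shared encoder, with all layers and shapes assumed for illustration.

```python
import torch
import torch.nn as nn

# Shared encoder feeding a primal head and a dual head (shapes assumed).
encoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU())
primal_head = nn.Linear(32, 1)   # e.g. a per-edge flow regression target
dual_head = nn.Linear(32, 1)     # e.g. a per-edge cut-membership logit

def dual_reasoning_loss(feats, flow_target, cut_target):
    h = encoder(feats)
    primal = nn.functional.mse_loss(primal_head(h).squeeze(-1), flow_target)
    dual = nn.functional.binary_cross_entropy_with_logits(
        dual_head(h).squeeze(-1), cut_target)
    # Jointly fitting both views regularizes the shared representation.
    return primal + dual

feats = torch.randn(64, 8)
loss = dual_reasoning_loss(feats, torch.randn(64),
                           torch.randint(0, 2, (64,)).float())
loss.backward()
```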
arXiv Detail & Related papers (2023-02-09T08:46:23Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined via the population loss, that are better suited to active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Towards Diverse Evaluation of Class Incremental Learning: A Representation Learning Perspective [67.45111837188685]
Class incremental learning (CIL) algorithms aim to continually learn new object classes from incrementally arriving data.
We experimentally analyze neural network models trained by CIL algorithms using various evaluation protocols in representation learning.
arXiv Detail & Related papers (2022-06-16T11:44:11Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
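On a toy quadratic instance, the warm-start idea fits in a few lines: carry the inner iterate across outer steps instead of re-solving the inner problem from scratch, then apply the implicit-function-theorem hypergradient. The numpy sketch below uses a deterministic inner problem; the paper's stochastic setting, step sizes, and iteration counts are all simplified assumptions.

```python
import numpy as np

# Toy instance: outer F(x, y) = 0.5*||y - b||^2 with inner solution
# y*(x) = argmin_y 0.5*||y - A @ x||^2, so the bilevel optimum x* solves
# the least-squares problem min_x ||A @ x - b||.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((6, 4)), rng.standard_normal(6)

x, y = np.zeros(4), np.zeros(6)   # inner iterate y persists across outer steps
for _ in range(2000):
    # Warm start: a handful of inner gradient steps from the PREVIOUS y
    # amortize the inner solve instead of restarting it each outer iteration.
    for _ in range(5):
        y -= 0.2 * (y - A @ x)    # gradient of the inner objective in y
    # Implicit differentiation: hypergrad = -(d2G/dxdy)^T (d2G/dy2)^{-1} dF/dy;
    # here d2G/dy2 = I and d2G/dxdy = -A, so the linear solve is trivial.
    x -= 0.05 * (A.T @ (y - b))

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x - x_star))  # shrinks toward 0 as both loops converge
```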
arXiv Detail & Related papers (2021-11-29T15:10:09Z) - Benchmarking Simulation-Based Inference [5.3898004059026325]
Recent advances in probabilistic modelling have led to a large number of simulation-based inference algorithms which do not require numerical evaluation of likelihoods.
We provide a benchmark with inference tasks and suitable performance metrics, with an initial selection of algorithms.
We found that the choice of performance metric is critical, that even state-of-the-art algorithms have substantial room for improvement, and that sequential estimation improves sample efficiency.
arXiv Detail & Related papers (2021-01-12T18:31:22Z) - Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z) - Enhancing accuracy of deep learning algorithms by training with
low-discrepancy sequences [15.2292571922932]
We propose a deep supervised learning algorithm based on low-discrepancy sequences as the training set.
We demonstrate that the proposed algorithm significantly outperforms standard deep learning algorithms for problems in moderately high dimensions.
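Constructing such a training set is straightforward with SciPy's quasi-Monte Carlo module. The sketch below builds Sobol training inputs and, for comparison, i.i.d. uniform ones, labelling both with a placeholder target function; the paper's networks and PDE-type observables are not reproduced, and `target` is purely illustrative.

```python
import numpy as np
from scipy.stats import qmc

def target(x):
    # Placeholder function to be learned (the paper's observables differ).
    return np.sin(2 * np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])

d, m = 2, 10                                   # dimension, 2**m training points
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
x_qmc = sobol.random_base2(m=m)                # low-discrepancy inputs in [0, 1)^d
x_mc = np.random.default_rng(0).random((2 ** m, d))   # i.i.d. uniform baseline

train_qmc = (x_qmc, target(x_qmc))             # training set for the deep model
train_mc = (x_mc, target(x_mc))

# The centered L2 discrepancy makes the gap between the two designs visible.
print(qmc.discrepancy(x_qmc), qmc.discrepancy(x_mc))
```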
arXiv Detail & Related papers (2020-05-26T08:14:00Z) - Second-Order Guarantees in Centralized, Federated and Decentralized
Nonconvex Optimization [64.26238893241322]
Simple algorithms have been shown to lead to good empirical results in many contexts.
Several works have pursued rigorous analytical justification for these observations in the context of nonconvex optimization problems.
A key insight in these analyses is that perturbations play a critical role in allowing local descent algorithms to escape saddle points.
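The escape mechanism alluded to above is the classical perturbed-gradient-descent trick: inject a small random kick whenever the gradient is nearly zero, so strict saddle points cannot trap the iterate. The sketch below shows the generic idea (in the spirit of Jin et al.'s perturbed GD), not the specific centralized, federated, or decentralized algorithms the paper analyzes; the test function, thresholds, and radii are illustrative.

```python
import numpy as np

def grad(x):
    # f(x) = 0.25*x0**4 - 0.5*x0**2 + 0.5*x1**2: strict saddle at the
    # origin, minima at (+1, 0) and (-1, 0).
    return np.array([x[0] ** 3 - x[0], x[1]])

def perturbed_gd(x0, lr=0.1, g_thresh=1e-3, radius=1e-2, cooldown=10,
                 steps=500, seed=0):
    """Gradient descent with occasional random kicks near stationary points."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    last_kick = -cooldown
    for t in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < g_thresh and t - last_kick >= cooldown:
            x = x + rng.uniform(-radius, radius, size=x.shape)  # escape kick
            last_kick = t
            g = grad(x)               # gradient at the perturbed point
        x = x - lr * g
    return x

print(perturbed_gd(np.zeros(2)))      # lands near (+-1, 0), not the saddle at 0
```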
arXiv Detail & Related papers (2020-03-31T16:54:22Z) - Overall error analysis for the training of deep neural networks via
stochastic gradient descent with random initialisation [2.4874453414078896]
We provide a mathematically rigorous full error analysis of deep learning based empirical risk minimisation with quadratic loss function.
In particular, we establish the first full error analysis in the scientific literature for a deep learning based algorithm in the probabilistically strong sense.
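A full error analysis of this kind typically combines three terms. In generic notation (not the paper's), with true risk \(\mathcal{E}\), empirical risk \(\hat{\mathcal{E}}\), hypothesis class \(\mathcal{F}\), and computed minimizer \(\hat f\), the standard template reads:

```latex
\[
\mathcal{E}(\hat f)
\;\le\;
\underbrace{\inf_{f \in \mathcal{F}} \mathcal{E}(f)}_{\text{approximation error}}
\;+\;
\underbrace{2\,\sup_{f \in \mathcal{F}}
  \bigl|\mathcal{E}(f) - \hat{\mathcal{E}}(f)\bigr|}_{\text{generalization error}}
\;+\;
\underbrace{\hat{\mathcal{E}}(\hat f)
  - \inf_{f \in \mathcal{F}} \hat{\mathcal{E}}(f)}_{\text{optimization error}}
\]
```

The cited work bounds each term, with the optimization error tracked for stochastic gradient descent from random initialisation; the inequality above is offered only as orientation, not as the paper's exact statement.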
arXiv Detail & Related papers (2020-03-03T01:41:17Z)