Adaptive Approach For Sparse Representations Using The Locally
Competitive Algorithm For Audio
- URL: http://arxiv.org/abs/2109.14705v1
- Date: Wed, 29 Sep 2021 20:26:16 GMT
- Title: Adaptive Approach For Sparse Representations Using The Locally
Competitive Algorithm For Audio
- Authors: Soufiyan Bahadi, Jean Rouat, and Éric Plourde
- Abstract summary: This paper presents an adaptive approach to optimize the gammachirp's parameters.
The proposed method consists of taking advantage of the LCA's neural architecture to automatically adapt the gammachirp's filterbank.
Results demonstrate an improvement in the LCA's performance with our approach in terms of sparsity, reconstruction quality, and convergence time.
- Score: 5.6394515393964575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The gammachirp filterbank has been used to approximate the cochlea in sparse
coding algorithms. An oriented grid search optimization was applied to adapt
the gammachirp's parameters and improve the Matching Pursuit (MP) algorithm's
sparsity along with the reconstruction quality. However, this combination of a
greedy algorithm with a grid search at each iteration is computationally
demanding and not suitable for real-time applications. This paper presents an
adaptive approach to optimize the gammachirp's parameters but in the context of
the Locally Competitive Algorithm (LCA), which requires far fewer computations
than MP. The proposed method consists of taking advantage of the LCA's neural
architecture to automatically adapt the gammachirp's filterbank using the
backpropagation algorithm. Results demonstrate an improvement in the LCA's
performance with our approach in terms of sparsity, reconstruction quality, and
convergence time. This approach can yield a significant advantage over existing
approaches for real-time applications.
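To make the mechanism concrete, below is a minimal PyTorch sketch of the idea, not the authors' implementation: a gammachirp filterbank of the usual form $g(t) = t^{n-1} e^{-2\pi b \, \mathrm{ERB}(f_c) t} \cos(2\pi f_c t + c \ln t)$ is built differentiably from per-channel bandwidth and chirp parameters, the LCA is unrolled, and the reconstruction error is backpropagated into those parameters. The dense (non-convolutional) formulation, the random input, and all numeric values are illustrative assumptions.
```python
# Hypothetical sketch of backprop-adapted gammachirp dictionaries in an
# unrolled LCA; formulation and all constants are illustrative, not the
# authors' code.
import math
import torch

def erb(f):
    # Equivalent rectangular bandwidth, Glasberg & Moore approximation (Hz).
    return 24.7 + 0.108 * f

def gammachirp_bank(fc, b, c, n=4, fs=16000, length=512):
    # One unit-norm gammachirp impulse response per row; differentiable in
    # the bandwidth scales `b` and chirp factors `c`.
    t = torch.arange(1, length + 1, dtype=torch.float32) / fs  # t > 0, so log(t) is finite
    env = t ** (n - 1) * torch.exp(-2 * math.pi * b[:, None] * erb(fc)[:, None] * t)
    carrier = torch.cos(2 * math.pi * fc[:, None] * t + c[:, None] * torch.log(t))
    g = env * carrier
    return g / g.norm(dim=1, keepdim=True)

def lca(signal, Phi, n_iter=100, lam=0.1, tau=10.0):
    # Unrolled Locally Competitive Algorithm (Rozell et al., 2008): leaky
    # integration of the drive minus lateral inhibition, soft-threshold output.
    drive = Phi @ signal
    G = Phi @ Phi.T - torch.eye(Phi.shape[0])  # lateral inhibition
    u = torch.zeros_like(drive)
    for _ in range(n_iter):
        a = torch.sign(u) * torch.relu(u.abs() - lam)
        u = u + (drive - u - G @ a) / tau
    return torch.sign(u) * torch.relu(u.abs() - lam)

n_ch, frame = 32, 512
fc = torch.logspace(math.log10(100.0), math.log10(6000.0), n_ch)  # fixed center freqs
b = torch.full((n_ch,), 1.019, requires_grad=True)  # standard gammatone bandwidth scale
c = torch.zeros(n_ch, requires_grad=True)           # chirp factor; 0 = plain gammatone
opt = torch.optim.Adam([b, c], lr=1e-2)

signal = torch.randn(frame)  # stand-in for a real audio frame
for step in range(100):
    Phi = gammachirp_bank(fc, b, c, fs=16000, length=frame)
    a = lca(signal, Phi)
    loss = (signal - Phi.T @ a).pow(2).sum()  # reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()
```
In this toy setup only the bandwidth and chirp parameters are learned while the center frequencies stay fixed; the paper's filterbank formulation and sparsity objective would replace the dense dictionary and the plain reconstruction loss used here.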
Related papers
- Optimizing Variational Quantum Circuits Using Metaheuristic Strategies in Reinforcement Learning [2.7504809152812695]
This work explores the integration of metaheuristic algorithms -- Particle Swarm Optimization, Ant Colony Optimization, Tabu Search, Genetic Algorithm, Simulated Annealing, and Harmony Search -- into Quantum Reinforcement Learning.
Evaluations in $5\times5$ MiniGrid Reinforcement Learning environments show that all algorithms yield near-optimal results.
arXiv Detail & Related papers (2024-08-02T11:14:41Z) - Performance Evaluation of Evolutionary Algorithms for Analog Integrated
Circuit Design Optimisation [0.0]
An automated sizing approach for analog circuits is presented in this paper.
A targeted search of the search space has been implemented using a particle generation function and a repair-bounds function.
The algorithms are tuned and modified to converge to better solutions.
arXiv Detail & Related papers (2023-10-19T03:26:36Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) methods have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes conditional stochastic optimization algorithms for the distributed federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Composite Optimization Algorithms for Sigmoid Networks [3.160070867400839]
We propose composite optimization algorithms based on linearized proximal algorithms and the alternating direction method of multipliers.
Numerical experiments on Frank's function fitting show that the proposed algorithms perform satisfactorily and robustly.
arXiv Detail & Related papers (2023-03-01T15:30:29Z) - Genetically Modified Wolf Optimization with Stochastic Gradient Descent
for Optimising Deep Neural Networks [0.0]
This research aims to analyze an alternative approach to optimizing neural network (NN) weights, with the use of population-based metaheuristic algorithms.
A hybrid between the Grey Wolf Optimizer (GWO) and Genetic Algorithms (GA) is explored, in conjunction with Stochastic Gradient Descent (SGD).
This algorithm allows for a combination between exploitation and exploration, whilst also tackling the issue of high-dimensionality.
arXiv Detail & Related papers (2023-01-21T13:22:09Z) - Optimistic Optimisation of Composite Objective with Exponentiated Update [2.1700203922407493]
The algorithms can be interpreted as the combination of the exponentiated gradient and $p$-norm algorithm.
They achieve a sequence-dependent regret upper bound, matching the best-known bounds for sparse target decision variables.
arXiv Detail & Related papers (2022-08-08T11:29:55Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove a $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope (recalled in the note after this list).
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Optimization of Graph Total Variation via Active-Set-based Combinatorial
Reconditioning [48.42916680063503]
We propose a novel adaptive preconditioning strategy for proximal algorithms on this problem class.
We show that nested-forest decomposition of the inactive edges yields a guaranteed local linear convergence rate.
Our results suggest that local convergence analysis can serve as a guideline for selecting variable metrics in proximal algorithms.
arXiv Detail & Related papers (2020-02-27T16:33:09Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
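For reference, the Moreau envelope appearing in the convergence result above is the standard smoothing from weakly convex analysis; the definition and the quoted rate are restated schematically below (the precise iterate selection follows that paper).
```latex
% Moreau envelope of f with parameter \lambda > 0 (standard definition),
% and the quoted \tilde{O}(t^{-1/4}) rate, restated schematically.
\[
  f_{\lambda}(x) = \min_{y} \Big\{ f(y) + \tfrac{1}{2\lambda} \lVert y - x \rVert^{2} \Big\},
  \qquad
  \min_{s \le t} \lVert \nabla f_{\lambda}(x_s) \rVert = \tilde{\mathcal{O}}\!\big(t^{-1/4}\big).
\]
```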
This list is automatically generated from the titles and abstracts of the papers on this site.