2nd-order Updates with 1st-order Complexity
- URL: http://arxiv.org/abs/2105.11439v1
- Date: Mon, 24 May 2021 17:47:51 GMT
- Title: 2nd-order Updates with 1st-order Complexity
- Authors: Michael F. Zimmer
- Abstract summary: It has long been a goal to efficiently compute and use second order information on a function ($f$) to assist in numerical approximations.
Here it is shown how, using only basic physics and a numerical approximation, such information can be accurately obtained at a cost of ${\cal O}(N)$ complexity.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It has long been a goal to efficiently compute and use second order
information on a function ($f$) to assist in numerical approximations. Here it
is shown how, using only basic physics and a numerical approximation, such
information can be accurately obtained at a cost of ${\cal O}(N)$ complexity,
where $N$ is the dimensionality of the parameter space of $f$. In this paper,
an algorithm ({\em VA-Flow}) is developed to exploit this second order
information, and pseudocode is presented. It is applied to two classes of
problems, that of inverse kinematics (IK) and gradient descent (GD). In the IK
application, the algorithm is fast and robust, and is shown to lead to smooth
behavior even near singularities. For GD the algorithm also works very well,
provided the cost function is locally well-described by a polynomial.
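The VA-Flow pseudocode itself is given in the paper and is not reproduced here. As a general illustration of how directional second-order information can be obtained at first-order cost, the NumPy sketch below (with a hypothetical quadratic test function, not the paper's method) forms a Hessian-vector product from two extra gradient evaluations via a central finite difference, avoiding the ${\cal O}(N^2)$ cost of building the full Hessian:

```python
import numpy as np

def hessian_vector_product(grad_f, x, v, eps=1e-5):
    """Approximate H(x) @ v from two extra gradient evaluations
    (central finite difference of grad_f along direction v)."""
    v_norm = np.linalg.norm(v)
    if v_norm == 0.0:
        return np.zeros_like(x)
    h = eps / v_norm                       # scale the step to the direction's length
    return (grad_f(x + h * v) - grad_f(x - h * v)) / (2.0 * h)

# Hypothetical quadratic test function f(x) = 0.5 x^T A x - b^T x, so grad f(x) = A x - b
# and the exact Hessian-vector product is A @ v.
rng = np.random.default_rng(0)
N = 50
M = rng.standard_normal((N, N))
A = M @ M.T + N * np.eye(N)                # symmetric positive-definite Hessian
b = rng.standard_normal(N)
grad_f = lambda x: A @ x - b

x, v = rng.standard_normal(N), rng.standard_normal(N)
hv = hessian_vector_product(grad_f, x, v)
print(np.linalg.norm(hv - A @ v) / np.linalg.norm(A @ v))   # tiny relative error
```

The same idea applies whenever the gradient itself costs ${\cal O}(N)$: the directional curvature then comes at a constant multiple of a gradient evaluation.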
Related papers
- Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance reduced ZO estimators for complex gradient problems.
We improve the state-of-the-art function complexities from $\mathcal{O}\left(\min\left(\frac{d n^{1/2}}{\epsilon^2}, \frac{d}{\epsilon^3}\right)\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^2}\right)$.
arXiv Detail & Related papers (2024-10-03T15:04:01Z)
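For readers unfamiliar with zeroth-order (ZO) estimation, the sketch below shows a plain two-point random-direction gradient estimator in NumPy; the variance-reduced estimators and the improved query complexities are that paper's contribution and are not reproduced here.

```python
import numpy as np

def zo_gradient_estimate(f, x, num_dirs=50, mu=1e-4, rng=None):
    """Plain two-point zeroth-order gradient estimate: average of
    d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random unit directions u."""
    rng = rng or np.random.default_rng()
    d = x.size
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return d * g / num_dirs

f = lambda x: np.sum(x ** 2)               # simple smooth test function, grad = 2x
x = np.ones(10)
print(zo_gradient_estimate(f, x))          # roughly 2.0 in every coordinate
```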
- Quantum speedups for linear programming via interior point methods [1.8434042562191815]
We describe a quantum algorithm for solving a linear program with $n$ inequality constraints on $d$ variables.
Our algorithm speeds up the Newton step in the state-of-the-art interior point method of Lee and Sidford.
arXiv Detail & Related papers (2023-11-06T16:00:07Z)
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue Optimization Problems [76.2042837251496]
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z)
- Representing Additive Gaussian Processes by Sparse Matrices [18.618437338490487]
Matérn Gaussian Processes (GPs) are one of the most popular models for scalable high-dimensional problems.
Back-fitting algorithms can reduce the time complexity of computing the posterior mean from $O(n^3)$ to $O(n \log n)$ time.
Generalizing these algorithms to efficiently compute the posterior variance and maximum log-likelihood remains an open problem.
arXiv Detail & Related papers (2023-04-29T18:53:42Z)
- Efficient Matrix-Free Approximations of Second-Order Information, with Applications to Pruning and Optimization [16.96639526117016]
We investigate matrix-free, linear-time approaches for estimating Inverse-Hessian Vector Products (IHVPs).
These algorithms yield state-of-the-art results for network pruning and optimization with lower computational overhead relative to existing second-order methods.
arXiv Detail & Related papers (2021-07-07T17:01:34Z)
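That paper's specific matrix-free IHVP estimators are not reproduced here. As a generic matrix-free baseline, an inverse-Hessian vector product $H^{-1}v$ can be approximated by running conjugate gradients on $Hz=v$ using only Hessian-vector products (assuming $H$ is symmetric positive definite); a minimal sketch:

```python
import numpy as np

def ihvp_cg(hvp, v, iters=100, tol=1e-10):
    """Approximate H^{-1} @ v with conjugate gradients, touching H only
    through Hessian-vector products hvp(p) ~= H @ p (H assumed SPD)."""
    z = np.zeros_like(v)
    r = v.copy()                           # residual r = v - H z
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        z += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return z

# Usage with an explicit SPD matrix standing in for the Hessian.
rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
H = M @ M.T + 30 * np.eye(30)
v = rng.standard_normal(30)
z = ihvp_cg(lambda p: H @ p, v)
print(np.linalg.norm(H @ z - v))           # close to zero
```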
- Bayesian Optimistic Optimisation with Exponentially Decaying Regret [58.02542541410322]
The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations.
This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation.
We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound with order $\mathcal{O}(N^{-\sqrt{N}})$.
arXiv Detail & Related papers (2021-05-10T13:07:44Z)
- Towards an efficient approach for the nonconvex $\ell_p$ ball projection: algorithm and analysis [3.4693390973571594]
This paper primarily focuses on computing the Euclidean projection of a vector onto the $\ell_p$ ball in which $p\in(0,1)$.
We develop a novel numerical approach for computing the stationary point through solving a sequence of projections onto the reweighted $\ell_1$-balls.
The proposed algorithm is shown to converge uniquely under mild conditions and has a worst-case $O(1/\sqrt{k})$ convergence rate.
arXiv Detail & Related papers (2021-01-05T04:51:12Z)
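The reweighting sequence for the nonconvex $\ell_p$ ball is that paper's contribution and is not shown here; its inner building block, Euclidean projection onto an $\ell_1$-ball, is standard. A sort-based sketch of the unweighted case:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto {x : ||x||_1 <= radius},
    via soft-thresholding with a sort-based threshold search."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                               # magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, v.size + 1)
    rho = np.nonzero(u - (css - radius) / ks > 0)[0][-1]       # largest index kept
    theta = (css[rho] - radius) / (rho + 1.0)                  # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

print(project_l1_ball(np.array([0.5, -2.0, 1.0])))             # -> [ 0. -1.  0.]
```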
- A One-bit, Comparison-Based Gradient Estimator [29.600975900977343]
We show how one can use tools from one-bit compressed sensing to construct a robust and reliable estimator of the normalized gradient.
We propose an algorithm, coined SCOBO, that uses this estimator within a gradient descent scheme.
arXiv Detail & Related papers (2020-10-06T05:01:38Z)
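SCOBO's one-bit compressed-sensing machinery is not reproduced here. A much simpler illustration of the underlying idea, estimating the normalized gradient direction purely from one-bit comparisons of function values (with a hypothetical `compare` oracle), is:

```python
import numpy as np

def comparison_gradient_direction(compare, x, num_queries=500, delta=1e-3, rng=None):
    """Estimate the gradient *direction* of f at x from one-bit comparisons only.
    compare(a, b) returns +1.0 if f(a) > f(b), else -1.0 (hypothetical oracle).
    For Gaussian directions u, E[ sign(<u, grad f>) * u ] is parallel to grad f,
    so averaging the signed directions gives a noisy normalized-gradient estimate."""
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(num_queries):
        u = rng.standard_normal(x.size)
        g += compare(x + delta * u, x - delta * u) * u
    norm = np.linalg.norm(g)
    return g / norm if norm > 0 else g

f = lambda z: np.sum((z - 3.0) ** 2)                           # toy objective
compare = lambda a, b: 1.0 if f(a) > f(b) else -1.0
print(comparison_gradient_direction(compare, np.zeros(5)))     # roughly -1/sqrt(5) per coordinate
```

A descent scheme would then step along the negative of this estimated direction.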
- Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)
- Second-order Conditional Gradient Sliding [79.66739383117232]
We present the Second-Order Conditional Gradient Sliding (SOCGS) algorithm.
The SOCGS algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations.
It is useful when the feasible region can only be accessed efficiently through a linear optimization oracle.
arXiv Detail & Related papers (2020-02-20T17:52:18Z)
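SOCGS augments the conditional-gradient iteration with an inexact projected-Newton step, which is not reproduced here. For reference, the plain conditional-gradient (Frank-Wolfe) loop it builds on, accessing the feasible region only through a linear optimization oracle (LMO, here a toy probability-simplex oracle), can be sketched as:

```python
import numpy as np

def frank_wolfe(grad_f, lmo, x0, steps=200):
    """Vanilla conditional gradient: at each step, call the linear optimization
    oracle lmo(g) = argmin_{v in feasible set} <g, v>, then move toward it."""
    x = x0.copy()
    for t in range(steps):
        g = grad_f(x)
        v = lmo(g)
        gamma = 2.0 / (t + 2.0)                    # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * v
    return x

# Toy example: minimize ||x - c||^2 over the probability simplex.
c = np.array([0.1, 0.5, 0.9])
grad_f = lambda x: 2.0 * (x - c)
lmo = lambda g: np.eye(g.size)[np.argmin(g)]       # simplex LMO: best vertex
print(frank_wolfe(grad_f, lmo, x0=np.ones(3) / 3.0))   # approaches [0, 0.3, 0.7]
```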
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.