Optimal and Near-Optimal Adaptive Vector Quantization
- URL: http://arxiv.org/abs/2402.03158v1
- Date: Mon, 5 Feb 2024 16:27:59 GMT
- Title: Optimal and Near-Optimal Adaptive Vector Quantization
- Authors: Ran Ben-Basat, Yaniv Ben-Itzhak, Michael Mitzenmacher, Shay Vargaftik
- Abstract summary: Quantization is a fundamental optimization for many machine-learning use cases, including compressing gradients, model weights and activations, and datasets.
We revisit the Adaptive Vector Quantization (AVQ) problem and present algorithms that find optimal solutions with improved time and space complexity.
Our experiments show our algorithms may open the door to using AVQ more extensively in a variety of machine learning applications.
- Score: 16.628351691108687
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantization is a fundamental optimization for many machine-learning use
cases, including compressing gradients, model weights and activations, and
datasets. The most accurate form of quantization is \emph{adaptive}, where the
error is minimized with respect to a given input, rather than optimizing for
the worst case. However, optimal adaptive quantization methods are considered
infeasible in terms of both their runtime and memory requirements.
We revisit the Adaptive Vector Quantization (AVQ) problem and present
algorithms that find optimal solutions with asymptotically improved time and
space complexity. We also present an even faster near-optimal algorithm for
large inputs. Our experiments show our algorithms may open the door to using
AVQ more extensively in a variety of machine learning applications.
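The abstract's core idea, choosing quantization values that minimize the error for the specific input rather than for the worst case, can be illustrated with a simple 1-D Lloyd/k-means baseline. The sketch below is only an illustrative stand-in, not the paper's optimal or near-optimal AVQ algorithm; the function name and parameters are assumptions.

```python
# Minimal sketch of adaptive (input-dependent) scalar quantization.
# Illustrative 1-D Lloyd/k-means baseline, NOT the paper's optimal
# AVQ algorithm; the API below is a hypothetical example.
import numpy as np

def adaptive_quantize(x, num_levels=4, iters=50):
    """Pick num_levels quantization values that (locally) minimize
    the squared error for this particular input vector x."""
    # Initialize levels over the input's own range (adaptive),
    # rather than a fixed worst-case grid.
    levels = np.linspace(x.min(), x.max(), num_levels)
    for _ in range(iters):
        # Assign each coordinate to its nearest level.
        idx = np.abs(x[:, None] - levels[None, :]).argmin(axis=1)
        # Move each level to the mean of the coordinates it serves.
        for k in range(num_levels):
            if np.any(idx == k):
                levels[k] = x[idx == k].mean()
    return levels[idx], levels

x = np.random.randn(1000)
xq, levels = adaptive_quantize(x, num_levels=4)
print("mean squared error:", np.mean((x - xq) ** 2))
```

Because the levels depend on the input, the error is typically far lower than quantizing onto a fixed grid of the same size; the paper's contribution is finding such levels optimally with improved time and space complexity.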
Related papers
- Accelerated Gradient Algorithms with Adaptive Subspace Search for
Instance-Faster Optimization [6.896308995955336]
Gradient-based minimax optimal algorithms have promoted the development of continuous optimization and machine learning.
In this paper, we open up a new way to design and analyze gradient-based algorithms with direct applications in machine learning.
arXiv Detail & Related papers (2023-12-06T01:16:10Z) - Accelerating genetic optimization of nonlinear model predictive control
by learning optimal search space size [0.8057006406834467]
This paper proposes an approach to accelerate the genetic optimization of NMPC by learning the optimal search space size.
The proposed approach was tested on two nonlinear systems and compared with two other NMPC approaches.
arXiv Detail & Related papers (2023-05-14T08:10:49Z) - BiAdam: Fast Adaptive Bilevel Optimization Methods [104.96004056928474]
Bilevel optimization has attracted increased interest in machine learning due to its many applications.
We provide a useful analysis framework for both constrained and unconstrained bilevel optimization.
arXiv Detail & Related papers (2021-06-21T20:16:40Z) - SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients [99.13839450032408]
It is desirable to design a universal framework for adaptive algorithms to solve general problems.
In particular, our novel framework provides convergence analysis support for adaptive methods in the nonconvex setting.
arXiv Detail & Related papers (2021-06-15T15:16:28Z) - Optimizing Optimizers: Regret-optimal gradient descent algorithms [9.89901717499058]
We study the existence, uniqueness and consistency of regret-optimal algorithms.
By providing first-order optimality conditions for the control problem, we show that regret-optimal algorithms must satisfy a specific structure in their dynamics.
We present fast numerical methods for approximating them, generating optimization algorithms which directly optimize their long-term regret.
arXiv Detail & Related papers (2020-12-31T19:13:53Z) - Online Model Selection for Reinforcement Learning with Function
Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z) - Improving the Quantum Approximate Optimization Algorithm with
postselection [0.0]
Combinatorial optimization is among the main applications envisioned for near-term and fault-tolerant quantum computers.
We consider a well-studied quantum algorithm for optimization: the Quantum Approximate Optimization Algorithm (QAOA) applied to the MaxCut problem on 3-regular graphs.
We derive theoretical upper and lower bounds showing that a constant (though small) increase of the fraction of satisfied edges is indeed achievable.
arXiv Detail & Related papers (2020-11-10T22:17:50Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.