Soft Quantization using Entropic Regularization
- URL: http://arxiv.org/abs/2309.04428v1
- Date: Fri, 8 Sep 2023 16:41:26 GMT
- Title: Soft Quantization using Entropic Regularization
- Authors: Rajmadan Lakshmanan and Alois Pichler
- Abstract summary: We investigate the properties and robustness of the entropy-regularized quantization problem.
The proposed approximation technique naturally adopts the softmin function.
We implement a gradient approach to achieve the optimal solutions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The quantization problem aims to find the best possible approximation of
probability measures on ${\mathbb{R}}^d$ using finite, discrete measures. The
Wasserstein distance is a typical choice to measure the quality of the
approximation. This contribution investigates the properties and robustness of
the entropy-regularized quantization problem, which relaxes the standard
quantization problem. The proposed approximation technique naturally adopts the
softmin function, which is well known for its robustness from both theoretical
and practical standpoints. Moreover, we use the
entropy-regularized Wasserstein distance to evaluate the quality of the soft
quantization problem's approximation, and we implement a stochastic gradient
approach to achieve the optimal solutions. The control parameter in our
proposed method allows for the adjustment of the optimization problem's
difficulty level, providing significant advantages when dealing with
exceptionally challenging problems of interest. In addition, this contribution
empirically illustrates the performance of the method in various settings.
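A minimal numerical sketch of the idea described in the abstract (an illustration of the technique, not the authors' code): each sample is softly assigned to the support points via softmin weights with temperature `lam`, and the support points are updated by stochastic gradient descent on the smoothed objective.

```python
import numpy as np

def softmin_weights(d, lam):
    # Soft assignment: w_ij proportional to exp(-d_ij / lam); rows sum to one.
    z = np.exp(-(d - d.min(axis=1, keepdims=True)) / lam)
    return z / z.sum(axis=1, keepdims=True)

def soft_quantize(samples, n_points=5, lam=0.1, lr=0.05, steps=500, seed=0):
    # Stochastic gradient descent on the entropy-smoothed quantization objective.
    rng = np.random.default_rng(seed)
    centers = rng.choice(samples, size=n_points, replace=False)
    for _ in range(steps):
        batch = rng.choice(samples, size=64)
        d = (batch[:, None] - centers[None, :]) ** 2       # squared distances
        w = softmin_weights(d, lam)
        # d/dc_j of mean_i softmin_j d_ij  =  -2 * mean_i w_ij (x_i - c_j)
        grad = -2 * (w * (batch[:, None] - centers[None, :])).mean(axis=0)
        centers = centers - lr * grad
    return np.sort(centers)

samples = np.random.default_rng(1).normal(size=5000)
centers = soft_quantize(samples)
print(centers)
```

As `lam` shrinks toward zero the softmin approaches the hard minimum and the update tends to classical (Lloyd-style) quantization; larger `lam` smooths the landscape, which is the "difficulty-level" control the abstract refers to.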
Related papers
- Non-convex entropic mean-field optimization via Best Response flow [0.0]
We discuss the problem of minimizing non-convex functionals on the space of probability measures, regularized by the relative entropy (KL divergence) with respect to a fixed reference measure.
We show how to choose the regularizer, given the non-convex functional, so that the Best Response map becomes a contraction with respect to the $L^1$-Wasserstein distance.
arXiv Detail & Related papers (2025-05-28T18:22:08Z) - Policy Testing in Markov Decision Processes [48.642181362172906]
We study the policy testing problem in discounted Markov decision processes (MDPs) under the fixed-confidence setting.
The goal is to determine whether the value of a given policy exceeds a given threshold.
arXiv Detail & Related papers (2025-05-21T10:13:54Z) - A New Stochastic Approximation Method for Gradient-based Simulated Parameter Estimation [0.7673339435080445]
We introduce a gradient-based simulated parameter estimation framework, which employs a multi-time scale approximation algorithm.
This approach effectively addresses the ratio bias that arises in both maximum likelihood estimation and posterior density estimation problems.
Our work extends the GSPE framework to handle complex models such as Markov models and variational inference-based problems.
arXiv Detail & Related papers (2025-03-24T03:54:50Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis we introduce a novel regularization process over vanilla descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Optimizing Chance-Constrained Submodular Problems with Variable
Uncertainties [12.095075636344536]
We study chance-constrained submodular optimization problems, which capture a wide range of problems with constraints.
We present greedy algorithms that obtain a high-quality solution, i.e., a constant-factor approximation of the optimal solution.
arXiv Detail & Related papers (2023-09-23T04:48:49Z) - Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
arXiv Detail & Related papers (2022-11-14T21:54:31Z) - Quantization-Based Optimization: Alternative Stochastic Approximation of
Global Optimization [0.0]
We propose a global optimization algorithm based on quantizing the energy level of an objective function in an NP-hard problem.
Numerical experiments show that the proposed algorithm outperforms conventional learning methods in solving NP-hard optimization problems.
arXiv Detail & Related papers (2022-11-08T03:01:45Z) - Problem-Size Independent Angles for a Grover-Driven Quantum Approximate
Optimization Algorithm [0.0]
We show that the calculation of the expectation of a problem Hamiltonian under a Grover-driven, QAOA-prepared state can be performed independently of system size.
Such calculations can deliver insights into the performance and predictability of angles in QAOA in the limit of large problem sizes.
arXiv Detail & Related papers (2022-08-22T17:18:25Z) - Quantum constraint learning for quantum approximate optimization
algorithm [0.0]
This paper introduces a quantum machine learning approach to learn the mixer Hamiltonian required to hard constrain the search subspace.
One can directly plug the learnt unitary into the QAOA framework using an adaptable ansatz.
We also develop an intuitive metric that uses Wasserstein distance to assess the performance of general approximate optimization algorithms with/without constraints.
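The Wasserstein distance invoked here (and used as the quality measure in the main paper above) has a closed form in one dimension for equal-size empirical samples; a small self-contained illustration of the metric itself:

```python
import numpy as np

def wasserstein_1d(a, b):
    # For equal-size empirical samples, W1 is the mean absolute
    # difference of the sorted samples (optimal quantile coupling).
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 2.0, 3.0])
print(wasserstein_1d(x, y))  # → 1.0
```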
arXiv Detail & Related papers (2021-05-14T11:31:14Z) - Certificates of quantum many-body properties assisted by machine
learning [0.0]
We propose a novel approach combining the power of relaxation techniques with deep reinforcement learning.
We illustrate the viability of the method in the context of finding the ground state energy of many-body systems.
We provide tools to generalize the approach to other common applications in the field of quantum information processing.
arXiv Detail & Related papers (2021-03-05T17:47:26Z) - The Variational Method of Moments [65.91730154730905]
The conditional moment problem is a powerful formulation for describing structural causal parameters in terms of observables.
Motivated by a variational minimax reformulation of OWGMM, we define a very general class of estimators for the conditional moment problem.
We provide algorithms for valid statistical inference based on the same kind of variational reformulations.
arXiv Detail & Related papers (2020-12-17T07:21:06Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
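A minimal sketch of one member of that class, TD(0) policy evaluation with linear function approximation (a toy illustration, not the paper's setting):

```python
import numpy as np

def td0_linear(transitions, phi, gamma=0.9, alpha=0.1):
    # TD(0) with a linear value function V(s) = phi(s) @ w.
    w = np.zeros_like(phi(transitions[0][0]), dtype=float)
    for s, r, s_next in transitions:
        delta = r + gamma * phi(s_next) @ w - phi(s) @ w   # TD error
        w = w + alpha * delta * phi(s)
    return w

# Toy 2-state chain: state 0 -> state 1 (reward 0), state 1 loops (reward 1).
phi = lambda s: np.eye(2)[s]                 # one-hot features
transitions = [(0, 0.0, 1), (1, 1.0, 1)] * 300
w = td0_linear(transitions, phi)
print(w)   # w[1] approaches V(1) = 1 / (1 - gamma) = 10
```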
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Robust Reinforcement Learning with Wasserstein Constraint [49.86490922809473]
We show the existence of optimal robust policies, provide a sensitivity analysis for the perturbations, and then design a novel robust learning algorithm.
The effectiveness of the proposed algorithm is verified in the Cart-Pole environment.
arXiv Detail & Related papers (2020-06-01T13:48:59Z) - Cross Entropy Hyperparameter Optimization for Constrained Problem
Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered among the most promising approaches for exploiting near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape the optimization landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.