Fast $L^2$ optimal mass transport via reduced basis methods for the Monge-Ampère equation
- URL: http://arxiv.org/abs/2112.01878v1
- Date: Fri, 3 Dec 2021 12:30:46 GMT
- Title: Fast $L^2$ optimal mass transport via reduced basis methods for the Monge-Ampère equation
- Authors: Shijin Hou, Yanlai Chen, Yinhua Xia
- Abstract summary: We propose a machine learning-like method for solving the parameterized Monge-Ampère equation.
Several challenging numerical tests demonstrate the accuracy and high efficiency of our method for solving the Monge-Ampère equation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Repeatedly solving the parameterized optimal mass transport (pOMT) problem is a frequent task in applications such as image registration and adaptive grid generation. It is thus critical to develop a highly efficient reduced solver that is as accurate as the full order model. In this paper, we propose such a machine learning-like method for pOMT by adapting a new reduced basis (RB) technique specifically designed for nonlinear equations, the reduced residual reduced over-collocation (R2-ROC) approach, to the parameterized Monge-Ampère equation. It builds on top of a narrow-stencil finite difference method (FDM), a so-called truth solver, which we propose in this paper for the Monge-Ampère equation with a transport boundary. Together with the R2-ROC approach, it allows us to handle the strong and unique nonlinearity pertaining to the Monge-Ampère equation, achieving online efficiency without resorting to any direct approximation of the nonlinearity. Several challenging numerical tests demonstrate the accuracy and high efficiency of our method for solving the Monge-Ampère equation with various parametric boundary conditions.
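
For context, Brenier's theorem states that the $L^2$-optimal map pushing a density $f$ on $\Omega$ to a density $g$ on $\Omega'$ is the gradient $T = \nabla u$ of a convex potential $u$; conservation of mass then turns OMT into the fully nonlinear Monge-Ampère equation $\det D^2 u(x) = f(x)/g(\nabla u(x))$ for $x \in \Omega$, subject to the transport boundary condition $\nabla u(\Omega) = \Omega'$ referred to in the abstract.

As a minimal illustration of this nonlinearity (explicitly not the paper's narrow-stencil FDM truth solver or its R2-ROC reduced solver), the sketch below runs the classical Benamou-Froese-Oberman fixed-point iteration on a 2D Monge-Ampère problem with Dirichlet rather than transport boundary data; the grid size, test solution, and iteration counts are arbitrary assumptions.

import numpy as np

# Classical Benamou-Froese-Oberman fixed-point iteration for det(D^2 u) = f
# with Dirichlet data on the unit square. It rests on the 2D identity
# (Delta u)^2 = (u_xx - u_yy)^2 + 4 u_xy^2 + 4 det(D^2 u), so a convex
# solution satisfies Delta u = sqrt((u_xx - u_yy)^2 + 4 u_xy^2 + 4 f).
# Illustrative only: not the paper's narrow-stencil FDM or R2-ROC solver.

n = 64
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

u_exact = np.exp((X**2 + Y**2) / 2)              # convex test solution
f = (1 + X**2 + Y**2) * np.exp(X**2 + Y**2)      # = det(D^2 u_exact)

U = u_exact.copy()                               # pins the boundary values
U[1:-1, 1:-1] = 0.0                              # crude interior initial guess

for outer in range(200):
    # centered second derivatives on the interior
    uxx = (U[2:, 1:-1] - 2 * U[1:-1, 1:-1] + U[:-2, 1:-1]) / h**2
    uyy = (U[1:-1, 2:] - 2 * U[1:-1, 1:-1] + U[1:-1, :-2]) / h**2
    uxy = (U[2:, 2:] - U[2:, :-2] - U[:-2, 2:] + U[:-2, :-2]) / (4 * h**2)
    rhs = np.sqrt(np.maximum((uxx - uyy)**2 + 4 * uxy**2
                             + 4 * f[1:-1, 1:-1], 0.0))
    # a few Jacobi sweeps for the Poisson update Delta u = rhs
    for sweep in range(50):
        U[1:-1, 1:-1] = 0.25 * (U[2:, 1:-1] + U[:-2, 1:-1]
                                + U[1:-1, 2:] + U[1:-1, :-2] - h**2 * rhs)

print("max error vs exact solution:", np.abs(U - u_exact).max())
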
Related papers
- Pushing the Limits of Large Language Model Quantization via the Linearity Theorem [71.3332971315821]
We present a "line theoremarity" establishing a direct relationship between the layer-wise $ell$ reconstruction error and the model perplexity increase due to quantization.
This insight enables two novel applications: (1) a simple data-free LLM quantization method using Hadamard rotations and MSE-optimal grids, dubbed HIGGS, and (2) an optimal solution to the problem of finding non-uniform per-layer quantization levels. A toy sketch of the Hadamard-rotation ingredient follows this entry.
arXiv Detail & Related papers (2024-11-26T15:35:44Z)
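
The following is a hedged toy sketch of the Hadamard-rotation ingredient from the entry above: rotate a weight block with a randomized orthonormal Hadamard transform, round onto a fixed grid, and rotate back. The block size, RMS scaling, and the uniform 4-bit grid are assumptions; the paper instead derives MSE-optimal grids.

import numpy as np
from scipy.linalg import hadamard

# Toy data-free quantization sketch: a randomized Hadamard rotation makes the
# entries of a weight block approximately Gaussian, after which they are
# rounded to the nearest value of a fixed grid. The uniform grid below is an
# assumption; HIGGS uses MSE-optimal grids.

rng = np.random.default_rng(0)
n = 256                                    # block size, power of two (assumption)
w = rng.standard_normal(n)                 # a block of "weights"

H = hadamard(n) / np.sqrt(n)               # orthonormal Hadamard matrix
s = rng.choice([-1.0, 1.0], size=n)        # random sign flips
rotate = lambda v: H @ (s * v)
unrotate = lambda v: s * (H.T @ v)         # exact inverse, since H is orthonormal

grid = np.linspace(-3.0, 3.0, 16)          # 4-bit uniform grid (assumption)

def quantize(v, grid):
    """Round every entry of v to the nearest grid value."""
    return grid[np.abs(v[:, None] - grid[None, :]).argmin(axis=1)]

w_rot = rotate(w)
scale = np.sqrt(np.mean(w_rot**2))         # per-block RMS normalization
w_hat = unrotate(quantize(w_rot / scale, grid) * scale)

print("relative quantization error:", np.linalg.norm(w - w_hat) / np.linalg.norm(w))
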
- Alternating Minimization Schemes for Computing Rate-Distortion-Perception Functions with $f$-Divergence Perception Constraints [10.564071872770146]
We study the computation of the rate-distortion-perception function (RDPF) for discrete memoryless sources.
We characterize the optimal parametric solutions.
We provide sufficient conditions on the distortion and the perception constraints. A sketch of the alternating-minimization idea, without the perception constraint, follows this entry.
arXiv Detail & Related papers (2024-08-27T12:50:12Z)
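
For illustration, here is a minimal sketch of the classical Blahut-Arimoto alternating minimization for the plain rate-distortion function; it omits the perception constraint that the paper's schemes add, and the source, distortion matrix, and multiplier beta below are toy assumptions.

import numpy as np

# Classical Blahut-Arimoto alternating minimization for the rate-distortion
# function of a discrete memoryless source. This is NOT the paper's RDPF
# algorithm (no perception constraint); it only shows the alternating
# structure. Source, distortion, and beta are toy assumptions.

p = np.array([0.5, 0.3, 0.2])            # source distribution p(x)
d = 1.0 - np.eye(3)                      # Hamming distortion d(x, y)
beta = 2.0                               # Lagrange multiplier for distortion

Q = np.full((3, 3), 1.0 / 3.0)           # conditional Q(y|x), rows sum to 1
for it in range(200):
    q = p @ Q                            # marginal q(y) = sum_x p(x) Q(y|x)
    W = q[None, :] * np.exp(-beta * d)   # unnormalized update q(y) exp(-beta d)
    Q = W / W.sum(axis=1, keepdims=True)

q = p @ Q
D = np.sum(p[:, None] * Q * d)                         # expected distortion
R = np.sum(p[:, None] * Q * np.log2(Q / q[None, :]))   # mutual information (bits)
print(f"distortion {D:.4f}, rate {R:.4f} bits")
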
- Oracle Complexity Reduction for Model-free LQR: A Stochastic Variance-Reduced Policy Gradient Approach [4.422315636150272]
We investigate the problem of learning an $\epsilon$-approximate solution for the discrete-time Linear Quadratic Regulator (LQR) problem.
Our method combines both one-point and two-point estimations in a dual-loop variance-reduced algorithm. A sketch of two-point gradient estimation follows this entry.
arXiv Detail & Related papers (2023-09-19T15:03:18Z)
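
As a generic illustration of the two-point estimation named above (not the paper's dual-loop variance-reduced LQR algorithm), the following sketches a standard two-point zeroth-order gradient estimator; the quadratic objective, smoothing radius, and sample count are assumptions.

import numpy as np

# Standard two-point zeroth-order gradient estimator:
#   g = (d / (2r)) * (f(x + r*u) - f(x - r*u)) * u,  u uniform on the sphere,
# estimates the gradient of a smoothed version of f from function values only.
# Generic sketch, not the paper's dual-loop variance-reduced LQR method.

rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)               # toy positive-definite quadratic
f = lambda x: 0.5 * x @ A @ x             # true gradient is A @ x

def two_point_grad(f, x, r=1e-3, n=64):
    g = np.zeros_like(x)
    for _ in range(n):
        u = rng.standard_normal(x.size)
        u /= np.linalg.norm(u)            # random unit direction
        g += (f(x + r * u) - f(x - r * u)) / (2 * r) * u
    return (x.size / n) * g               # scale by dimension, average over draws

x = rng.standard_normal(d)
g_hat, g_true = two_point_grad(f, x), A @ x
cos = g_hat @ g_true / (np.linalg.norm(g_hat) * np.linalg.norm(g_true))
print("cosine similarity to true gradient:", cos)
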
- Globally Convergent Accelerated Algorithms for Multilinear Sparse Logistic Regression with $\ell_0$-constraints [2.323238724742687]
Multilinear logistic regression serves as a powerful tool for the analysis of multidimensional data.
We propose an accelerated proximal alternating linearized minimization method, APALM$+$, to solve the $\ell_0$-constrained MLSR model.
We also show that APALM$+$ is globally convergent to a first-order critical point, establishing convergence via the Kurdyka-Łojasiewicz property. A sketch of the generic $\ell_0$ proximal step follows this entry.
arXiv Detail & Related papers (2023-09-17T11:05:08Z)
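
As a generic illustration of $\ell_0$-type steps (not the APALM$+$ method itself): the proximal operator of $\lambda\|x\|_0$ is hard thresholding, used below inside a plain iterative-hard-thresholding loop on a toy sparse least-squares problem. All problem sizes and parameters are assumptions.

import numpy as np

# The proximal operator of lambda * ||x||_0 keeps x_i when |x_i| > sqrt(2*lambda)
# and zeros it otherwise (hard thresholding). Below it drives plain iterative
# hard thresholding on a toy sparse least-squares problem. This is a generic
# ingredient, not the paper's APALM+ algorithm.

def hard_threshold(x, lam):
    return np.where(np.abs(x) > np.sqrt(2.0 * lam), x, 0.0)

rng = np.random.default_rng(0)
m, n = 80, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 3.0 * rng.standard_normal(5)
b = A @ x_true

step = 1.0 / np.linalg.norm(A, 2)**2      # 1/L for the least-squares term
lam = 1e-3
x = np.zeros(n)
for _ in range(500):
    # gradient step on 0.5*||Ax - b||^2, then prox of (lam*step)*||.||_0
    x = hard_threshold(x - step * A.T @ (A @ x - b), lam * step)

print("recovered support:", np.flatnonzero(x), "true support:", np.flatnonzero(x_true))
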
- Provable and Practical: Efficient Exploration in Reinforcement Learning via Langevin Monte Carlo [104.9535542833054]
We present a scalable and effective exploration strategy based on Thompson sampling for reinforcement learning (RL).
We directly sample the Q function from its posterior distribution using Langevin Monte Carlo.
Our approach achieves better or similar results compared with state-of-the-art deep RL algorithms on several challenging exploration tasks from the Atari57 suite. A minimal Langevin sampling sketch follows this entry.
arXiv Detail & Related papers (2023-05-29T17:11:28Z)
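
A minimal sketch of the Langevin Monte Carlo update named above, run on a toy 2D Gaussian target rather than a posterior over Q functions; the target, step size, and burn-in length are assumptions.

import numpy as np

# Unadjusted Langevin algorithm:
#   x <- x + (eta/2) * grad log p(x) + sqrt(eta) * standard normal noise.
# After burn-in, iterates are approximate samples from p. Toy 2D Gaussian
# target here; the paper applies the idea to the posterior over Q functions.

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
P = np.linalg.inv(Sigma)                 # precision matrix of target N(0, Sigma)
grad_log_p = lambda x: -P @ x

eta = 0.05
x = np.zeros(2)
samples = []
for t in range(20000):
    x = x + 0.5 * eta * grad_log_p(x) + np.sqrt(eta) * rng.standard_normal(2)
    if t > 2000:                         # discard burn-in
        samples.append(x.copy())

print("empirical covariance:\n", np.cov(np.array(samples).T))
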
- Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z)
- Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free Reinforcement Learning [52.76230802067506]
A novel model-free algorithm is proposed to minimize regret in episodic reinforcement learning.
The proposed algorithm employs an early-settled reference update rule, with the aid of two Q-learning sequences.
The design principle of our early-settled variance reduction method might be of independent interest to other RL settings.
arXiv Detail & Related papers (2021-10-09T21:13:48Z)
- Using Multilevel Circulant Matrix Approximate to Speed Up Kernel Logistic Regression [3.1427994341585688]
We employ a multilevel circulant matrix (MCM) approximation of the kernel matrix to save storage space and accelerate the solution of kernel logistic regression (KLR).
Our method makes KLR scalable to large-scale problems, with lower memory consumption, and converges to the same test accuracy in less time. A sketch of the FFT-based circulant trick follows this entry.
arXiv Detail & Related papers (2021-08-19T10:30:12Z)
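
The core trick behind circulant approximations is that a circulant matrix-vector product costs O(n log n) via the FFT. Below is a minimal single-level sketch (the paper uses multilevel circulant structure); the matrix size and entries are assumptions.

import numpy as np

# A circulant matrix is fully determined by its first column c, and its
# eigenvalues are fft(c); hence C @ x = ifft(fft(c) * fft(x)) in O(n log n).
# Single-level illustration of the trick behind MCM approximations.

rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)               # first column of C

# Dense circulant for reference: C[i, j] = c[(i - j) mod n]
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

x = rng.standard_normal(n)
y_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
print("max difference vs dense product:", np.abs(C @ x - y_fft).max())
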
- A Wasserstein Minimax Framework for Mixed Linear Regression [69.40394595795544]
Multi-modal distributions are commonly used to model clustered data in learning tasks.
We propose an optimal transport-based framework for Mixed Linear Regression problems.
arXiv Detail & Related papers (2021-06-14T16:03:51Z)
- An Online Method for A Class of Distributionally Robust Optimization with Non-Convex Objectives [54.29001037565384]
We propose a practical online method for solving a class of online distributionally robust optimization (DRO) problems.
Our studies demonstrate important applications in machine learning for improving the robustness of networks.
arXiv Detail & Related papers (2020-06-17T20:19:25Z)
- A conditional one-output likelihood formulation for multitask Gaussian processes [0.0]
Multitask Gaussian processes (MTGP) are the Gaussian process framework's solution for multioutput regression problems.
Here we introduce a novel approach that simplifies multitask learning.
We show that it is computationally competitive with state-of-the-art options.
arXiv Detail & Related papers (2020-06-05T14:59:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.