Subgroup-based Rank-1 Lattice Quasi-Monte Carlo
- URL: http://arxiv.org/abs/2011.06446v1
- Date: Thu, 29 Oct 2020 03:42:30 GMT
- Title: Subgroup-based Rank-1 Lattice Quasi-Monte Carlo
- Authors: Yueming Lyu, Yuan Yuan and Ivor W. Tsang
- Abstract summary: We propose a simple closed-form rank-1 lattice construction method based on group theory.
Our method achieves superior approximation performance on benchmark integration test problems and kernel approximation problems.
- Score: 51.877197074344586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quasi-Monte Carlo (QMC) is an essential tool for integral approximation,
Bayesian inference, and sampling for simulation in science, etc. In the QMC
area, the rank-1 lattice is important due to its simple operation and nice
properties for point set construction. However, the construction of the
generating vector of the rank-1 lattice is usually time-consuming because of an
exhaustive computer search. To address this issue, we propose a simple
closed-form rank-1 lattice construction method based on group theory. Our
method reduces the number of distinct pairwise distance values to generate a
more regular lattice. We theoretically prove a lower and an upper bound of the
minimum pairwise distance of any non-degenerate rank-1 lattice. Empirically,
our methods can generate a near-optimal rank-1 lattice compared with the
Korobov exhaustive search regarding the $l_1$-norm and $l_2$-norm minimum
distance. Moreover, experimental results show that our method achieves superior
approximation performance on benchmark integration test problems and kernel
approximation problems.
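A rank-1 lattice is fully determined by a generating vector $z$ and a point count $n$: the points are $x_i = \{i z / n\}$ for $i = 0, \dots, n-1$. The sketch below (not the authors' subgroup-based construction; all function names are ours) generates rank-1 lattice points, measures the minimum pairwise toroidal distance, and runs the Korobov-style exhaustive search that the abstract uses as a baseline.

```python
import numpy as np

def rank1_lattice(z, n):
    """Generate the n-point rank-1 lattice in [0,1)^d from generating vector z."""
    i = np.arange(n).reshape(-1, 1)        # point indices 0..n-1
    return (i * np.asarray(z)) % n / n     # x_i = {i * z / n}

def min_toroidal_distance(points, ord=2):
    """Minimum pairwise distance on the unit torus. By the lattice's group
    (shift-invariance) structure, it suffices to compare each point to x_0."""
    best = np.inf
    for j in range(1, len(points)):
        d = np.abs(points[j] - points[0])
        d = np.minimum(d, 1.0 - d)         # wrap-around distance per coordinate
        best = min(best, np.linalg.norm(d, ord=ord))
    return best

def korobov_search(n, d):
    """Exhaustive Korobov search: z = (1, a, a^2, ...) mod n, pick the a
    that maximizes the minimum pairwise distance."""
    best_a, best_dist = None, -1.0
    for a in range(1, n):
        z = np.array([pow(a, k, n) for k in range(d)])
        dist = min_toroidal_distance(rank1_lattice(z, n))
        if dist > best_dist:
            best_a, best_dist = a, dist
    return best_a, best_dist
```

The $O(n)$ candidate loop times the $O(nd)$ distance check illustrates why exhaustive search is costly and why a closed-form construction is attractive.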
Related papers
- High-accuracy sampling from constrained spaces with the Metropolis-adjusted Preconditioned Langevin Algorithm [12.405427902037971]
We propose a first-order sampling method for approximate sampling from a target distribution whose support is a proper convex subset of $\mathbb{R}^d$.
Our proposed method is the result of applying a Metropolis-Hastings filter to the Markov chain formed by a single step of the preconditioned Langevin algorithm.
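As a rough illustration of the Metropolis-adjusted Langevin idea described above, the sketch below implements one generic (identity-preconditioned) MALA step in which proposals outside the convex support are simply rejected; this is a minimal sketch under our own naming, not the paper's preconditioned algorithm.

```python
import numpy as np

def mala_step(x, grad_logpi, log_pi, step, in_support, rng):
    """One Metropolis-adjusted Langevin step. Proposals outside the support
    are rejected outright, which is the correct MH behavior when the target
    density is zero outside a convex set."""
    noise = rng.standard_normal(x.shape)
    y = x + step * grad_logpi(x) + np.sqrt(2 * step) * noise  # Langevin proposal
    if not in_support(y):
        return x                                  # density is 0 there: reject

    def log_q(b, a):                              # log of Gaussian proposal q(b | a)
        diff = b - a - step * grad_logpi(a)
        return -np.sum(diff ** 2) / (4 * step)

    # Metropolis-Hastings acceptance ratio
    log_alpha = log_pi(y) + log_q(x, y) - log_pi(x) - log_q(y, x)
    if np.log(rng.uniform()) < log_alpha:
        return y
    return x
```

For example, taking `log_pi` to be a standard Gaussian log-density and `in_support` the unit ball keeps the chain inside the ball while targeting the truncated Gaussian.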
arXiv Detail & Related papers (2024-12-24T23:21:23Z) - Contractivity and linear convergence in bilinear saddle-point problems: An operator-theoretic approach [4.895675355301809]
We study the convex-concave bilinear saddle-point problem $\min_x \max_y f(x) + y^\top Ax - g(y)$.
By employing tools from monotone operator theory, we prove the contractivity of several first-order primal-dual algorithms.
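To make the problem class concrete, the sketch below runs a standard first-order primal-dual iteration (gradient descent in $x$, ascent in $y$, with primal extrapolation in the Chambolle-Pock style) on the bilinear saddle-point objective; it is a generic sketch, not one of the specific algorithms analyzed in the paper, and all names are ours.

```python
import numpy as np

def bilinear_saddle_gda(A, grad_f, grad_g, x0, y0,
                        tau=0.1, sigma=0.1, iters=3000):
    """Primal-dual gradient iteration for min_x max_y f(x) + y^T A x - g(y).
    Converges for smooth strongly convex f, g with small enough steps."""
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()
    for _ in range(iters):
        y = y + sigma * (A @ x_bar - grad_g(y))   # ascent step in y
        x_new = x - tau * (grad_f(x) + A.T @ y)   # descent step in x
        x_bar = 2 * x_new - x                     # primal extrapolation
        x = x_new
    return x, y
```

With $f(x) = g(y) = \tfrac{1}{2}\|\cdot\|^2$, the unique saddle point is $(0, 0)$, which the iteration contracts toward.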
arXiv Detail & Related papers (2024-10-18T16:43:10Z) - Riemannian Optimization for Non-convex Euclidean Distance Geometry with Global Recovery Guarantees [6.422262171968397]
Two algorithms are proposed to solve the Euclidean Distance Geometry problem.
First algorithm converges linearly to the true solution.
Second algorithm demonstrates strong numerical performance on both synthetic and real data.
arXiv Detail & Related papers (2024-10-08T21:19:22Z) - Convex optimization over a probability simplex [0.0]
We propose a new iteration scheme, the Cauchy-Simplex, to optimize convex problems over the probability simplex.
Each iteration of the Cauchy-Simplex consists of simple operations, making it well-suited for high-dimensional problems.
arXiv Detail & Related papers (2023-05-15T22:14:22Z) - A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization [53.044526424637866]
In this paper we consider finding an approximate second-order stationary point (SOSP) that minimizes a twice differentiable function subject to general nonconvex conic constraints.
In particular, we propose a Newton-CG based barrier-augmented Lagrangian method for finding an approximate SOSP.
arXiv Detail & Related papers (2023-01-10T20:43:29Z) - Min-Max Optimization Made Simple: Approximating the Proximal Point
Method via Contraction Maps [77.8999425439444]
We present a first-order method that admits near-optimal convergence rates for convex/concave min-max problems.
Our work is based on the fact that the update rule of the Proximal Point method can be approximated up to any desired accuracy.
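The proximal update $\operatorname{prox}_{\lambda f}(x) = \arg\min_u f(u) + \tfrac{1}{2\lambda}\|u - x\|^2$ can indeed be approximated by iterating a contraction map: its optimality condition $u = x - \lambda \nabla f(u)$ is a fixed-point equation, and the map is a contraction whenever $\lambda L < 1$ for $L$-smooth $f$. A minimal sketch under these assumptions (names are ours, not the paper's method):

```python
import numpy as np

def approx_prox(grad_f, x, lam, iters=50):
    """Approximate prox_{lam * f}(x) by iterating the contraction
    u -> x - lam * grad_f(u), valid when lam * L < 1 for L-smooth f."""
    u = x.copy()
    for _ in range(iters):
        u = x - lam * grad_f(u)   # fixed point satisfies u = x - lam * grad_f(u)
    return u
```

For $f(u) = \tfrac{1}{2}\|u\|^2$ the prox has the closed form $x / (1 + \lambda)$, which the iteration recovers geometrically fast.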
arXiv Detail & Related papers (2023-01-10T12:18:47Z) - Leveraging Non-uniformity in First-order Non-convex Optimization [93.6817946818977]
Non-uniform refinement of objective functions leads to Non-uniform Smoothness (NS) and the Non-uniform Łojasiewicz inequality (NL).
New definitions inspire new geometry-aware first-order methods that converge to global optimality faster than the classical $\Omega(1/t^2)$ lower bounds.
arXiv Detail & Related papers (2021-05-13T04:23:07Z) - An Empirical Process Approach to the Union Bound: Practical Algorithms
for Combinatorial and Linear Bandits [34.06611065493047]
This paper proposes near-optimal algorithms for the pure-exploration linear bandit problem in the fixed confidence and fixed budget settings.
We provide an algorithm whose sample complexity scales with the geometry of the instance and avoids an explicit union bound over the number of arms.
We also propose the first algorithm for linear bandits in the fixed budget setting.
arXiv Detail & Related papers (2020-06-21T00:56:33Z) - Explicit Regularization of Stochastic Gradient Methods through Duality [9.131027490864938]
We propose randomized Dykstra-style algorithms based on randomized dual coordinate ascent.
For accelerated coordinate descent, we obtain a new algorithm that has better convergence properties than existing gradient methods in the interpolating regime.
arXiv Detail & Related papers (2020-03-30T20:44:56Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
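The paper above studies randomized preconditioned iterative solvers; as a simpler illustration of the same embedding idea, the sketch below applies a Gaussian projection in the basic sketch-and-solve fashion (compress the tall system, then solve the small one). This is a hypothetical minimal example, not the paper's algorithm.

```python
import numpy as np

def sketched_lstsq(A, b, m, rng):
    """Approximate least-squares via a Gaussian sketch: solve the compressed
    m-row problem min_x ||S A x - S b|| instead of the full n-row one."""
    n = A.shape[0]
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian embedding, m << n
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x
```

Choosing the sketch size `m` a modest multiple of the column count trades a small residual inflation for solving a much smaller system; the embeddings in the paper (Gaussian projections, subsampled Hadamard transforms) play this compression role inside iterative methods.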
arXiv Detail & Related papers (2020-02-21T17:45:32Z) - Clustering Binary Data by Application of Combinatorial Optimization
Heuristics [52.77024349608834]
We study clustering methods for binary data, first defining aggregation criteria that measure the compactness of clusters.
Five new and original methods are introduced, using neighborhoods and population behavior optimization metaheuristics.
From a set of 16 data tables generated by a quasi-Monte Carlo experiment, a comparison is performed for one of the aggregations using L1 dissimilarity, with hierarchical clustering and a version of k-means: partitioning around medoids (PAM).
arXiv Detail & Related papers (2020-01-06T23:33:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.