The Hypervolume Indicator Hessian Matrix: Analytical Expression,
Computational Time Complexity, and Sparsity
- URL: http://arxiv.org/abs/2211.04171v1
- Date: Tue, 8 Nov 2022 11:24:18 GMT
- Title: The Hypervolume Indicator Hessian Matrix: Analytical Expression,
Computational Time Complexity, and Sparsity
- Authors: André H. Deutz, Michael T.M. Emmerich, Hao Wang
- Abstract summary: This paper establishes the analytical expression of the Hessian matrix of the mapping from a (fixed size) collection of $n$ points in the $d$-dimensional decision space to the scalar hypervolume indicator value.
The Hessian matrix plays a crucial role in second-order methods, such as the Newton-Raphson optimization method, and it can be used for the verification of local optimal sets.
- Score: 4.523133864190258
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of approximating the Pareto front of a multiobjective
optimization problem can be reformulated as the problem of finding a set that
maximizes the hypervolume indicator. This paper establishes the analytical
expression of the Hessian matrix of the mapping from a (fixed size) collection
of $n$ points in the $d$-dimensional decision space (or $m$-dimensional
objective space) to the scalar hypervolume indicator value. To define the
Hessian matrix, the input set is vectorized, and the matrix is derived by
analytical differentiation of the mapping from a vectorized set to the
hypervolume indicator. The Hessian matrix plays a crucial role in second-order
methods, such as the Newton-Raphson optimization method, and it can be used for
the verification of local optimal sets. So far, the full analytical expression
was only established and analyzed for the relatively simple bi-objective case.
This paper will derive the full expression for arbitrary dimensions ($m\geq2$
objective functions). For the practically important three-dimensional case, we
also provide an asymptotically efficient algorithm with time complexity in
$O(n\log n)$ for the exact computation of the Hessian matrix's non-zero entries.
We establish a sharp bound of $12m-6$ on the number of non-zero entries. For
the general $m$-dimensional case, a compact recursive analytical expression is
established and its algorithmic implementation is discussed; this recursive
expression also implies some sparsity results. To validate and illustrate the
analytically derived algorithms and results, we provide a few numerical
examples using Python and Mathematica implementations. Open-source
implementations of the algorithms and testing data are made available as a
supplement to this paper.
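To make the central object concrete, here is a minimal sketch (not the paper's algorithm, and not its analytical Hessian) of the bi-objective hypervolume indicator for a non-dominated point set under minimization, together with a central finite-difference approximation of the Hessian of the vectorized-set-to-hypervolume mapping. The reference point `ref` and step size `h` are illustrative choices.

```python
# Minimal sketch, assuming minimization in 2-D objective space with a
# mutually non-dominated input set. This is NOT the paper's analytical
# Hessian; it only illustrates the mapping the abstract describes.

def hypervolume_2d(points, ref):
    """Area dominated by `points` and bounded above by `ref`."""
    pts = sorted(points)              # ascending in the first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        # horizontal slab between consecutive y-levels
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv

def fd_hessian(points, ref, h=1e-5):
    """Central-difference Hessian of the vectorized-set -> hypervolume map."""
    flat = [c for p in points for c in p]
    n = len(flat)
    def hv(v):
        return hypervolume_2d([tuple(v[i:i + 2]) for i in range(0, n, 2)], ref)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            vpp = flat[:]; vpp[i] += h; vpp[j] += h
            vpm = flat[:]; vpm[i] += h; vpm[j] -= h
            vmp = flat[:]; vmp[i] -= h; vmp[j] += h
            vmm = flat[:]; vmm[i] -= h; vmm[j] -= h
            H[i][j] = (hv(vpp) - hv(vpm) - hv(vmp) + hv(vmm)) / (4 * h * h)
    return H

pts = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(pts, (4.0, 4.0)))   # → 6.0
H = fd_hessian(pts, (4.0, 4.0))
```

Even this toy example exhibits the sparsity the paper analyzes: mixed second derivatives couple a point's coordinates only with its own and its immediate neighbors' coordinates in the sorted order, so most entries of `H` vanish.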
Related papers
- Riemannian Optimization for Non-convex Euclidean Distance Geometry with Global Recovery Guarantees [6.422262171968397]
Two algorithms are proposed to solve the Euclidean Distance Geometry problem.
First algorithm converges linearly to the true solution.
Second algorithm demonstrates strong numerical performance on both synthetic and real data.
arXiv Detail & Related papers (2024-10-08T21:19:22Z) - Fine-grained Analysis and Faster Algorithms for Iteratively Solving Linear Systems [9.30306458153248]
We consider the spectral tail condition number, $\kappa_\ell$, defined as the ratio between the $\ell$-th largest and the smallest singular value of the matrix representing the system.
Some of the implications of our result, and of the use of $\kappa_\ell$, include a direct improvement over a fine-grained analysis of the Conjugate Gradient method.
arXiv Detail & Related papers (2024-05-09T14:56:49Z) - Computational-Statistical Gaps in Gaussian Single-Index Models [77.1473134227844]
Single-Index Models are high-dimensional regression problems with planted structure.
We show that computationally efficient algorithms, both within the Statistical Query (SQ) and the Low-Degree Polynomial (LDP) framework, necessarily require $\Omega(d^{k^\star/2})$ samples.
arXiv Detail & Related papers (2024-03-08T18:50:19Z) - Low-complexity subspace-descent over symmetric positive definite
manifold [9.346050098365648]
We develop low-complexity algorithms for the minimization of functions over the symmetric positive definite (SPD) manifold.
The proposed approach utilizes carefully chosen subspaces that allow the update to be written as a product of the Cholesky factor of the iterate and a sparse matrix.
arXiv Detail & Related papers (2023-05-03T11:11:46Z) - Multi-block-Single-probe Variance Reduced Estimator for Coupled
Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on strong convexity.
arXiv Detail & Related papers (2022-07-18T12:03:26Z) - Approximate Vanishing Ideal Computations at Scale [21.01267270902429]
We focus on scaling up the Oracle Approximate Vanishing Ideal algorithm (OAVI), an attractive preprocessing technique for large-scale machine learning.
arXiv Detail & Related papers (2022-07-04T07:17:28Z) - Optimal Randomized Approximations for Matrix based Renyi's Entropy [16.651155375441796]
We develop random approximations for integer-order $\alpha$ cases and series approximations for non-integer $\alpha$ cases.
Large-scale simulations and real-world applications validate the effectiveness of the developed approximations.
arXiv Detail & Related papers (2022-05-16T02:24:52Z) - Leverage Score Sampling for Tensor Product Matrices in Input Sparsity
Time [54.65688986250061]
We give an input sparsity time sampling algorithm for approximating the Gram matrix corresponding to the $q$-fold column-wise tensor product of $q$ matrices.
Our sampling technique relies on a collection of $q$ partially correlated random projections which can be simultaneously applied to a dataset $X$ in total time.
arXiv Detail & Related papers (2022-02-09T15:26:03Z) - Analysis of Truncated Orthogonal Iteration for Sparse Eigenvector
Problems [78.95866278697777]
We propose two variants of the Truncated Orthogonal Iteration to compute multiple leading eigenvectors with sparsity constraints simultaneously.
We then apply our algorithms to solve the sparse principal component analysis problem for a wide range of test datasets.
arXiv Detail & Related papers (2021-03-24T23:11:32Z) - Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature
Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
arXiv Detail & Related papers (2020-12-29T04:08:38Z) - Quantum algorithms for spectral sums [50.045011844765185]
We propose new quantum algorithms for estimating spectral sums of positive semi-definite (PSD) matrices.
We show how the algorithms and techniques used in this work can be applied to three problems in spectral graph theory.
arXiv Detail & Related papers (2020-11-12T16:29:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.