On Riemannian Approach for Constrained Optimization Model in Extreme Classification Problems
- URL: http://arxiv.org/abs/2109.15021v1
- Date: Thu, 30 Sep 2021 11:28:35 GMT
- Title: On Riemannian Approach for Constrained Optimization Model in Extreme Classification Problems
- Authors: Jayadev Naram, Tanmay Kumar Sinha, Pawan Kumar
- Abstract summary: A constrained optimization problem is formulated as an optimization problem on a matrix manifold.
The proposed approach is tested on several real-world large-scale multi-label datasets.
- Score: 2.7436792484073638
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel Riemannian method for solving the Extreme multi-label
classification problem that exploits the geometric structure of the sparse
low-dimensional local embedding models. A constrained optimization problem is
formulated as an optimization problem on a matrix manifold and solved using a
Riemannian optimization method. The proposed approach is tested on several real-world large-scale multi-label datasets, and its usefulness is demonstrated through numerical experiments. The experiments suggest that the proposed method is the fastest to train and has the smallest model size among the embedding-based methods. An outline of the convergence proof for the proposed Riemannian optimization method is also given.
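To make the recipe concrete, here is a minimal sketch (not the authors' code) of the general pattern the abstract describes: recast a constrained problem as unconstrained optimization on a matrix manifold and hand it to an off-the-shelf Riemannian solver. The subspace-fitting objective, the toy sizes, and the use of pymanopt are all assumptions standing in for the paper's sparse local-embedding model.

```python
import numpy as np
import autograd.numpy as anp
import pymanopt
from pymanopt.manifolds import Stiefel
from pymanopt.optimizers import ConjugateGradient

n, k = 200, 10                                         # toy ambient/embedding dims
A = np.random.default_rng(0).standard_normal((n, 50))  # stand-in data matrix

manifold = Stiefel(n, k)  # {U in R^{n x k} : U^T U = I}, i.e. the constraint set

@pymanopt.function.autograd(manifold)
def cost(U):
    # Reconstruction error of A in the k-dimensional subspace spanned by U.
    return anp.linalg.norm(A - U @ (U.T @ A)) ** 2

# The orthonormality constraint is absorbed into the manifold, so a generic
# Riemannian solver handles what would otherwise be a constrained problem.
problem = pymanopt.Problem(manifold, cost)
U_opt = ConjugateGradient().run(problem).point
```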
Related papers
- FORML: A Riemannian Hessian-free Method for Meta-learning on Stiefel Manifolds [4.757859522106933]
This paper introduces a Hessian-free approach that uses a first-order approximation of derivatives on the Stiefel manifold.
Our method significantly reduces the computational load and memory footprint.
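As a rough illustration of the first-order machinery such Stiefel-manifold methods rely on, the NumPy sketch below (an assumption, not FORML itself) shows the two standard primitives: projecting a Euclidean gradient onto the tangent space and retracting the step back onto the manifold via QR.

```python
import numpy as np

def stiefel_tangent_projection(X, G):
    """Project a Euclidean gradient G onto the tangent space at X (X^T X = I)."""
    sym = (X.T @ G + G.T @ X) / 2
    return G - X @ sym

def qr_retraction(X, xi):
    """Map the tangent step X + xi back onto the Stiefel manifold via QR."""
    Q, R = np.linalg.qr(X + xi)
    # Fix column signs (positive diagonal of R) so the retraction is well defined.
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)

# One first-order descent step for a stand-in Euclidean gradient G:
X = np.linalg.qr(np.random.randn(8, 3))[0]   # a random Stiefel point
G = np.random.randn(8, 3)
X_next = qr_retraction(X, -0.1 * stiefel_tangent_projection(X, G))
```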
arXiv Detail & Related papers (2024-02-28T10:57:30Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-CSP, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis, we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Multistage Stochastic Optimization via Kernels [3.7565501074323224]
We develop a non-parametric, data-driven, tractable approach for solving multistage optimization problems.
We show that the proposed method produces decision rules with near-optimal average performance.
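The following sketch conveys the flavour of a kernel-based, data-driven decision rule; the Gaussian kernel, the loss table, and all names are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def kernel_decision(x_new, X_hist, losses, candidates, bandwidth=1.0):
    """Pick the candidate minimizing a kernel-weighted average of historical
    losses; losses[i, j] is the cost of candidate j under past scenario i."""
    d2 = np.sum((X_hist - x_new) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))   # similarity to the new covariate
    w /= w.sum()
    return candidates[int(np.argmin(w @ losses))]

rng = np.random.default_rng(1)
X_hist = rng.standard_normal((50, 3))        # past covariates
losses = rng.random((50, 4))                 # costs of 4 candidate decisions
z = kernel_decision(rng.standard_normal(3), X_hist, losses, list(range(4)))
```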
arXiv Detail & Related papers (2023-03-11T23:19:32Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable components separately, linearizing only the smooth parts.
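A Frank-Wolfe-flavoured sketch of the linearization idea follows (an assumed instance, not the paper's general scheme): over a convex compact set, replace the differentiable part by its linearization at the current iterate and solve the resulting subproblem exactly.

```python
import numpy as np

def linearized_step(x, grad_f, lmo, t):
    """One step: minimize the linearization <grad f(x), s> over the feasible
    set via a linear minimization oracle, then blend to stay feasible."""
    s = lmo(grad_f(x))
    return (1 - t) * x + t * s

# Toy instance: smooth quadratic over the l1 ball, whose LMO is a signed vertex.
grad_f = lambda x: x - np.ones_like(x)
lmo = lambda g: -np.sign(g) * (np.arange(g.size) == np.argmax(np.abs(g)))
x = np.zeros(5)
for k in range(1, 51):
    x = linearized_step(x, grad_f, lmo, t=2.0 / (k + 2))  # classic step sizes
```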
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Faster Riemannian Newton-type Optimization by Subsampling and Cubic Regularization [3.867143522757309]
This work is on constrained large-scale non-convex optimization where the constraint set implies a manifold structure.
We propose a new second-order Riemannian optimization algorithm, aiming at improving convergence and reducing computational cost.
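Schematically, the two ingredients the title names look as follows (a Euclidean sketch under stated assumptions, not the paper's Riemannian algorithm): a subsampled Hessian estimate and the cubic-regularized model minimized at each Newton-type step.

```python
import numpy as np

def subsampled_hessian(hess_i, n_samples, batch, x):
    """Average the per-sample Hessians hess_i(i, x) over a random index batch."""
    idx = np.random.choice(n_samples, size=batch, replace=False)
    return sum(hess_i(i, x) for i in idx) / batch

def cubic_model(g, H, sigma):
    """m(s) = g^T s + 0.5 s^T H s + (sigma/3)||s||^3, the model each step minimizes."""
    return lambda s: g @ s + 0.5 * s @ H @ s + (sigma / 3) * np.linalg.norm(s) ** 3

# Toy per-sample Hessians H_i = a_i a_i^T:
a = np.random.randn(100, 5)
H_sub = subsampled_hessian(lambda i, x: np.outer(a[i], a[i]), 100, 20, np.zeros(5))
m = cubic_model(np.ones(5), H_sub, sigma=1.0)
value = m(-0.1 * np.ones(5))   # evaluate the cubic model at a trial step
```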
arXiv Detail & Related papers (2023-02-22T00:37:44Z)
- Riemannian Optimization for Variance Estimation in Linear Mixed Models [0.0]
We take a completely novel view on parameter estimation in linear mixed models by exploiting the intrinsic geometry of the parameter space.
Our approach yields higher-quality variance parameter estimates than existing approaches.
arXiv Detail & Related papers (2022-12-18T13:08:45Z)
- The Dynamics of Riemannian Robbins-Monro Algorithms [101.29301565229265]
We propose a family of Riemannian algorithms generalizing and extending the seminal approximation framework of Robbins and Monro.
Compared to their Euclidean counterparts, Riemannian algorithms are much less understood due to the lack of a global linear structure on the manifold.
We provide a general template of almost sure convergence results that mirrors and extends the existing theory for Euclidean Robbins-Monro schemes.
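A single Riemannian Robbins-Monro step is easy to picture on the unit sphere; the sketch below (the sphere, the noisy oracle, and the toy objective are assumptions) moves along a noisy tangent direction and retracts back to the manifold with the classic 1/k step sizes.

```python
import numpy as np

def sphere_step(x, noisy_grad, gamma):
    v = noisy_grad - (x @ noisy_grad) * x  # project onto the tangent space at x
    y = x - gamma * v                      # stochastic-approximation step
    return y / np.linalg.norm(y)           # metric-projection retraction

x = np.random.randn(5)
x /= np.linalg.norm(x)
target = np.eye(5)[0]
for k in range(1, 101):
    g = x - target + 0.1 * np.random.randn(5)  # noisy gradient oracle
    x = sphere_step(x, g, gamma=1.0 / k)       # Robbins-Monro step sizes
```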
arXiv Detail & Related papers (2022-06-14T12:30:11Z)
- First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces [93.35384756718868]
Min-max algorithms have been analyzed in the Euclidean setting.
We prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate.
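RCEG generalizes the classical extragradient scheme; the sketch below is its plain Euclidean ancestor on a toy bilinear saddle problem min_x max_y x^T A y (an assumption, not the paper's geodesic construction).

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 1.0]])
x, y, eta = np.random.randn(2), np.random.randn(2), 0.1
for _ in range(200):
    # Extrapolation half-step ...
    x_half, y_half = x - eta * (A @ y), y + eta * (A.T @ x)
    # ... then the full step uses the gradients at the extrapolated point.
    x, y = x - eta * (A @ y_half), y + eta * (A.T @ x_half)
# (x, y) approaches the saddle point (0, 0) of this bilinear game.
```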
arXiv Detail & Related papers (2022-06-04T18:53:44Z)
- Automatic differentiation for Riemannian optimization on low-rank matrix and tensor-train manifolds [71.94111815357064]
In scientific computing and machine learning applications, matrices and more general multidimensional arrays (tensors) can often be approximated with the help of low-rank decompositions.
One popular tool for finding low-rank approximations is Riemannian optimization.
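The core low-rank-manifold primitive can be sketched without any library (the fixed-rank setting and the update rule here are assumptions): retract an arbitrary matrix to the manifold of rank-r matrices by truncated SVD.

```python
import numpy as np

def svd_retraction(X, r):
    """Best rank-r approximation of X (Eckart-Young), i.e. the projection
    onto the fixed-rank manifold."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# One projected/Riemannian-style step: Euclidean update, then retract.
X = svd_retraction(np.random.randn(30, 20), r=3)
G = np.random.randn(30, 20)           # stand-in Euclidean gradient
X_next = svd_retraction(X - 0.1 * G, r=3)
```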
arXiv Detail & Related papers (2021-03-27T19:56:00Z)
- Learning to Guide Random Search [111.71167792453473]
We consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold.
We develop an online learning approach that learns this manifold while performing the optimization.
We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems.
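The general idea admits a compact sketch (with a fixed random linear map standing in for the learned manifold, an assumption): random search that samples perturbations only in a low-dimensional subspace of the high-dimensional input.

```python
import numpy as np

def guided_random_search(f, d, k, iters=500, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((d, k))  # stand-in for the learned k-dim manifold
    x = np.zeros(d)
    for _ in range(iters):
        cand = x + B @ (sigma * rng.standard_normal(k))  # low-dim perturbation
        if f(cand) < f(x):                               # greedy acceptance
            x = cand
    return x

x_best = guided_random_search(lambda v: np.sum(v ** 2), d=100, k=5)
```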
arXiv Detail & Related papers (2020-04-25T19:21:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.