The Performance Analysis of Generalized Margin Maximizer (GMM) on
Separable Data
- URL: http://arxiv.org/abs/2010.15379v1
- Date: Thu, 29 Oct 2020 06:40:05 GMT
- Title: The Performance Analysis of Generalized Margin Maximizer (GMM) on
Separable Data
- Authors: Fariborz Salehi, Ehsan Abbasi, Babak Hassibi
- Abstract summary: "Generalized Margin Maximizer", GMM, minimizes an arbitrary convex function of the parameter vector.
We provide a precise analysis of the performance of GMM via the solution of a system of nonlinear equations.
Our theoretical results are validated by extensive simulation results across a range of parameter values, problem instances, and model structures.
- Score: 45.4329219943791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Logistic models are commonly used for binary classification tasks. The
success of such models has often been attributed to their connection to
maximum-likelihood estimators. It has been shown that the gradient descent
algorithm, when applied to the logistic loss, converges to the max-margin
classifier (a.k.a. hard-margin SVM). The performance of the max-margin
classifier has been recently analyzed. Inspired by these results, in this
paper, we present and study a more general setting, where the underlying
parameters of the logistic model possess certain structures (sparse,
block-sparse, low-rank, etc.) and introduce a more general framework (which is
referred to as "Generalized Margin Maximizer", GMM). While classical max-margin
classifiers minimize the $2$-norm of the parameter vector subject to linearly
separating the data, GMM minimizes an arbitrary convex function of the
parameter vector. We provide a precise analysis of the performance of GMM via
the solution of a system of nonlinear equations. We also provide a detailed
study for three special cases: ($1$) $\ell_2$-GMM that is the max-margin
classifier, ($2$) $\ell_1$-GMM which encourages sparsity, and ($3$)
$\ell_{\infty}$-GMM which is often used when the parameter vector has binary
entries. Our theoretical results are validated by extensive simulation results
across a range of parameter values, problem instances, and model structures.
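Reading the abstract literally, GMM solves the convex program $\min_{\mathbf{w}} f(\mathbf{w})$ subject to $y_i \mathbf{x}_i^\top \mathbf{w} \geq 1$ for all training pairs $(\mathbf{x}_i, y_i)$, where $f$ is an arbitrary convex function; taking $f = \|\cdot\|_2$ recovers the hard-margin SVM. The sketch below is not from the paper: it assumes synthetic separable data, uses cvxpy as an off-the-shelf solver, and simply instantiates the three special cases ($\ell_2$, $\ell_1$, $\ell_\infty$) named in the abstract.

```python
# Minimal sketch (illustrative assumptions, not the authors' code):
# solve  minimize f(w)  subject to  y_i * <x_i, w> >= 1  for all i,
# with f chosen as the l2, l1, or l_infinity norm.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d = 50, 200                      # n < d: data is linearly separable in general position
w_star = np.zeros(d)
w_star[:10] = 1.0                   # sparse ground-truth parameter (illustrative choice)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_star)             # labels from a logistic-type separable model

def gmm(norm_order):
    """Generalized Margin Maximizer with f(w) = ||w||_p, p in {1, 2, inf}."""
    w = cp.Variable(d)
    constraints = [cp.multiply(y, X @ w) >= 1]   # linear separation with unit margin
    prob = cp.Problem(cp.Minimize(cp.norm(w, norm_order)), constraints)
    prob.solve()
    return w.value

w_l2 = gmm(2)        # classical max-margin (hard-margin SVM) classifier
w_l1 = gmm(1)        # l1-GMM: encourages sparse parameter vectors
w_linf = gmm("inf")  # l_inf-GMM: suited to (near-)binary parameter entries

for name, w_hat in [("l2", w_l2), ("l1", w_l1), ("linf", w_linf)]:
    corr = w_hat @ w_star / (np.linalg.norm(w_hat) * np.linalg.norm(w_star))
    print(f"{name}-GMM: correlation with ground truth = {corr:.3f}")
```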
Related papers
- Global Convergence of Gradient EM for Over-Parameterized Gaussian Mixtures [53.51230405648361]
We study the dynamics of gradient EM and employ tensor decomposition to characterize the geometric landscape of the likelihood loss. This is the first global convergence and recovery result for EM or gradient EM beyond the special case of $m=2$.
arXiv Detail & Related papers (2025-06-06T23:32:38Z) - A distribution-free mixed-integer optimization approach to hierarchical modelling of clustered and longitudinal data [0.0]
We introduce an innovative algorithm that evaluates cluster effects for new data points, thereby increasing the robustness and precision of this model.
The inferential and predictive efficacy of this approach is further illustrated through its application in student scoring and protein expression.
arXiv Detail & Related papers (2023-02-06T23:34:51Z) - Robust Algorithms for GMM Estimation: A Finite Sample Viewpoint [30.839245814393724]
A generic method of solving moment conditions is the Generalized Method of Moments (GMM).
We develop a GMM estimator that tolerates a constant fraction of adversarially corrupted samples and achieves an $\ell_2$ recovery guarantee of $O(\sqrt{\epsilon})$.
Our algorithm and assumptions apply to instrumental variables linear and logistic regression.
arXiv Detail & Related papers (2021-10-06T21:06:22Z) - Learning Gaussian Mixtures with Generalised Linear Models: Precise
Asymptotics in High-dimensions [79.35722941720734]
Generalised linear models for multi-class classification problems are among the fundamental building blocks of modern machine learning tasks.
We prove exact asymptotics characterising the estimator obtained via empirical risk minimisation in high dimensions.
We discuss how our theory can be applied beyond the scope of synthetic data.
arXiv Detail & Related papers (2021-06-07T16:53:56Z) - Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z) - Generalized Matrix Factorization: efficient algorithms for fitting
generalized linear latent variable models to large data arrays [62.997667081978825]
Generalized Linear Latent Variable models (GLLVMs) generalize such factor models to non-Gaussian responses.
Current algorithms for estimating model parameters in GLLVMs require intensive computation and do not scale to large datasets.
We propose a new approach for fitting GLLVMs to high-dimensional datasets, based on approximating the model using penalized quasi-likelihood.
arXiv Detail & Related papers (2020-10-06T04:28:19Z) - Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal
Sample Complexity [67.02490430380415]
We show that model-based MARL achieves a sample complexity of $\tilde{O}(|S||B|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) value up to some $\epsilon$ error.
We also show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, where the algorithm queries state transition samples without reward knowledge.
arXiv Detail & Related papers (2020-07-15T03:25:24Z) - Consistent Structured Prediction with Max-Min Margin Markov Networks [84.60515484036239]
Max-margin methods for binary classification have been extended to the structured prediction setting under the name of max-margin Markov networks ($M^3N$).
We overcome such limitations by defining the learning problem in terms of a "max-min" margin formulation, naming the resulting method max-min margin Markov networks ($M^4N$).
Experiments on multi-class classification, ordinal regression, sequence prediction and ranking demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-07-02T10:48:42Z) - A Precise High-Dimensional Asymptotic Theory for Boosting and
Minimum-$\ell_1$-Norm Interpolated Classifiers [3.167685495996986]
This paper establishes a precise high-dimensional theory for boosting on separable data.
Under a class of statistical models, we provide an exact analysis of the generalization error of boosting.
We also explicitly pin down the relation between the boosting test error and the optimal Bayes error.
arXiv Detail & Related papers (2020-02-05T00:24:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.