A Unified Optimization Framework for Multiclass Classification with Structured Hyperplane Arrangements
- URL: http://arxiv.org/abs/2510.05047v1
- Date: Mon, 06 Oct 2025 17:26:56 GMT
- Title: A Unified Optimization Framework for Multiclass Classification with Structured Hyperplane Arrangements
- Authors: Víctor Blanco, Harshit Kothari, James Luedtke
- Abstract summary: We propose a new mathematical optimization model for multiclass classification based on arrangements of hyperplanes. Our approach preserves the core support vector machine (SVM) paradigm of maximizing class separation while minimizing misclassification errors. We present a kernel-based extension that allows it to construct nonlinear decision boundaries.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a new mathematical optimization model for multiclass classification based on arrangements of hyperplanes. Our approach preserves the core support vector machine (SVM) paradigm of maximizing class separation while minimizing misclassification errors, and it is computationally more efficient than a previous formulation. We present a kernel-based extension that allows it to construct nonlinear decision boundaries. Furthermore, we show how the framework can naturally incorporate alternative geometric structures, including classification trees, $\ell_p$-SVMs, and models with discrete feature selection. To address large-scale instances, we develop a dynamic clustering matheuristic that leverages the proposed MIP formulation. Extensive computational experiments demonstrate the efficiency of the proposed model and dynamic clustering heuristic, and we report competitive classification performance on both synthetic datasets and real-world benchmarks from the UCI Machine Learning Repository, comparing our method with state-of-the-art implementations available in scikit-learn.
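The geometric idea behind the abstract, classifying by an arrangement of hyperplanes, can be illustrated with a toy sketch: each point receives a sign vector relative to a set of hyperplanes, and each sign pattern (a cell of the arrangement) maps to a class label. This is a hypothetical illustration of the principle only, not the paper's MIP formulation; the hyperplanes and the pattern-to-class mapping below are invented for the example.

```python
# Toy illustration: classify points by the cell of a hyperplane
# arrangement they fall into. The hyperplanes and the sign-pattern ->
# class mapping are hypothetical, not taken from the paper.

def sign_pattern(x, hyperplanes):
    """Sign of w.x + b for each hyperplane (w, b)."""
    return tuple(
        1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
        for (w, b) in hyperplanes
    )

# Two hyperplanes in R^2: the x-axis and the y-axis.
hyperplanes = [((0.0, 1.0), 0.0), ((1.0, 0.0), 0.0)]

# Each of the four cells (here: quadrants) is assigned a class.
cell_to_class = {
    (1, 1): "A", (1, -1): "B", (-1, 1): "C", (-1, -1): "D",
}

def classify(x):
    return cell_to_class[sign_pattern(x, hyperplanes)]

print(classify((2.0, 3.0)))    # first quadrant -> "A"
print(classify((-1.0, -0.5)))  # third quadrant -> "D"
```

With K hyperplanes the arrangement has at most O(K^d) cells in dimension d, which is what gives the model more expressive power than a single separating hyperplane per class pair.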
Related papers
- Generalized Optimal Classification Trees: A Mixed-Integer Programming Approach [17.725629133949955]
Mixed-integer programming (MIP) offers a high degree of modeling flexibility. We propose a MIP-based framework for learning optimal classification trees under nonlinear performance metrics. We evaluate the proposed approach on 50 benchmark datasets.
arXiv Detail & Related papers (2026-02-02T14:46:01Z) - Deep Matrix Factorization with Adaptive Weights for Multi-View Clustering [0.6037276428689637]
We introduce a novel method, Deep Matrix Factorization with Adaptive Weights for Multi-View Clustering (DMFAW). Our method simultaneously incorporates feature selection and generates local partitions, enhancing clustering results. Experiments on benchmark datasets highlight that DMFAW outperforms state-of-the-art methods in terms of clustering performance.
arXiv Detail & Related papers (2024-12-03T09:08:27Z) - Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z) - Lp-Norm Constrained One-Class Classifier Combination [18.27510863075184]
We consider the one-class classification problem by modelling the sparsity/uniformity of the ensemble.
We present an effective approach to solve the formulated convex constrained problem efficiently.
arXiv Detail & Related papers (2023-12-25T16:32:34Z) - A Robust Twin Parametric Margin Support Vector Machine for Multiclass Classification [0.0]
We introduce novel Twin Parametric Margin Support Vector Machine (TPMSVM) models designed to address multiclass classification tasks under feature uncertainty. To handle data perturbations, we construct a bounded-by-norm uncertainty set around each training observation and derive the robust counterparts of the deterministic models. We validate the effectiveness of the proposed robust multiclass TPMSVM methodology on real-world datasets.
arXiv Detail & Related papers (2023-06-09T19:27:24Z) - Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture [68.13678918660872]
We design a more capable parameter-sharing architecture based on matrix product operator (MPO)
MPO decomposition can reorganize and factorize the information of a parameter matrix into two parts.
Our architecture shares the central tensor across all layers for reducing the model size.
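The parameter savings from sharing a central factor across layers can be made concrete with a back-of-the-envelope count. This is a simplification: the actual MPO decomposition is a tensor-train-style factorization, and the sizes below are invented for illustration.

```python
# Back-of-the-envelope parameter count for sharing a central factor
# across layers (a simplification of the MPO idea; sizes are made up).
d, r, L = 1024, 64, 24          # hidden size, bottleneck rank, layers

full = L * d * d                        # L independent d x d matrices
shared = r * r + L * (d * r + r * d)    # one shared r x r core plus
                                        # per-layer d x r and r x d factors

print(full, shared, round(full / shared, 1))
```

Under these made-up sizes the shared factorization uses roughly an eighth of the parameters of independent full matrices, which is the kind of reduction parameter-sharing architectures aim for.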
arXiv Detail & Related papers (2023-03-27T02:34:09Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z) - Late Fusion Multi-view Clustering via Global and Local Alignment Maximization [61.89218392703043]
Multi-view clustering (MVC) optimally integrates complementary information from different views to improve clustering performance.
Most existing approaches directly fuse multiple pre-specified similarities to learn an optimal similarity matrix for clustering.
We propose late fusion MVC via alignment to address these issues.
arXiv Detail & Related papers (2022-08-02T01:49:31Z) - MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE)
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness of the approach, with better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z) - Multiclass Optimal Classification Trees with SVM-splits [1.5039745292757671]
We present a novel mathematical optimization-based methodology to construct tree-shaped classification rules for multiclass instances.
Our approach consists of building Classification Trees in which, except for the leaf nodes, the labels are temporarily left out and grouped into two classes by means of an SVM separating hyperplane.
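The SVM-split idea described above can be sketched in miniature: at each internal node a fixed hyperplane routes a point left or right, and leaves carry class labels. The tree structure, split parameters, and labels here are invented for illustration; the cited paper learns them jointly via optimization rather than fixing them by hand.

```python
# Toy sketch of a classification tree with linear (hyperplane) splits.
# The tree structure, split parameters, and leaf labels are hypothetical.

class Leaf:
    def __init__(self, label):
        self.label = label

class Node:
    def __init__(self, w, b, left, right):
        self.w, self.b = w, b            # split hyperplane w.x + b
        self.left, self.right = left, right

def predict(tree, x):
    """Route x down the tree: go left when w.x + b < 0, else right."""
    while isinstance(tree, Node):
        score = sum(wi * xi for wi, xi in zip(tree.w, x)) + tree.b
        tree = tree.left if score < 0 else tree.right
    return tree.label

# Depth-2 tree over R^2 separating three made-up classes.
tree = Node((1.0, 0.0), 0.0,                  # split on x1 < 0
            Leaf("class-0"),
            Node((0.0, 1.0), -1.0,            # then on x2 < 1
                 Leaf("class-1"), Leaf("class-2")))

print(predict(tree, (-0.5, 2.0)))  # x1 < 0 -> "class-0"
print(predict(tree, (0.5, 0.2)))   # x1 >= 0, x2 < 1 -> "class-1"
```

Each internal split is binary even in the multiclass setting, because the SVM hyperplane temporarily groups the labels into two super-classes, and the leaves recover the individual classes.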
arXiv Detail & Related papers (2021-11-16T18:15:56Z) - Optimization-Inspired Learning with Architecture Augmentations and Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z)