Heaviside Low-Rank Support Matrix Machine
- URL: http://arxiv.org/abs/2603.00491v1
- Date: Sat, 28 Feb 2026 06:08:16 GMT
- Title: Heaviside Low-Rank Support Matrix Machine
- Authors: Xianchao Xiu, Shenghao Sun, Xinrong Li, Jiyuan Tao
- Abstract summary: We propose a novel Heaviside low-rank SMM model called HL-SMM.
In theory, we analyze the Karush-Kuhn-Tucker (KKT) points and rigorously prove the sufficient and necessary conditions.
- Score: 3.386541256893677
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Support matrix machine (SMM) is an emerging classification framework that directly handles matrix-structured observations, thereby avoiding the spatial correlations destroyed by vectorization. However, most existing SMM variants rely on convex or nonconvex surrogate loss functions, which may lead to high sensitivity to noise. To address this issue, we propose a novel Heaviside low-rank SMM model called HL-SMM, which leverages the Heaviside loss instead of the common hinge or ramp losses for robustness. Moreover, the low-rank constraint is adopted to accurately characterize the inherent global structure. In theory, we analyze the Karush-Kuhn-Tucker (KKT) points and rigorously prove the sufficient and necessary conditions. In algorithms, we develop an effective proximal alternating minimization (PAM) scheme, where all subproblems have closed-form solutions. Extensive experiments on benchmark datasets validate that the proposed HL-SMM achieves superior classification accuracy and robustness compared to state-of-the-art methods.
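The two ingredients of the abstract above, a Heaviside (step) loss and a low-rank constraint with a closed-form projection, can be sketched as follows. This is an illustrative sketch only, not the paper's exact formulation; the function names and the use of a plain Eckart-Young truncation are assumptions.

```python
import numpy as np

def heaviside_loss(margins):
    """Count the samples whose margin y_i * <W, X_i> is non-positive.
    Unlike hinge or ramp surrogates, the penalty is flat (equal to 1)
    for every violation regardless of its magnitude, which is what
    makes the loss insensitive to large-noise outliers."""
    return float(np.sum(margins <= 0.0))

def project_low_rank(W, rank):
    """Closed-form projection onto matrices of rank <= `rank`:
    keep only the top singular triplets (Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt
```

In a PAM-style scheme the block updates would alternate steps of this kind; the sketch only illustrates why both a step-loss evaluation and a rank constraint admit cheap closed-form computations.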
Related papers
- A Comparative Study of MAP and LMMSE Estimators for Blind Inverse Problems [0.17188280334580194]
We show how two synthetic MAP approaches can be used to reduce the inherent non-dimensionality problem.
We also show that the LMMSE estimator can serve as an alternative that can circumvent the limitations.
arXiv Detail & Related papers (2026-02-12T10:49:45Z) - Majorization-Minimization Networks for Inverse Problems: An Application to EEG Imaging [4.063392865490957]
Inverse problems are often ill-posed and require optimization schemes with strong stability and convergence guarantees.
We propose a learned Majorization-Minimization (MM) framework for inverse problems within a bilevel optimization setting.
We learn a structured curvature majorant that governs each MM step while preserving classical MM descent guarantees.
arXiv Detail & Related papers (2026-01-23T10:33:45Z) - The Hidden Cost of Approximation in Online Mirror Descent [56.99972253009168]
Online mirror descent (OMD) is a fundamental algorithmic paradigm that underlies many algorithms in optimization, machine learning and sequential decision-making.
In this work we initiate a systematic study into inexact OMD, and uncover an intricate relation between regularizer smoothness and robustness to approximation errors.
arXiv Detail & Related papers (2025-11-27T10:09:07Z) - Efficient Approximation of Volterra Series for High-Dimensional Systems [0.0]
We introduce the Head Averaging (THA) algorithm, which significantly reduces complexity by constructing localized MVMALS models trained on small subsets of the input space.
THA offers a scalable and theoretically grounded approach for identifying previously intractable high-dimensional systems.
arXiv Detail & Related papers (2025-11-09T20:31:39Z) - NDCG-Consistent Softmax Approximation with Accelerated Convergence [67.10365329542365]
We propose novel loss formulations that align directly with ranking metrics.
We integrate the proposed RG losses with the highly efficient Alternating Least Squares (ALS) optimization method.
Empirical evaluations on real-world datasets demonstrate that our approach achieves comparable or superior ranking performance.
arXiv Detail & Related papers (2025-06-11T06:59:17Z) - Tailed Low-Rank Matrix Factorization for Similarity Matrix Completion [14.542166904874147]
The similarity matrix serves as a fundamental tool at the core of numerous machine learning tasks.
To address this issue, Similarity Matrix Completion (SMC) methods have been proposed, but they suffer from high computational complexity.
We present two novel, scalable, and effective algorithms, which exploit the PSD property to guide the estimation process and incorporate a nonconvex low-rank regularizer to ensure the low-rank solution.
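A standard way to exploit the PSD property in similarity matrix estimation is to project intermediate iterates onto the PSD cone, which has a closed form: clip the negative eigenvalues to zero. A minimal sketch, not the paper's exact algorithm; the function name is assumed:

```python
import numpy as np

def project_psd(S):
    """Project a (nearly) symmetric matrix onto the PSD cone:
    symmetrize, eigendecompose, clip negative eigenvalues to zero."""
    S = (S + S.T) / 2.0
    w, V = np.linalg.eigh(S)
    return (V * np.clip(w, 0.0, None)) @ V.T
```

Each projection costs one eigendecomposition, which is where the complexity concern for large similarity matrices comes from.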
arXiv Detail & Related papers (2024-09-29T04:27:23Z) - Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may for example characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly-minimal entry-wise error.
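The simple spectral approach referred to above amounts, in its basic form, to truncating the SVD of the observed matrix to the target rank. A minimal sketch under that assumption, not the paper's exact estimator:

```python
import numpy as np

def spectral_estimate(M_obs, rank):
    """Rank-r spectral estimator: truncate the SVD of the observed
    (noisy) matrix, keeping only the leading singular subspaces."""
    U, s, Vt = np.linalg.svd(M_obs, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

# Example: denoise a rank-1 reward matrix observed with small noise.
rng = np.random.default_rng(0)
u, v = rng.standard_normal((30, 1)), rng.standard_normal((20, 1))
M_true = u @ v.T
M_obs = M_true + 0.01 * rng.standard_normal((30, 20))
M_hat = spectral_estimate(M_obs, rank=1)
```

Because the noise is spread over all singular directions while the signal concentrates in the top ones, the truncated estimate is closer to the true matrix than the raw observation.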
arXiv Detail & Related papers (2023-10-10T17:06:41Z) - Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization [4.7464518249313805]
Sub-gradient method (SubGM) is used to recover a low-rank matrix from a limited number of measurements.
We show that SubGM converges to the true solution, even under arbitrarily large and arbitrarily dense noise values.
arXiv Detail & Related papers (2022-02-17T17:50:04Z) - Exact Decomposition of Joint Low Rankness and Local Smoothness Plus Sparse Matrices [39.47324019377441]
We propose a new RPCA model based on three-dimensional correlated total variation regularization (3DCTV-RPCA for short).
We prove that under some mild assumptions, the proposed 3DCTV-RPCA model can decompose both components exactly.
arXiv Detail & Related papers (2022-01-29T13:58:03Z) - Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
By "weakly" we mean that for some training points the labels are known, for some they are unknown, and for others they are uncertain due to random noise or other causes such as a lack of resources.
In the numerical section, we apply the suggested method to artificial and real datasets using Monte-Carlo simulation.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Robust Compressed Sensing using Generative Models [98.64228459705859]
In this paper, we propose an algorithm inspired by the Median-of-Means (MOM) approach.
Our algorithm guarantees recovery for heavy-tailed data, even in the presence of outliers.
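The Median-of-Means idea behind this robustness guarantee is simple to state: split the sample into equal blocks, average each block, and return the median of the block means, so a few outliers can corrupt at most a few blocks. A minimal sketch of the estimator itself (the function name is assumed):

```python
import numpy as np

def median_of_means(x, n_blocks):
    """Split x into n_blocks roughly equal blocks, average each,
    and return the median of the block means."""
    blocks = np.array_split(np.asarray(x, dtype=float), n_blocks)
    return float(np.median([b.mean() for b in blocks]))
```

With 99 clean samples equal to 1 and one huge outlier, the plain mean is ruined while MoM with 10 blocks still returns 1.0, because the outlier corrupts only a single block mean.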
arXiv Detail & Related papers (2020-06-16T19:07:41Z) - Modal Regression based Structured Low-rank Matrix Recovery for Multi-view Learning [70.57193072829288]
Low-rank Multi-view Subspace Learning (LMvSL) has shown great potential in cross-view classification in recent years.
Existing LMvSL-based methods cannot handle view discrepancy and discriminancy well simultaneously.
We propose Structured Low-rank Matrix Recovery (SLMR), a unique method of effectively removing view discrepancy and improving discriminancy.
arXiv Detail & Related papers (2020-03-22T03:57:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.