Generalized Nonnegative Structured Kruskal Tensor Regression
- URL: http://arxiv.org/abs/2509.19900v1
- Date: Wed, 24 Sep 2025 08:51:38 GMT
- Title: Generalized Nonnegative Structured Kruskal Tensor Regression
- Authors: Xinjue Wang, Esa Ollila, Sergiy A. Vorobyov, Ammar Mian
- Abstract summary: Generalized Nonnegative Structured Kruskal Tensor Regression (NS-KTR) is a novel tensor regression framework. It enhances interpretability and performance through mode-specific hybrid regularization and nonnegativity constraints.
- Score: 22.300007523556022
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces Generalized Nonnegative Structured Kruskal Tensor Regression (NS-KTR), a novel tensor regression framework that enhances interpretability and performance through mode-specific hybrid regularization and nonnegativity constraints. Our approach accommodates both linear and logistic regression formulations for diverse response variables while addressing the structural heterogeneity inherent in multidimensional tensor data. We integrate fused LASSO, total variation, and ridge regularizers, each tailored to specific tensor modes, and develop an efficient alternating direction method of multipliers (ADMM) based algorithm for parameter estimation. Comprehensive experiments on synthetic signals and real hyperspectral datasets demonstrate that NS-KTR consistently outperforms conventional tensor regression methods. The framework's ability to preserve distinct structural characteristics across tensor dimensions while ensuring physical interpretability makes it especially suitable for applications in signal processing and hyperspectral image analysis.
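To make the setup concrete, here is a minimal illustrative sketch, not the paper's implementation: it fits a rank-R nonnegative Kruskal (CP) coefficient for matrix-valued covariates by alternating nonnegative ridge least squares, whereas NS-KTR additionally uses mode-specific fused-LASSO and total-variation penalties, an ADMM solver, and a logistic variant. All names and parameter choices below are illustrative assumptions.

```python
# Sketch (not the paper's algorithm): rank-R nonnegative Kruskal tensor
# regression on matrix-valued covariates X_i, fit by alternating nonnegative
# ridge least squares. NS-KTR additionally uses fused-LASSO / total-variation
# regularizers per mode and an ADMM solver; only a ridge penalty is kept here.
import numpy as np
from scipy.optimize import nnls

def nn_ridge_ls(Z, y, lam):
    """Solve min_c ||y - Z c||^2 + lam ||c||^2  subject to c >= 0."""
    p = Z.shape[1]
    Z_aug = np.vstack([Z, np.sqrt(lam) * np.eye(p)])
    y_aug = np.concatenate([y, np.zeros(p)])
    c, _ = nnls(Z_aug, y_aug)
    return c

def fit_kruskal_regression(X, y, rank=2, lam=(1e-2, 1e-2), n_iter=50, seed=0):
    """X: (n, p1, p2) covariates, y: (n,) responses.
    Coefficient tensor B = B1 @ B2.T with B1 >= 0, B2 >= 0 (Kruskal/CP form)."""
    n, p1, p2 = X.shape
    rng = np.random.default_rng(seed)
    B1 = rng.random((p1, rank))
    B2 = rng.random((p2, rank))
    for _ in range(n_iter):
        # Mode-1 update: <B1 B2^T, X_i> = <B1, X_i B2>, linear in vec(B1)
        Z1 = np.stack([(Xi @ B2).ravel() for Xi in X])    # (n, p1*rank)
        B1 = nn_ridge_ls(Z1, y, lam[0]).reshape(p1, rank)
        # Mode-2 update: <B1 B2^T, X_i> = <B2, X_i^T B1>, linear in vec(B2)
        Z2 = np.stack([(Xi.T @ B1).ravel() for Xi in X])  # (n, p2*rank)
        B2 = nn_ridge_ls(Z2, y, lam[1]).reshape(p2, rank)
    return B1 @ B2.T  # estimated coefficient matrix

# Toy usage: recover a nonnegative rank-1 coefficient matrix.
rng = np.random.default_rng(1)
B_true = np.outer(rng.random(6), rng.random(5))
X = rng.standard_normal((200, 6, 5))
y = np.einsum('ijk,jk->i', X, B_true) + 0.01 * rng.standard_normal(200)
B_hat = fit_kruskal_regression(X, y, rank=1)
print(np.linalg.norm(B_hat - B_true) / np.linalg.norm(B_true))
```

The alternating structure is what makes mode-specific regularization natural: each factor update is an ordinary penalized least-squares subproblem, so the ridge term above could in principle be swapped for a mode-appropriate fused-LASSO or total-variation penalty as the paper describes.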
Related papers
- Neural Optimal Transport Meets Multivariate Conformal Prediction [58.43397908730771]
We propose a framework for conditional vector quantile regression (CVQR). CVQR combines neural optimal transport with quantized optimization and applies it to multivariate conformal prediction.
arXiv Detail & Related papers (2025-09-29T19:50:19Z) - Kernel Regression of Multi-Way Data via Tensor Trains with Hadamard Overparametrization: The Dynamic Graph Flow Case [9.941965164307843]
Kernel Regression via Tensor Trains with Hadamard overparametrization (KReTTaH) is a regression-based framework for interpretable multi-way data imputation. KReTTaH consistently outperforms state-of-the-art alternatives.
arXiv Detail & Related papers (2025-09-26T11:00:05Z) - Semi-parametric Functional Classification via Path Signatures Logistic Regression [1.210026603224224]
We propose Path Signatures Logistic Regression, a semi-parametric framework for classifying vector-valued functional data. Our results highlight the practical and theoretical benefits of integrating rough path theory into modern functional data analysis.
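A minimal sketch of the general idea, assuming only the standard closed form of a depth-2 path signature for piecewise-linear paths (not the paper's semi-parametric estimator): truncated signatures are computed per path and fed to an off-the-shelf logistic regression.

```python
# Depth-2 path signature features + logistic regression (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

def signature_level2(path):
    """path: (T, d) sampled points; returns depth-2 signature features."""
    inc = np.diff(path, axis=0)                  # (T-1, d) increments
    s1 = inc.sum(axis=0)                         # level 1: total increment
    cum = np.cumsum(inc, axis=0)
    prev = np.vstack([np.zeros(inc.shape[1]), cum[:-1]])   # earlier increments
    s2 = prev.T @ inc + 0.5 * inc.T @ inc        # level 2: iterated integrals (d, d)
    return np.concatenate([s1, s2.ravel()])

# Toy usage: clockwise vs counter-clockwise circles, separated by the
# antisymmetric (signed-area) part of the level-2 signature.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 50)
paths, labels = [], []
for _ in range(200):
    sign = rng.choice([-1, 1])
    noise = 0.05 * rng.standard_normal((50, 2))
    paths.append(np.c_[np.cos(t), sign * np.sin(t)] + noise)
    labels.append(sign > 0)
features = np.array([signature_level2(p) for p in paths])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.score(features, labels))
```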
arXiv Detail & Related papers (2025-07-09T08:06:50Z) - Low-Rank Implicit Neural Representation via Schatten-p Quasi-Norm and Jacobian Regularization [49.158601255093416]
We propose a CP-based low-rank tensor function parameterized by neural networks for implicit neural representation. For smoothness, we propose a regularization term based on the spectral norm of the Jacobian and Hutchinson's trace estimator. Our proposed smoothness regularization is SVD-free and avoids explicit chain rule derivations.
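As a hedged illustration (not the cited paper's code), the snippet below shows the matrix-free ingredient involved: a Hutchinson-style randomized estimate of a Jacobian norm, here the squared Frobenius norm tr(J^T J) = E_v ||J v||^2 with Rademacher probes v, built from Jacobian-vector products so that no Jacobian or SVD is ever formed. The paper applies the idea to a spectral-norm penalty inside an implicit neural representation; this toy only demonstrates the estimator.

```python
# Hutchinson-style, matrix-free estimate of ||J_f(x)||_F^2 = tr(J^T J).
import numpy as np

def hutchinson_jacobian_frob2(f, x, num_probes=64, eps=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=x.shape)            # Rademacher probe
        jvp = (f(x + eps * v) - f(x - eps * v)) / (2 * eps)   # J v via central difference
        total += np.dot(jvp, jvp)
    return total / num_probes

# Check against the exact value for a linear map f(x) = A x, where J = A.
A = np.random.default_rng(1).standard_normal((5, 8))
f = lambda x: A @ x
print(hutchinson_jacobian_frob2(f, np.zeros(8), num_probes=2000),
      np.linalg.norm(A, 'fro') ** 2)
```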
arXiv Detail & Related papers (2025-06-27T11:23:10Z) - Identifiable Convex-Concave Regression via Sub-gradient Regularised Least Squares [1.9580473532948397]
We propose a novel nonparametric regression method that models complex input-output relationships as the sum of convex and concave components. The method, ICCNLS, uses a shape-constrained additive decomposition fitted via sub-gradient regularised least squares.
arXiv Detail & Related papers (2025-06-22T15:53:12Z) - A Simplified Analysis of SGD for Linear Regression with Weight Averaging [64.2393952273612]
Recent work by Zou et al. (2021) provides sharp rates for SGD optimization in linear regression using a constant learning rate. We provide a simplified analysis recovering the same bias and variance bounds provided in Zou et al. (2021), based on simple linear algebra tools. We believe our work makes the analysis of gradient descent on linear regression very accessible and will be helpful in further analyzing mini-batching and learning rate scheduling.
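For concreteness, here is a minimal sketch (under my own assumptions, not code from the cited paper) of the analysed setting: constant-stepsize SGD on a least-squares objective with tail averaging of the iterates, i.e. the weight-averaging scheme whose bias and variance the simplified analysis addresses.

```python
# Constant-stepsize SGD for linear regression with tail (weight) averaging.
import numpy as np

def sgd_linreg_tail_average(X, y, step=0.05, n_epochs=5, tail_frac=0.5, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    iterates = []
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            # One-sample stochastic gradient of 0.5 * (x_i^T w - y_i)^2
            g = (X[i] @ w - y[i]) * X[i]
            w = w - step * g
            iterates.append(w.copy())
    # Average the last `tail_frac` fraction of iterates
    tail = iterates[int(len(iterates) * (1 - tail_frac)):]
    return np.mean(tail, axis=0)

# Toy usage
rng = np.random.default_rng(2)
w_true = rng.standard_normal(10)
X = rng.standard_normal((500, 10))
y = X @ w_true + 0.1 * rng.standard_normal(500)
print(np.linalg.norm(sgd_linreg_tail_average(X, y) - w_true))
```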
arXiv Detail & Related papers (2025-06-18T15:10:38Z) - Asymptotics of Linear Regression with Linearly Dependent Data [28.005935031887038]
We study the asymptotics of linear regression in settings with non-Gaussian covariates. We show how dependencies influence estimation error and the choice of regularization parameters.
arXiv Detail & Related papers (2024-12-04T20:31:47Z) - An In-depth Investigation of Sparse Rate Reduction in Transformer-like Models [32.04194224236952]
We propose an information-theoretic objective function called Sparse Rate Reduction (SRR).
We show that SRR has a positive correlation coefficient and outperforms other baseline measures, such as path-norm and sharpness-based ones.
We show that generalization can be improved using SRR as regularization on benchmark image classification datasets.
arXiv Detail & Related papers (2024-11-26T07:44:57Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
We present a unifying perspective on recent results on ridge regression. We use the basic tools of random matrix theory and free probability, aimed at readers with backgrounds in physics and deep learning. Our results extend and provide a unifying perspective on earlier models of scaling laws.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Tensor-on-Tensor Regression: Riemannian Optimization, Over-parameterization, Statistical-computational Gap, and Their Interplay [9.427635404752936]
We study the tensor-on-tensor regression, where the goal is to connect tensor responses to tensor covariates with a low Tucker rank parameter tensor/matrix.
We propose two methods to cope with the challenge of unknown rank.
We provide the first convergence guarantee for the general tensor-on-tensor regression.
arXiv Detail & Related papers (2022-06-17T13:15:27Z) - Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central in preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z) - CASTLE: Regularization via Auxiliary Causal Graph Discovery [89.74800176981842]
We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables.
CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features.
arXiv Detail & Related papers (2020-09-28T09:49:38Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the stochasticity in its success is still unclear.
We show that heavy-tailed behaviour commonly arises in the parameters as a consequence of multiplicative noise.
A detailed analysis is conducted describing how key factors, including step size and data, shape this behaviour, with similar results observed across state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.