On the Direct Construction of MDS and Near-MDS Matrices
- URL: http://arxiv.org/abs/2306.12848v3
- Date: Mon, 8 Apr 2024 09:09:25 GMT
- Title: On the Direct Construction of MDS and Near-MDS Matrices
- Authors: Kishan Chand Gupta, Sumit Kumar Pandey, Susanta Samanta
- Abstract summary: Near-MDS (NMDS) matrices with sub-optimal branch numbers offer a better balance between security and efficiency as a diffusion layer.
This paper introduces some direct constructions of NMDS matrices in both nonrecursive and recursive settings.
We propose a method for constructing involutory MDS and NMDS matrices using generalized Vandermonde matrices.
- Score: 6.356706243374352
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The optimal branch number of MDS matrices makes them a preferred choice for designing diffusion layers in many block ciphers and hash functions. Consequently, various methods have been proposed for designing MDS matrices, including search and direct methods. While exhaustive search is suitable for small-order MDS matrices, direct constructions are preferred for larger orders due to the vast search space involved. In the literature, there has been extensive research on the direct construction of MDS matrices using both recursive and nonrecursive methods. On the other hand, in lightweight cryptography, Near-MDS (NMDS) matrices with sub-optimal branch numbers offer a better balance between security and efficiency as a diffusion layer compared to MDS matrices. However, no direct construction method is available in the literature for constructing recursive NMDS matrices. This paper introduces some direct constructions of NMDS matrices in both nonrecursive and recursive settings. Additionally, it presents some direct constructions of nonrecursive MDS matrices from generalized Vandermonde matrices. We propose a method for constructing involutory MDS and NMDS matrices using generalized Vandermonde matrices. Furthermore, we prove some folklore results that are used in the literature related to NMDS codes.
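A concrete way to read the "optimal branch number" claim: an n x n matrix is MDS, and hence attains the maximal branch number n + 1, exactly when every square submatrix is nonsingular. The Python sketch below checks that criterion for a 4 x 4 Cauchy matrix over the prime field GF(13). It is an illustrative sketch only: the small prime field, the Cauchy construction, and the brute-force submatrix check are simplifications for exposition, whereas the paper works over GF(2^r) and builds matrices from generalized Vandermonde matrices.

```python
# Illustrative sketch only (not the paper's construction): verify the MDS
# criterion "every square submatrix is nonsingular" for a small matrix.
# For simplicity we work over the prime field GF(13) and use a Cauchy
# matrix, a classical direct MDS construction.
from itertools import combinations

P = 13  # small prime field, for illustration only

def det_mod_p(mat, p=P):
    """Determinant over GF(p) via Gaussian elimination."""
    m = [row[:] for row in mat]
    n = len(m)
    det = 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] % p != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = -det
        det = (det * m[col][col]) % p
        inv = pow(m[col][col], p - 2, p)      # Fermat inverse of the pivot
        for r in range(col + 1, n):
            f = (m[r][col] * inv) % p
            for c in range(col, n):
                m[r][c] = (m[r][c] - f * m[col][c]) % p
    return det % p

def is_mds(mat, p=P):
    """MDS <=> all square submatrices, of every size, are nonsingular."""
    n = len(mat)
    return all(
        det_mod_p([[mat[r][c] for c in cols] for r in rows], p) != 0
        for k in range(1, n + 1)
        for rows in combinations(range(n), k)
        for cols in combinations(range(n), k)
    )

# Cauchy matrix A[i][j] = 1/(x_i + y_j): MDS whenever the x_i and y_j are
# distinct and every sum x_i + y_j is nonzero in GF(p).
xs, ys = [0, 1, 2, 3], [4, 5, 6, 7]
A = [[pow((x + y) % P, P - 2, P) for y in ys] for x in xs]
print(is_mds(A))  # True; a 4x4 MDS matrix has branch number 5
```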
Related papers
- On the construction of ultra-light MDS matrices [4.739812980667592]
We present an algorithm that efficiently enumerates all the lightest MDS matrices based on the word representation.
We craft some involutory $4\times 4$ MDS matrices with a mere 68 XOR gates.
These findings outperform the current state-of-the-art.
arXiv Detail & Related papers (2024-09-05T07:07:41Z)
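As a rough illustration of the hardware cost metric behind figures like "68 XOR gates" in the entry above, the sketch below computes the naive XOR count of a diffusion matrix given in binary form: output bit i is the XOR of the input bits selected by row i, so a row with w ones costs w - 1 gates. The example matrix is an arbitrary assumption, and the paper's enumeration and word-representation optimizations go well beyond this naive count.

```python
# Naive XOR-count sketch (illustration only, arbitrary example matrix).
# A diffusion matrix given as a binary matrix M over GF(2) maps a bit-vector
# x to Mx; output bit i is the XOR of the inputs selected by row i, so a row
# with w ones costs max(w - 1, 0) XOR gates.
def naive_xor_count(binary_matrix):
    return sum(max(sum(row) - 1, 0) for row in binary_matrix)

# Hypothetical 4x4 binary example (not one of the paper's matrices).
M = [
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
]
print(naive_xor_count(M))  # 4 XOR gates for this toy example
```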
- A Systematic Construction Approach for All $4\times 4$ Involutory MDS Matrices [1.3332839594069594]
We present several characterizations of involutory MDS matrices of even order.
We propose a technique to systematically construct all $4 \times 4$ involutory MDS matrices over a finite field.
arXiv Detail & Related papers (2024-04-12T05:37:42Z)
- Support matrix machine: A review [0.0]
Support matrix machine (SMM) represents one of the emerging methodologies tailored for handling matrix input data.
This article provides the first in-depth analysis of the development of the SMM model.
We discuss numerous SMM variants, such as robust, sparse, class imbalance, and multi-class classification models.
arXiv Detail & Related papers (2023-10-30T16:46:23Z)
- Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may for example characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly-minimal entry-wise error.
arXiv Detail & Related papers (2023-10-10T17:06:41Z)
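The "simple spectral-based matrix estimation" idea in the entry above can be illustrated generically: observe a noisy version of a low-rank matrix, truncate its SVD at the target rank, and measure the worst entry-wise error. The numpy sketch below is only such a generic illustration, with made-up dimensions and noise level; it is not the estimator or the guarantees from the paper.

```python
# Generic truncated-SVD sketch (not the paper's estimator or guarantees):
# recover a low-rank matrix from noisy observations and report the largest
# entry-wise error.
import numpy as np

rng = np.random.default_rng(0)
n, rank = 200, 3
U = rng.normal(size=(n, rank))
V = rng.normal(size=(n, rank))
M = U @ V.T                                  # ground-truth low-rank matrix
noisy = M + 0.1 * rng.normal(size=(n, n))    # entry-wise noisy observation

# Rank-truncated SVD of the observation.
u, s, vt = np.linalg.svd(noisy, full_matrices=False)
M_hat = (u[:, :rank] * s[:rank]) @ vt[:rank]

print(np.max(np.abs(M_hat - M)))  # entry-wise (sup-norm) estimation error
```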
- On the Counting of Involutory MDS Matrices [0.0]
This paper enumerates Hadamard MDS and involutory Hadamard MDS matrices of order $4$ within the field $\mathbb{F}_{2^r}$.
It also derives the count of Hadamard Near-MDS (NMDS) and involutory Hadamard NMDS matrices, each with exactly one zero in each row, of order $4$ over $\mathbb{F}_{2^r}$.
arXiv Detail & Related papers (2023-09-29T18:57:00Z)
- Large-scale gradient-based training of Mixtures of Factor Analyzers [67.21722742907981]
This article contributes both a theoretical analysis as well as a new method for efficient high-dimensional training by gradient descent.
We prove that MFA training and inference/sampling can be performed based on precision matrices, which does not require matrix inversions after training is completed.
Besides the theoretical analysis, we apply MFA to typical image datasets such as SVHN and MNIST, and demonstrate the ability to perform sample generation and outlier detection.
arXiv Detail & Related papers (2023-08-26T06:12:33Z)
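To make the "no matrix inversions" remark in the MFA entry above concrete: once a precision matrix Lambda = Sigma^{-1} is available, a Gaussian log-density can be evaluated from Lambda directly, using a Cholesky factorization for the log-determinant, with no inversion at inference time. The sketch below is a generic illustration of that observation with assumed toy parameters, not the paper's MFA training procedure.

```python
# Generic sketch (not the paper's MFA procedure): evaluate a multivariate
# Gaussian log-density directly from a precision matrix Lambda = Sigma^{-1},
# so no covariance inversion is needed at inference time.
import numpy as np

def gaussian_logpdf_from_precision(x, mu, Lam):
    d = x.shape[0]
    diff = x - mu
    quad = diff @ Lam @ diff                      # (x-mu)^T Lambda (x-mu)
    # log|Sigma| = -log|Lambda|, via the Cholesky factor of Lambda.
    logdet_lam = 2.0 * np.sum(np.log(np.diag(np.linalg.cholesky(Lam))))
    return 0.5 * (logdet_lam - d * np.log(2.0 * np.pi) - quad)

# Toy check with an assumed SPD covariance.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
Sigma = A @ A.T + 5 * np.eye(5)                   # SPD covariance
Lam = np.linalg.inv(Sigma)                        # computed once, up front
x, mu = rng.normal(size=5), np.zeros(5)
print(gaussian_logpdf_from_precision(x, mu, Lam))
```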
- Sufficient dimension reduction for feature matrices [3.04585143845864]
We propose a method called principal support matrix machine (PSMM) for matrix sufficient dimension reduction.
Our numerical analysis demonstrates that the PSMM outperforms existing methods and has strong interpretability in real data applications.
arXiv Detail & Related papers (2023-03-07T23:16:46Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
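The coupling described in the entry above, a diagonal estimation result combined with conjugate gradients, can be illustrated with the classical probing estimator diag(B) ≈ (Σ_k z_k ⊙ B z_k) / (Σ_k z_k ⊙ z_k) for random ±1 probes z_k, where each product B z_k = A^{-1} z_k is obtained by a CG solve instead of forming A^{-1}. The sketch below is a generic numerical-linear-algebra illustration with an assumed SPD test matrix, not the paper's sparse Bayesian learning scheme.

```python
# Generic sketch (not the paper's SBL scheme): estimate diag(A^{-1}) without
# forming A^{-1}, by combining a probing diagonal estimator with conjugate
# gradient solves.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 100
Q = rng.normal(size=(n, n))
A = Q @ Q.T + n * np.eye(n)          # SPD matrix standing in for a covariance

num, den = np.zeros(n), np.zeros(n)
for _ in range(200):                  # number of random +-1 probes
    z = rng.choice([-1.0, 1.0], size=n)
    x, _ = cg(A, z)                   # x ~= A^{-1} z, no explicit inverse
    num += z * x
    den += z * z
diag_est = num / den

# Dense reference check of the estimate (only feasible for this toy size).
print(np.max(np.abs(diag_est - np.diag(np.linalg.inv(A)))))
```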
- Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD matrices and their "square-roots".
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
- Positive Semidefinite Matrix Factorization: A Connection with Phase Retrieval and Affine Rank Minimization [71.57324258813674]
We show that PSDMF algorithms can be designed based on phase retrieval (PR) and affine rank minimization (ARM) algorithms.
Motivated by this idea, we introduce a new family of PSDMF algorithms based on iterative hard thresholding (IHT).
arXiv Detail & Related papers (2020-07-24T06:10:19Z)
- Sparsifying Parity-Check Matrices [60.28601275219819]
We consider the problem of minimizing the number of one-entries in parity-check matrices.
In the maximum-likelihood (ML) decoding method, the number of ones in PCMs is directly related to the time required to decode messages.
We propose a simple matrix row manipulation which alters the PCM, but not the code itself.
arXiv Detail & Related papers (2020-05-08T05:51:40Z)
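The row manipulation mentioned in the last entry can be made concrete with a toy GF(2) example: replacing one row of a parity-check matrix H by its sum with another row is an invertible row operation, so the code {x : Hx = 0 over GF(2)} is unchanged while the number of one-entries may drop. The matrix and the particular operation below are illustrative assumptions, not the paper's algorithm.

```python
# Toy illustration (not the paper's algorithm): over GF(2), replacing row 0 of
# a parity-check matrix H by (row 0 + row 1) is an invertible row operation,
# so the null space {x : Hx = 0 over GF(2)}, i.e. the code, is unchanged,
# while the number of one-entries may decrease.
import numpy as np
from itertools import product

def codewords(H):
    n = H.shape[1]
    return {x for x in product([0, 1], repeat=n)
            if not (H @ np.array(x) % 2).any()}

H = np.array([[1, 1, 1, 1, 1, 0],
              [1, 1, 1, 0, 0, 1]])        # hypothetical PCM with 9 ones

H2 = H.copy()
H2[0] = (H2[0] + H2[1]) % 2               # row_0 <- row_0 + row_1 over GF(2)

print(int(H.sum()), int(H2.sum()))        # 9 -> 7 one-entries
print(codewords(H) == codewords(H2))      # True: the code is preserved
```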
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.