On the Counting of Involutory MDS Matrices
- URL: http://arxiv.org/abs/2310.00090v2
- Date: Tue, 2 Jul 2024 16:26:39 GMT
- Title: On the Counting of Involutory MDS Matrices
- Authors: Susanta Samanta
- Abstract summary: This paper enumerates Hadamard MDS and involutory Hadamard MDS matrices of order $4$ within the field $\mathbb{F}_{2^r}$.
An upper bound is derived for the number of all involutory MDS matrices of order $4$ over $\mathbb{F}_{2^r}$.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The optimal branch number of MDS matrices has established their importance in designing diffusion layers for various block ciphers and hash functions. As a result, numerous matrix structures, including Hadamard and circulant matrices, have been proposed for constructing MDS matrices. Also, in the literature, significant attention is typically given to identifying MDS candidates with optimal implementations or proposing new constructions across different orders. However, this paper takes a different approach by not emphasizing efficiency issues or introducing novel constructions. Instead, its primary objective is to enumerate Hadamard MDS and involutory Hadamard MDS matrices of order $4$ within the field $\mathbb{F}_{2^r}$. Specifically, it provides an explicit formula for the count of both Hadamard MDS and involutory Hadamard MDS matrices of order $4$ over $\mathbb{F}_{2^r}$. Additionally, the paper presents the counts of order $2$ MDS matrices and order $2$ involutory MDS matrices over $\mathbb{F}_{2^r}$. Finally, leveraging these counts of order $2$ matrices, an upper bound is derived for the number of all involutory MDS matrices of order $4$ over $\mathbb{F}_{2^r}$.
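The two properties the paper counts can be checked directly. A minimal sketch, assuming $\mathbb{F}_{2^4}$ with the irreducible polynomial $x^4 + x + 1$ (the field choice, matrix, and function names are illustrative, not taken from the paper): a matrix over $\mathbb{F}_{2^r}$ is MDS iff every square submatrix is nonsingular, and involutory iff it squares to the identity. The order-$2$ Hadamard form $[[a_0, a_1], [a_1, a_0]]$ is used as the example.

```python
from itertools import combinations

R = 4
MOD = 0b10011  # x^4 + x + 1, irreducible over GF(2); an illustrative choice

def gf_mul(a, b):
    """Multiply two elements of GF(2^R): carry-less product reduced mod MOD."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a >> R:       # degree overflowed: reduce by the modulus
            a ^= MOD
        b >>= 1
    return p

def det(m):
    """Determinant over GF(2^R) by cofactor expansion (char 2: no signs)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    d = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        d ^= gf_mul(m[0][j], det(minor))
    return d

def is_mds(m):
    """MDS iff every square submatrix (every minor) is nonsingular."""
    n = len(m)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                sub = [[m[i][j] for j in cols] for i in rows]
                if det(sub) == 0:
                    return False
    return True

def is_involutory(m):
    """Involutory iff M * M = I over GF(2^R)."""
    n = len(m)
    for i in range(n):
        for j in range(n):
            s = 0
            for k in range(n):
                s ^= gf_mul(m[i][k], m[k][j])
            if s != (1 if i == j else 0):
                return False
    return True

# Order-2 Hadamard matrix H = [[a0, a1], [a1, a0]]: in char 2 it is
# involutory iff a0 + a1 = 1, and MDS iff both entries are nonzero and distinct.
H2 = [[0x2, 0x3], [0x3, 0x2]]
print(is_involutory(H2), is_mds(H2))  # → True True
```

For order $4$ the same `is_mds` check runs over all $69$ square submatrices; the paper's contribution is counting, in closed form, how many Hadamard matrices pass it.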
Related papers
- Understanding Matrix Function Normalizations in Covariance Pooling through the Lens of Riemannian Geometry [63.694184882697435]
Global Covariance Pooling (GCP) has been demonstrated to improve the performance of Deep Neural Networks (DNNs) by exploiting second-order statistics of high-level representations.
arXiv Detail & Related papers (2024-07-15T07:11:44Z) - On MDS Property of g-Circulant Matrices [3.069335774032178]
We first discuss $g$-circulant matrices with involutory and MDS properties.
We then delve into $g$-circulant semi-involutory and semi-orthogonal matrices with entries from finite fields.
arXiv Detail & Related papers (2024-06-22T15:18:31Z) - A Characterization of Semi-Involutory MDS Matrices [3.069335774032178]
In symmetric cryptography, maximum distance separable (MDS) matrices with computationally simple inverses have wide applications.
Many block ciphers like AES, SQUARE, SHARK, and hash functions like PHOTON use an MDS matrix in the diffusion layer.
arXiv Detail & Related papers (2024-06-18T17:57:46Z) - A Systematic Construction Approach for All $4\times 4$ Involutory MDS Matrices [1.3332839594069594]
We present several characterizations of involutory MDS matrices of even order.
We propose a technique to systematically construct all $4 \times 4$ involutory MDS matrices over a finite field.
arXiv Detail & Related papers (2024-04-12T05:37:42Z) - Construction of all MDS and involutory MDS matrices [7.171901763517741]
We propose two algorithms for a hybrid construction of all $n \times n$ MDS and involutory MDS matrices over a finite field $\mathbb{F}_{p^m}$.
The proposed algorithms effectively narrow down the search space to identify $(n-1) times (n-1)$ MDS matrices.
arXiv Detail & Related papers (2024-03-15T15:03:02Z) - Large-scale gradient-based training of Mixtures of Factor Analyzers [67.21722742907981]
This article contributes both a theoretical analysis as well as a new method for efficient high-dimensional training by gradient descent.
We prove that MFA training and inference/sampling can be performed based on precision matrices, which does not require matrix inversions after training is completed.
Besides the theoretical analysis and matrices, we apply MFA to typical image datasets such as SVHN and MNIST, and demonstrate the ability to perform sample generation and outlier detection.
arXiv Detail & Related papers (2023-08-26T06:12:33Z) - On the Direct Construction of MDS and Near-MDS Matrices [6.356706243374352]
Near-MDS (NMDS) matrices with sub-optimal branch numbers offer a better balance between security and efficiency as a diffusion layer.
This paper introduces some direct constructions of NMDS matrices in both nonrecursive and recursion settings.
We propose a method for constructing involutory MDS and NMDS matrices using generalized Vandermonde matrices.
arXiv Detail & Related papers (2023-06-22T12:43:53Z) - One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $XTX$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
arXiv Detail & Related papers (2023-06-06T22:35:16Z) - Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-roots" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z) - Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the $\ell_1$ norm of the sparse matrix (to promote sparsity).
In this paper, motivated by some recent works on low-
arXiv Detail & Related papers (2020-08-18T04:46:22Z) - Sparsifying Parity-Check Matrices [60.28601275219819]
We consider the problem of minimizing the number of one-entries in parity-check matrices.
In the maximum-likelihood (ML) decoding method, the number of ones in PCMs is directly related to the time required to decode messages.
We propose a simple matrix row manipulation which alters the PCM, but not the code itself.
arXiv Detail & Related papers (2020-05-08T05:51:40Z)
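The row-manipulation idea above can be illustrated concretely: over GF(2), replacing a row of a parity-check matrix by its XOR with another row leaves the row space, and hence the code, unchanged. A minimal sketch follows; the greedy pivot rule here is an assumption for illustration, not the paper's algorithm.

```python
def sparsify(pcm):
    """Greedily XOR row pairs while doing so strictly reduces the total
    number of ones. Each replacement rows[i] <- rows[i] ^ rows[j]
    preserves the row space over GF(2), so the code is unchanged."""
    rows = [list(r) for r in pcm]
    improved = True
    while improved:
        improved = False
        for i in range(len(rows)):
            for j in range(len(rows)):
                if i == j:
                    continue
                cand = [a ^ b for a, b in zip(rows[i], rows[j])]
                if sum(cand) < sum(rows[i]):
                    rows[i] = cand  # same code, fewer ones
                    improved = True
    return rows

# Toy PCM with 6 ones; XORing the rows yields a sparser but equivalent matrix.
H = [[1, 1, 0, 1],
     [1, 1, 1, 0]]
print(sum(sum(r) for r in sparsify(H)))  # → 5
```

Since the total number of ones strictly decreases with every accepted replacement, the loop always terminates.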
This list is automatically generated from the titles and abstracts of the papers in this site.