On the Masking-Friendly Designs for Post-Quantum Cryptography
- URL: http://arxiv.org/abs/2311.08040v1
- Date: Tue, 14 Nov 2023 10:00:58 GMT
- Title: On the Masking-Friendly Designs for Post-Quantum Cryptography
- Authors: Suparna Kundu, Angshuman Karmakar, Ingrid Verbauwhede,
- Abstract summary: Masking is a well-known and provably secure countermeasure against side-channel attacks.
The performance overhead of integrating masking countermeasures is heavily influenced by the design choices of a cryptographic algorithm.
We show that the design decisions have a significant impact on the efficiency of integrating masking countermeasures into lattice-based cryptography.
- Score: 5.781461941357047
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Masking is a well-known and provably secure countermeasure against side-channel attacks. However, due to additional redundant computations, integrating masking schemes is expensive in terms of performance. The performance overhead of integrating masking countermeasures is heavily influenced by the design choices of a cryptographic algorithm and is often not considered during the design phase. In this work, we deliberate on the effect of design choices on integrating masking techniques into lattice-based cryptography. We select Scabbard, a suite of three lattice-based post-quantum key-encapsulation mechanisms (KEM), namely Florete, Espada, and Sable. We provide arbitrary-order masked implementations of all the constituent KEMs of the Scabbard suite by exploiting their specific design elements. We show that the masked implementations of Florete, Espada, and Sable outperform the masked implementations of Kyber in terms of speed for any order masking. Masked Florete exhibits a $73\%$, $71\%$, and $70\%$ performance improvement over masked Kyber corresponding to the first-, second-, and third-order. Similarly, Espada exhibits $56\%$, $59\%$, and $60\%$ and Sable exhibits $75\%$, $74\%$, and $73\%$ enhanced performance for first-, second-, and third-order masking compared to Kyber respectively. Our results show that the design decisions have a significant impact on the efficiency of integrating masking countermeasures into lattice-based cryptography.
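The abstract's core idea (splitting a secret into redundant shares so that no intermediate value leaks it) can be illustrated with a minimal toy sketch. This is hypothetical illustration code, not the paper's Scabbard or Kyber implementation: it shows only Boolean sharing at an arbitrary order, where a secret is split into `order + 1` shares whose XOR recovers it, and any `order` of them are uniformly random.

```python
import secrets

def mask(secret: int, order: int, bits: int = 16) -> list[int]:
    """Split `secret` into (order + 1) Boolean shares.

    Any `order` shares are independent of the secret; only the XOR of
    all shares reveals it. Toy sketch of arbitrary-order masking, not
    the masked KEM arithmetic from the paper.
    """
    shares = [secrets.randbits(bits) for _ in range(order)]
    last = secret
    for s in shares:
        last ^= s  # the final share absorbs the randomness
    return shares + [last]

def unmask(shares: list[int]) -> int:
    """Recombine shares by XOR-ing them together."""
    out = 0
    for s in shares:
        out ^= s
    return out
```

The "redundant computations" the abstract mentions come from operating on all `order + 1` shares instead of one value, which is why design choices that reduce share-wise work matter so much for performance.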
Related papers
- Masking Gaussian Elimination at Arbitrary Order, with Application to Multivariate- and Code-Based PQC [4.655421225385125]
We provide a masking scheme for Gaussian Elimination (GE) with back substitution to defend against first- and higher-order attacks.
We propose a masked algorithm for transforming a system of linear equations into row-echelon form.
We evaluate the overhead of our countermeasure for several post-quantum candidates and their different security levels.
arXiv Detail & Related papers (2024-10-31T14:01:02Z)
- Efficiently Dispatching Flash Attention For Partially Filled Attention Masks [29.36452085947087]
Transformers are widely used across various applications, many of which yield sparse or partially filled attention matrices.
We introduce Binary Block Masking, a highly efficient modification that enhances Flash Attention by making it mask-aware.
Our experiments on attention masks derived from real-world scenarios demonstrate up to a 9x runtime improvement.
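The block-masking idea summarized above can be sketched in a few lines. This is a hypothetical NumPy illustration (the function name and tiling are assumptions, not the paper's Flash Attention kernel): the dense attention mask is tiled into blocks, and a per-block binary indicator marks blocks with at least one unmasked entry, so fully-masked blocks can be skipped.

```python
import numpy as np

def block_indicator(mask: np.ndarray, block: int = 4) -> np.ndarray:
    """Reduce a dense (n, n) boolean attention mask to a per-block
    indicator: True where the block contains any unmasked entry.
    Toy sketch of mask-aware block skipping, not the paper's kernel."""
    n = mask.shape[0]
    assert n % block == 0, "n must be divisible by the block size"
    # Tile into (n_blocks, block, n_blocks, block) and test each tile.
    tiles = mask.reshape(n // block, block, n // block, block)
    return tiles.any(axis=(1, 3))
```

For a sparse mask, most indicator entries are False, and an attention kernel consulting this indicator never loads or computes those blocks, which is where the reported runtime improvement would come from.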
arXiv Detail & Related papers (2024-09-23T15:11:07Z)
- ColorMAE: Exploring data-independent masking strategies in Masked AutoEncoders [53.3185750528969]
Masked AutoEncoders (MAE) have emerged as a robust self-supervised framework.
We introduce a data-independent method, termed ColorMAE, which generates different binary mask patterns by filtering random noise.
We demonstrate our strategy's superiority in downstream tasks compared to random masking.
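The "binary masks from filtered noise" idea can be sketched as follows. This is a hypothetical illustration under assumed details (a simple separable box filter stands in for ColorMAE's noise filters; the function name is invented): random noise is low-pass filtered to make it spatially correlated, then thresholded at a quantile to hit a target masking ratio.

```python
import numpy as np

def noise_filtered_mask(h: int, w: int, mask_ratio: float = 0.75,
                        kernel: int = 5, seed: int = 0) -> np.ndarray:
    """Binary (h, w) mask from low-pass-filtered random noise.
    Toy sketch in the spirit of the summarized idea, not ColorMAE's code."""
    rng = np.random.default_rng(seed)
    noise = rng.random((h, w))
    k = np.ones(kernel) / kernel
    # Separable box filter: smooth rows, then columns.
    smooth = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, noise)
    smooth = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, smooth)
    # Threshold at the mask_ratio quantile so ~mask_ratio of patches are masked.
    thresh = np.quantile(smooth, mask_ratio)
    return smooth < thresh  # True = masked patch
```

Because the noise is filtered before thresholding, masked patches form contiguous blobs rather than independent pixels, which is the structural difference from plain random masking.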
arXiv Detail & Related papers (2024-07-17T22:04:00Z)
- Should You Mask 15% in Masked Language Modeling? [86.91486000124156]
Masked language models conventionally use a masking rate of 15%.
We find that masking up to 40% of input tokens can outperform the 15% baseline.
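The masking-rate choice above can be illustrated with a toy sketch. This is hypothetical code (names are invented; real MLM pipelines additionally split replacements between `[MASK]`, random tokens, and unchanged tokens): it simply replaces a chosen fraction of input tokens with a mask token.

```python
import random

def mask_tokens(tokens: list[str], mask_rate: float = 0.40,
                mask_token: str = "[MASK]", seed: int = 0):
    """Replace ~mask_rate of the tokens with mask_token.
    Toy sketch of the masking-rate knob, not the paper's training code."""
    rng = random.Random(seed)
    n_mask = max(1, round(len(tokens) * mask_rate))
    positions = set(rng.sample(range(len(tokens)), n_mask))
    masked = [mask_token if i in positions else t
              for i, t in enumerate(tokens)]
    return masked, sorted(positions)
```

Raising `mask_rate` from 0.15 toward 0.40 gives the model more prediction targets per sequence at the cost of less visible context, which is the trade-off the summarized paper studies.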
arXiv Detail & Related papers (2022-02-16T11:42:34Z)
- Mask Transfiner for High-Quality Instance Segmentation [95.74244714914052]
We present Mask Transfiner for high-quality and efficient instance segmentation.
Our approach only processes detected error-prone tree nodes and self-corrects their errors in parallel.
Our code and trained models will be available at http://vis.xyz/pub/transfiner.
arXiv Detail & Related papers (2021-11-26T18:58:22Z)
- Image Inpainting by End-to-End Cascaded Refinement with Mask Awareness [66.55719330810547]
Inpainting arbitrary missing regions is challenging because learning valid features for various masked regions is nontrivial.
We propose a novel mask-aware inpainting solution that learns multi-scale features for missing regions in the encoding phase.
Our framework is validated both quantitatively and qualitatively via extensive experiments on three public datasets.
arXiv Detail & Related papers (2021-04-28T13:17:47Z)
- Contrastive Context-Aware Learning for 3D High-Fidelity Mask Face Presentation Attack Detection [103.7264459186552]
Face presentation attack detection (PAD) is essential to secure face recognition systems.
Most existing 3D mask PAD benchmarks suffer from several drawbacks.
We introduce a large-scale High-Fidelity Mask dataset to bridge the gap to real-world applications.
arXiv Detail & Related papers (2021-04-13T12:48:38Z)
- BoxInst: High-Performance Instance Segmentation with Box Annotations [102.10713189544947]
We present a high-performance method that can achieve mask-level instance segmentation with only bounding-box annotations for training.
Our core idea is to redesign the loss for learning masks in instance segmentation, with no modification to the segmentation network itself.
arXiv Detail & Related papers (2020-12-03T22:27:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.