DM-RSA: An Extension of RSA with Dual Modulus
- URL: http://arxiv.org/abs/2507.14197v1
- Date: Mon, 14 Jul 2025 14:09:53 GMT
- Title: DM-RSA: An Extension of RSA with Dual Modulus
- Authors: Andriamifidisoa Ramamonjy, Rufine Marius Lalasoa
- Abstract summary: DM-RSA is a variant of the RSA cryptosystem that employs two distinct moduli symmetrically to enhance security. By leveraging the Chinese Remainder Theorem (CRT) for decryption, DM-RSA provides increased robustness against side-channel attacks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce DM-RSA (Dual Modulus RSA), a variant of the RSA cryptosystem that employs two distinct moduli symmetrically to enhance security. By leveraging the Chinese Remainder Theorem (CRT) for decryption, DM-RSA provides increased robustness against side-channel attacks while preserving the efficiency of classical RSA. This approach improves resistance to partial compromise of a modulus and integrates easily into existing infrastructures.
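The CRT-based decryption that DM-RSA builds on can be sketched as follows. This is a minimal single-modulus textbook example with toy parameters; the dual-modulus construction itself is not reproduced here.

```python
# Standard CRT-based RSA decryption, the building block DM-RSA extends.
# Toy textbook parameters only -- not the paper's dual-modulus scheme.

def rsa_crt_decrypt(c, p, q, d):
    """Decrypt ciphertext c using the CRT speed-up for modulus n = p*q."""
    dp = d % (p - 1)          # exponent reduced mod p-1
    dq = d % (q - 1)          # exponent reduced mod q-1
    q_inv = pow(q, -1, p)     # q^{-1} mod p (Python 3.8+)
    mp = pow(c, dp, p)        # partial decryption mod p
    mq = pow(c, dq, q)        # partial decryption mod q
    h = (q_inv * (mp - mq)) % p
    return mq + h * q         # recombine via Garner's formula

# Classic toy example: p=61, q=53, n=3233, e=17, d=2753.
p, q, e, d = 61, 53, 17, 2753
m = 65
c = pow(m, e, p * q)
assert rsa_crt_decrypt(c, p, q, d) == m
```

The two half-size exponentiations mod p and mod q are what make CRT decryption roughly four times faster than a single exponentiation mod n, and they are also the step that side-channel-hardened variants must protect.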
Related papers
- Approximating Euler Totient Function using Linear Regression on RSA moduli [0.0]
The security of the RSA cryptosystem is based on the intractability of computing Euler's totient function phi(n) for large integers n. In this work, we explore a machine learning approach to approximate Euler's totient function phi using linear regression models. Preliminary results suggest that phi can be approximated within a small relative error margin, which may be sufficient to aid in certain classes of RSA attacks.
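The idea can be illustrated with a minimal sketch (a hypothetical setup, not the paper's actual model or features): fit phi(n) ≈ a*n + b by ordinary least squares over small semiprimes whose totients are known exactly.

```python
# Hypothetical illustration of the regression idea: for semiprimes
# n = p*q we have phi(n) = (p-1)*(q-1) = n - p - q + 1, so a linear
# model in n should fit with a slope near 1.

def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i, ok in enumerate(sieve) if ok]

# Training data: small semiprimes and their exact totients.
ps = primes_up_to(200)
data = [(p * q, (p - 1) * (q - 1)) for p in ps for q in ps if p < q]

# Ordinary least squares for phi ~ a*n + b (closed form, one feature).
xs = [n for n, _ in data]
ys = [t for _, t in data]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# Evaluate on a held-out semiprime: 101 * 103, phi = 100 * 102 = 10200.
n_test = 101 * 103
approx = a * n_test + b
rel_err = abs(approx - 10200) / 10200
```

Because phi(n) differs from n only by the sublinear term p + q - 1, even this one-feature fit lands within a small relative error, which is the effect the abstract alludes to.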
arXiv Detail & Related papers (2025-07-09T10:01:25Z) - Taming Polysemanticity in LLMs: Provable Feature Recovery via Sparse Autoencoders [50.52694757593443]
Existing SAE training algorithms often lack rigorous mathematical guarantees and suffer from practical limitations. We first propose a novel statistical framework for the feature recovery problem, which includes a new notion of feature identifiability. We introduce a new SAE training algorithm based on "bias adaptation", a technique that adaptively adjusts neural network bias parameters to ensure appropriate activation sparsity.
arXiv Detail & Related papers (2025-06-16T20:58:05Z) - A Geometric Square-Based Approach to RSA Integer Factorization [0.0]
We present a new approach to RSA factorization inspired by geometric interpretations and square differences. This method reformulates the problem in terms of the distance between perfect squares and provides a recurrence relation that allows rapid convergence.
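The difference-of-squares idea underlying such approaches is classical Fermat factorization, sketched below; the paper's specific recurrence is not reproduced here.

```python
import math

def fermat_factor(n):
    """Classical Fermat factorization: find x, y with x^2 - y^2 = n,
    so n = (x - y)(x + y). Converges quickly when the two prime
    factors are close together."""
    assert n % 2 == 1, "n must be odd"
    x = math.isqrt(n)
    if x * x < n:
        x += 1               # start at ceil(sqrt(n))
    while True:
        y2 = x * x - n
        y = math.isqrt(y2)
        if y * y == y2:      # x^2 - n is a perfect square
            return x - y, x + y
        x += 1

# 101 * 103 = 10403; the factors are close, so one step suffices.
assert fermat_factor(10403) == (101, 103)
```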
arXiv Detail & Related papers (2025-06-01T08:55:25Z) - Exploiting Mixture-of-Experts Redundancy Unlocks Multimodal Generative Abilities [69.26544016976396]
We exploit the redundancy within Mixture-of-Experts (MoEs) as a source of additional capacity for learning a new modality. We preserve the original language generation capabilities by applying low-rank adaptation exclusively to the tokens of the new modality.
arXiv Detail & Related papers (2025-03-28T15:21:24Z) - Large Language Diffusion Models [77.02553707673418]
Autoregressive models (ARMs) are widely regarded as the cornerstone of large language models (LLMs). We introduce LLaDA, a diffusion model trained from scratch under the pre-training and supervised fine-tuning paradigm. Across extensive benchmarks, LLaDA demonstrates strong scalability, outperforming our self-constructed ARM baselines.
arXiv Detail & Related papers (2025-02-14T08:23:51Z) - Segmenting Action-Value Functions Over Time-Scales in SARSA via TD($Δ$) [0.0]
This study expands the temporal difference decomposition approach, TD($Δ$), to the SARSA algorithm. TD($Δ$) facilitates learning over several time-scales by breaking the action-value function into components associated with distinct discount factors.
arXiv Detail & Related papers (2024-11-22T07:52:28Z) - M2CVD: Enhancing Vulnerability Semantic through Multi-Model Collaboration for Code Vulnerability Detection [52.4455893010468]
Large Language Models (LLMs) have strong capabilities in code comprehension, but fine-tuning costs and semantic alignment issues limit their project-specific optimization.
Code models such as CodeBERT are easy to fine-tune, but they often struggle to learn vulnerability semantics from complex code.
This paper introduces the Multi-Model Collaborative Vulnerability Detection approach (M2CVD) to improve the detection accuracy of code models.
arXiv Detail & Related papers (2024-06-10T00:05:49Z) - Two RSA-based Cryptosystems [0.0]
The cryptosystem RSA is a very popular cryptosystem in the study of Cryptography.
In this article, we explore how the idea of a primitive mth root of unity in a ring can be integrated into the Discrete Fourier Transform.
arXiv Detail & Related papers (2024-05-17T18:35:29Z) - SOCI^+: An Enhanced Toolkit for Secure Outsourced Computation on Integers [50.608828039206365]
We propose SOCI+, which significantly improves the performance of SOCI.
SOCI+ employs a novel (2, 2)-threshold Paillier cryptosystem with fast encryption and decryption as its cryptographic primitive.
Compared with SOCI, our experimental evaluation shows that SOCI+ is up to 5.4 times more efficient in computation and incurs up to 40% less communication overhead.
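For context, a minimal textbook Paillier sketch with toy parameters (not the (2, 2)-threshold variant SOCI+ introduces) shows the additive homomorphism that secure outsourced integer computation relies on:

```python
import math
import random

# Textbook Paillier with toy parameters. Real deployments use large
# primes; the threshold splitting of the secret key is omitted here.

def paillier_keygen(p, q):
    n = p * q
    lam = math.lcm(p - 1, q - 1)       # Carmichael's lambda(n)
    g = n + 1                          # standard simplified generator
    mu = pow(lam, -1, n)               # valid precisely when g = n + 1
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:         # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    L = (x - 1) // n                   # L(u) = (u - 1) / n
    return (L * mu) % n

pk, sk = paillier_keygen(61, 53)
c1, c2 = encrypt(pk, 42), encrypt(pk, 99)
# Homomorphic addition: multiplying ciphertexts adds plaintexts.
assert decrypt(pk, sk, (c1 * c2) % (pk[0] ** 2)) == 141
```

The ciphertext-product property is what lets an untrusted server add encrypted integers without ever seeing the plaintexts; SOCI+'s contribution is making this primitive fast in a two-party threshold setting.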
arXiv Detail & Related papers (2023-09-27T05:19:32Z) - Publicly-Verifiable Deletion via Target-Collapsing Functions [81.13800728941818]
We show that target-collapsing enables publicly-verifiable deletion (PVD).
We build on this framework to obtain a variety of primitives supporting publicly-verifiable deletion from weak cryptographic assumptions.
arXiv Detail & Related papers (2023-03-15T15:00:20Z) - RSA+: An RSA variant [0.0]
We introduce a new probabilistic public-key cryptosystem which combines the main ingredients of the well-known RSA and Rabin cryptosystems.
We investigate the security and performance of our new scheme in comparison to the other two.
arXiv Detail & Related papers (2022-12-31T02:48:17Z) - On the Convergence of SARSA with Linear Function Approximation [28.48689596152752]
SARSA is a classical on-policy control algorithm for reinforcement learning.
We show how fast SARSA converges to a bounded region.
We characterize the behavior of linear SARSA in a new regime.
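The object of study is the standard linear SARSA update, sketched here in its generic textbook TD(0) form; the paper's specific regime and convergence rates are not reproduced.

```python
# Generic linear SARSA update: Q(s, a) is approximated as w . x(s, a)
# for a feature vector x(s, a). One on-policy TD(0) step.

def sarsa_update(w, x_sa, r, x_next, alpha=0.1, gamma=0.99):
    """Return updated weights after one transition (s, a, r, s', a')."""
    q = sum(wi * xi for wi, xi in zip(w, x_sa))          # Q(s, a)
    q_next = sum(wi * xi for wi, xi in zip(w, x_next))   # Q(s', a')
    td_error = r + gamma * q_next - q
    return [wi + alpha * td_error * xi for wi, xi in zip(w, x_sa)]

w = [0.0, 0.0]
w = sarsa_update(w, [1.0, 0.0], 1.0, [0.0, 1.0])
# With zero initial weights, the TD error equals the reward.
assert w == [0.1, 0.0]
```

Unlike Q-learning, the bootstrap target uses the action the current policy actually selects next, which is why convergence analyses for SARSA with function approximation are delicate.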
arXiv Detail & Related papers (2022-02-14T16:04:40Z) - Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models [51.46732511844122]
Powerful pre-trained language models (PLMs) can be fooled by small perturbations or intentional attacks.
We present Virtual Data Augmentation (VDA), a general framework for robustly fine-tuning PLMs.
Our approach is able to improve the robustness of PLMs and alleviate the performance degradation under adversarial attacks.
arXiv Detail & Related papers (2021-09-13T09:15:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.