Using Modular Arithmetic Optimized Neural Networks To Crack Affine Cryptographic Schemes Efficiently
- URL: http://arxiv.org/abs/2507.14229v1
- Date: Thu, 17 Jul 2025 04:54:10 GMT
- Title: Using Modular Arithmetic Optimized Neural Networks To Crack Affine Cryptographic Schemes Efficiently
- Authors: Vanja Stojanović, Žiga Lesar, Ciril Bohak
- Abstract summary: We investigate the cryptanalysis of affine ciphers using a hybrid neural network architecture. Our approach integrates a modular branch that processes raw ciphertext sequences and a statistical branch that leverages letter frequency features.
- Score: 0.27309692684728615
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the cryptanalysis of affine ciphers using a hybrid neural network architecture that combines modular arithmetic-aware and statistical feature-based learning. Inspired by recent advances in interpretable neural networks for modular arithmetic and neural cryptanalysis of classical ciphers, our approach integrates a modular branch that processes raw ciphertext sequences and a statistical branch that leverages letter frequency features. Experiments on datasets derived from natural English text demonstrate that the hybrid model attains high key recovery accuracy for short and moderate ciphertexts, outperforming purely statistical approaches for the affine cipher. However, performance degrades for very long ciphertexts, highlighting challenges in model generalization.
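To make the target problem concrete, here is a minimal classical baseline for the task the paper's hybrid model addresses: an affine cipher maps each letter x to E(x) = (a·x + b) mod 26 with gcd(a, 26) = 1, and the key can be brute-forced over all 12 × 26 = 312 candidates by scoring decryptions against English letter frequencies. This is an illustrative sketch of the "purely statistical approach" the paper compares against, not the authors' neural model; the function names and frequency table are assumptions for this example.

```python
from math import gcd
from collections import Counter

M = 26  # alphabet size
# Approximate English letter frequencies for a-z, in percent (assumed values).
ENGLISH_FREQ = [8.2, 1.5, 2.8, 4.3, 12.7, 2.2, 2.0, 6.1, 7.0, 0.15, 0.77,
                4.0, 2.4, 6.7, 7.5, 1.9, 0.095, 6.0, 6.3, 9.1, 2.8, 0.98,
                2.4, 0.15, 2.0, 0.074]

def affine_encrypt(plaintext, a, b):
    """E(x) = (a*x + b) mod 26; 'a' must be coprime with 26."""
    assert gcd(a, M) == 1
    return "".join(chr((a * (ord(c) - 97) + b) % M + 97)
                   for c in plaintext.lower() if c.isalpha())

def affine_decrypt(ciphertext, a, b):
    """D(y) = a^{-1} * (y - b) mod 26, using the modular inverse of a."""
    a_inv = pow(a, -1, M)
    return "".join(chr(a_inv * (ord(c) - 97 - b) % M + 97) for c in ciphertext)

def chi_squared(text):
    """Chi-squared distance between the text's letter counts and English."""
    counts = Counter(text)
    n = len(text)
    return sum((counts.get(chr(i + 97), 0) - n * f / 100) ** 2 / (n * f / 100)
               for i, f in enumerate(ENGLISH_FREQ))

def crack_affine(ciphertext):
    """Try all 312 valid keys; keep the most English-looking decryption."""
    candidates = ((a, b) for a in range(1, M) if gcd(a, M) == 1
                  for b in range(M))
    return min(candidates,
               key=lambda k: chi_squared(affine_decrypt(ciphertext, *k)))
```

As the abstract notes, this frequency-only attack needs a reasonably long ciphertext to be reliable, which is exactly the regime where the paper reports its hybrid model outperforming statistical baselines on short inputs.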
Related papers
- A New Approach in Cryptanalysis Through Combinatorial Equivalence of Cryptosystems [0.0]
We propose a new approach in cryptanalysis based on an evolution of the concept of Combinatorial Equivalence. The aim is to rewrite a cryptosystem in an equivalent form in order to reveal new properties that more strongly discriminate the secret key used during encryption.
arXiv Detail & Related papers (2026-02-16T08:07:41Z)
- Privacy-Preserving Spiking Neural Networks: A Deep Dive into Encryption Parameter Optimisation [1.2725257829111285]
Spiking Neural Networks (SNNs) mimic the brain's event-driven behaviour, offering improved performance and reduced power use. BioEncryptSNN is a spiking-neural-network-based encryption-decryption framework for secure and noise-resilient data protection.
arXiv Detail & Related papers (2025-10-22T12:43:46Z)
- Unlocking Symbol-Level Precoding Efficiency Through Tensor Equivariant Neural Network [84.22115118596741]
We propose an end-to-end deep learning (DL) framework with low inference complexity for symbol-level precoding. We show that the proposed framework captures the substantial performance gains of optimal SLP while achieving an approximately 80-times speedup over conventional methods.
arXiv Detail & Related papers (2025-10-02T15:15:50Z)
- ALICE: An Interpretable Neural Architecture for Generalization in Substitution Ciphers [0.3403377445166164]
We present cryptogram solving as an ideal testbed for studying neural network reasoning and generalization. We develop ALICE, a simple encoder-only Transformer that sets a new state of the art for both accuracy and speed on this decryption problem. Surprisingly, ALICE generalizes to unseen ciphers after training on only ~1500 unique ciphers.
arXiv Detail & Related papers (2025-09-08T23:33:53Z)
- Keyed Chaotic Dynamics for Privacy-Preserving Neural Inference [0.0]
This work introduces a novel encryption method for ensuring the security of neural inference. By constructing key-conditioned chaotic graph dynamical systems, we enable the encryption and decryption of real-valued tensors within the neural architecture.
arXiv Detail & Related papers (2025-05-29T17:05:42Z)
- Cryptanalysis via Machine Learning Based Information Theoretic Metrics [58.96805474751668]
We propose two novel applications of machine learning (ML) algorithms to perform cryptanalysis on any cryptosystem. These algorithms can be readily applied in an audit setting to evaluate the robustness of a cryptosystem. We show that our classification model correctly identifies the encryption schemes that are not IND-CPA secure, such as DES, RSA, and AES ECB, with high accuracy.
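The reason AES in ECB mode fails IND-CPA, as flagged in the summary above, is that identical plaintext blocks yield identical ciphertext blocks, so a distinguisher only needs to count repeated blocks. A minimal sketch of that distinguisher follows; the block cipher here is a toy XOR stand-in (not real AES, and not the paper's ML classifier), used only to make the repeated-block property visible.

```python
import os

BLOCK = 16

def toy_ecb_encrypt(plaintext, key):
    # Toy deterministic per-block cipher (NOT real AES): each block is XOR-ed
    # with the same key, so equal plaintext blocks give equal ciphertext
    # blocks -- exactly the leak that ECB mode exposes.
    pad = (-len(plaintext)) % BLOCK
    data = plaintext + bytes(pad)
    return b"".join(bytes(x ^ k for x, k in zip(data[i:i + BLOCK], key))
                    for i in range(0, len(data), BLOCK))

def looks_like_ecb(ciphertext):
    """IND-CPA-style distinguisher: repeated ciphertext blocks betray ECB."""
    blocks = [ciphertext[i:i + BLOCK] for i in range(0, len(ciphertext), BLOCK)]
    return len(set(blocks)) < len(blocks)

key = os.urandom(BLOCK)
# A chosen plaintext with repeated blocks, as in the IND-CPA game.
ct = toy_ecb_encrypt(b"A" * 64, key)
```

A randomized mode (e.g. CBC with a fresh IV) would make the repeated-block test fail, which is the intuition behind classifying such schemes as IND-CPA secure.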
arXiv Detail & Related papers (2025-01-25T04:53:36Z)
- Convergence Analysis for Deep Sparse Coding via Convolutional Neural Networks [7.956678963695681]
We explore intersections between sparse coding and deep learning to enhance our understanding of feature extraction capabilities. We derive convergence rates for convolutional neural networks (CNNs) in their ability to extract sparse features. Inspired by the strong connection between sparse coding and CNNs, we explore training strategies to encourage neural networks to learn more sparse features.
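The sparsity mechanism behind this line of work can be shown in a few lines: the soft-thresholding (shrinkage) operator is the proximal step of the L1 penalty used in ISTA-style sparse coding, and it drives small coefficients exactly to zero. This is a generic illustration of that operator, not code from the paper; the threshold value 0.5 is arbitrary.

```python
def soft_threshold(x, lam):
    """Proximal operator of lam * |x|: the shrinkage step used in ISTA."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Applying the shrinkage element-wise zeroes out small coefficients,
# which is how an L1 penalty produces sparse feature codes.
codes = [soft_threshold(x, 0.5) for x in [2.0, 0.3, -0.1, -1.5]]
```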
arXiv Detail & Related papers (2024-08-10T12:43:55Z)
- Tram: A Token-level Retrieval-augmented Mechanism for Source Code Summarization [76.57699934689468]
We propose a fine-grained Token-level retrieval-augmented mechanism (Tram) on the decoder side to enhance the performance of neural models.
To overcome the challenge of token-level retrieval in capturing contextual code semantics, we also propose integrating code semantics into individual summary tokens.
arXiv Detail & Related papers (2023-05-18T16:02:04Z)
- Understanding the Mapping of Encode Data Through An Implementation of Quantum Topological Analysis [0.7106986689736827]
We show that differences between encoding techniques can be visualized by investigating the topology of the data embedded in complex Hilbert space.
Our results suggest the encoding method needs to be considered carefully within different quantum machine learning models.
arXiv Detail & Related papers (2022-09-21T18:46:08Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- Generative Deep Learning Techniques for Password Generation [0.5249805590164902]
We study a broad collection of deep learning and probabilistic based models in the light of password guessing.
We provide novel generative deep-learning models in terms of variational autoencoders exhibiting state-of-art sampling performance.
We perform a thorough empirical analysis in a unified controlled framework over well-known datasets.
arXiv Detail & Related papers (2020-12-10T14:11:45Z)
- Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks (RNNs) used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
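For readers unfamiliar with the WFA model named above: a WFA assigns each string a weight via a vector-matrix-vector product, f(w) = αᵀ · A_{w1} · … · A_{wn} · β. The sketch below evaluates that product in plain Python for a small hand-built example (a 2-state WFA counting occurrences of 'a'); the example automaton and names are illustrative, not taken from the paper.

```python
def wfa_weight(alpha, transitions, beta, word):
    """Weight of a word under a WFA: alpha^T . A_{w1} . ... . A_{wn} . beta."""
    v = alpha[:]
    for symbol in word:
        A = transitions[symbol]
        v = [sum(v[i] * A[i][j] for i in range(len(v)))
             for j in range(len(A[0]))]
    return sum(x * y for x, y in zip(v, beta))

# A 2-state WFA over {a, b} whose weight is the number of 'a's in the word:
alpha = [1.0, 0.0]          # start with count 0
beta  = [0.0, 1.0]          # read off the accumulated count
transitions = {
    "a": [[1.0, 1.0],       # reading 'a' adds 1 to the count
          [0.0, 1.0]],
    "b": [[1.0, 0.0],       # reading 'b' leaves the count unchanged
          [0.0, 1.0]],
}
```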
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- Cryptotree: fast and accurate predictions on encrypted structured data [0.0]
Homomorphic Encryption (HE) is acknowledged for its ability to allow computation on encrypted data, where both the input and output are encrypted.
We propose Cryptotree, a framework that enables the use of Random Forests (RF), a very powerful learning procedure compared to linear regression.
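The homomorphic property underlying this line of work can be illustrated without any HE library: unpadded textbook RSA is multiplicatively homomorphic, since Enc(m1)·Enc(m2) mod n decrypts to m1·m2 mod n. The toy below uses deliberately tiny primes and no padding, so it is insecure by construction; it is a sketch of the property, not of the schemes Cryptotree actually uses.

```python
# Toy textbook RSA (tiny primes, no padding) to show the homomorphic property.
p, q = 61, 53
n = p * q                      # modulus, 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse of e)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

# Multiplying ciphertexts multiplies the underlying plaintexts (mod n):
c1, c2 = enc(7), enc(9)
product_cipher = (c1 * c2) % n
```

Practical HE schemes for ML inference (e.g. those supporting both addition and multiplication on encrypted values) generalize this idea, at the cost of the parameter tuning and noise management the surrounding papers discuss.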
arXiv Detail & Related papers (2020-06-15T11:48:01Z)
- Improved Code Summarization via a Graph Neural Network [96.03715569092523]
In general, source code summarization techniques take source code as input and output a natural-language description.
We present an approach that uses a graph-based neural architecture that better matches the default structure of the AST to generate these summaries.
arXiv Detail & Related papers (2020-04-06T17:36:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.