Exploiting the Potential of Linearity in Automatic Differentiation and Computational Cryptography
- URL: http://arxiv.org/abs/2510.17220v1
- Date: Mon, 20 Oct 2025 07:02:48 GMT
- Title: Exploiting the Potential of Linearity in Automatic Differentiation and Computational Cryptography
- Authors: Giulia Giusti
- Abstract summary: The concept of linearity plays a central role in both mathematics and computer science. This thesis explores the use of Linear Logic (LL) to model programming paradigms based on linearity. It comprises two parts: ADLL and CryptoBLL.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The concept of linearity plays a central role in both mathematics and computer science, with distinct yet complementary meanings. In mathematics, linearity underpins functions and vector spaces, forming the foundation of linear algebra and functional analysis. In computer science, it relates to resource-sensitive computation. Linear Logic (LL), for instance, models assumptions that must be used exactly once, providing a natural framework for tracking computational resources such as time, memory, or data access. This dual perspective makes linearity essential to programming languages, type systems, and formal models that express both computational complexity and composability. Bridging these interpretations enables rigorous yet practical methodologies for analyzing and verifying complex systems. This thesis explores the use of LL to model programming paradigms based on linearity. It comprises two parts: ADLL and CryptoBLL. The former applies LL to Automatic Differentiation (AD), modeling linear functions over the reals and the transposition operation. The latter uses LL to express complexity constraints on adversaries in computational cryptography. In AD, two main approaches use linear type systems: a theoretical one grounded in proof theory, and a practical one implemented in JAX, a Python library developed by Google for machine learning research. In contrast, frameworks like PyTorch and TensorFlow support AD without linear types. ADLL aims to bridge theory and practice by connecting JAX's type system to LL. In modern cryptography, several calculi aim to model cryptographic proofs within the computational paradigm. These efforts face a trade-off between expressiveness, to capture reductions, and simplicity, to abstract probability and complexity. CryptoBLL addresses this tension by proposing a framework for the automatic analysis of protocols in computational cryptography.
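As a concrete illustration of the JAX side of ADLL (this sketch is not taken from the thesis itself), JAX exposes the transposition operation on linear functions via `jax.linear_transpose`: given a function that is linear in its argument, it returns the transposed (adjoint) map, which is exactly the operation ADLL models in Linear Logic.

```python
import jax
import jax.numpy as jnp

# f is linear in x: f(x) = A @ x for a fixed matrix A.
A = jnp.array([[1.0, 2.0],
               [3.0, 4.0]])

def f(x):
    return A @ x

# jax.linear_transpose builds the transpose (adjoint) of a linear
# function; for f(x) = A x, the transpose maps y to A^T y.
# The second argument supplies the shape/dtype of f's input.
f_t = jax.linear_transpose(f, jnp.zeros(2))

y = jnp.array([1.0, 1.0])
(x_bar,) = f_t(y)   # x_bar = A.T @ y = [4.0, 6.0]
```

Transposition is well-defined only because `f` is linear; applying `jax.linear_transpose` to a non-linear function is an error, which is the behavior a linear type system makes static.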
Related papers
- Generative Logic: A New Computer Architecture for Deterministic Reasoning and Knowledge Generation [0.0]
Generative Logic (GL) is a deterministic architecture that starts from user-supplied axiomatic definitions. GL enumerates conjectures, applies normalization, type, and CE filters, and automatically reconstructs machine-checkable proofs.
arXiv Detail & Related papers (2025-07-25T17:29:19Z)
- An Approximation Theory Perspective on Machine Learning [1.2289361708127877]
We will discuss emerging trends in machine learning, including the role of shallow/deep networks. Despite function approximation being a fundamental problem in machine learning, approximation theory does not play a central role in the theoretical foundations of the field.
arXiv Detail & Related papers (2025-06-02T18:50:18Z)
- Learning Linear Attention in Polynomial Time [127.14106124669645]
We provide the first results on the learnability of single-layer Transformers with linear attention. We show that linear attention may be viewed as a linear predictor in a suitably defined RKHS. We also show how to efficiently identify training datasets for which every empirical risk minimizer is equivalent to the linear Transformer.
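As an illustrative sketch (not from the paper above), linear attention drops the softmax kernel, so by associativity the output Q(KᵀV) can be computed without ever forming the n × n attention matrix, reducing the cost from O(n²d) to O(nd²):

```python
import numpy as np

def linear_attention(Q, K, V):
    """Linear attention with an identity feature map.

    Computes row-normalized Q @ (K^T @ V); by associativity this
    equals (Q @ K^T) @ V but avoids the n x n attention matrix.
    """
    kv = K.T @ V            # (d, d_v) summary of keys and values
    z = Q @ K.sum(axis=0)   # (n,) per-query normalization terms
    return (Q @ kv) / z[:, None]

rng = np.random.default_rng(0)
# Non-negative Q, K keep the normalization terms positive.
Q = np.abs(rng.normal(size=(5, 3)))
K = np.abs(rng.normal(size=(5, 3)))
V = rng.normal(size=(5, 2))

fast = linear_attention(Q, K, V)
# Reference: explicit (Q K^T) V with the same normalization.
A = Q @ K.T
ref = (A @ V) / A.sum(axis=1, keepdims=True)
assert np.allclose(fast, ref)
```

The equivalence of the two computations is precisely the linearity being exploited; the RKHS view in the paper generalizes this to other feature maps.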
arXiv Detail & Related papers (2024-10-14T02:41:01Z)
- CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra [62.37017125812101]
We propose a simple but general framework for large-scale linear algebra problems in machine learning, named CoLA.
By combining a linear operator abstraction with compositional dispatch rules, CoLA automatically constructs memory- and runtime-efficient numerical algorithms.
We showcase its efficacy across a broad range of applications, including partial differential equations, Gaussian processes, equivariant model construction, and unsupervised learning.
arXiv Detail & Related papers (2023-09-06T14:59:38Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- The Dual PC Algorithm and the Role of Gaussianity for Structure Learning of Bayesian Networks [0.0]
We show that the dual PC algorithm outperforms the classic PC algorithm in terms of run time and in recovering the underlying network structure.
We also show that the dual PC algorithm applies for Gaussian copula models, and demonstrate its performance in that setting.
arXiv Detail & Related papers (2021-12-16T17:27:29Z)
- Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers [50.85524803885483]
This work proposes a formal definition of statistically meaningful (SM) approximation which requires the approximating network to exhibit good statistical learnability.
We study SM approximation for two function classes: circuits and Turing machines.
arXiv Detail & Related papers (2021-07-28T04:28:55Z)
- On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the problem.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z)
- Tensor Relational Algebra for Machine Learning System Design [7.764107702934616]
We present an alternative implementation abstraction called the tensor relational algebra (TRA).
TRA is a set-based algebra based on the relational algebra.
Our empirical study shows that the optimized TRA-based back-end can significantly outperform alternatives for running ML in distributed clusters.
arXiv Detail & Related papers (2020-09-01T15:51:24Z)
- A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale machine learning aims to learn patterns from big data efficiently, while retaining comparable performance.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.