Constructions of Efficiently Implementable Boolean Functions Possessing High Nonlinearity and Good Resistance to Algebraic Attacks
- URL: http://arxiv.org/abs/2408.11583v1
- Date: Wed, 21 Aug 2024 12:46:50 GMT
- Title: Constructions of Efficiently Implementable Boolean Functions Possessing High Nonlinearity and Good Resistance to Algebraic Attacks
- Authors: Claude Carlet, Palash Sarkar
- Abstract summary: We describe two new classes of functions which provide the best known trade-offs between low computational complexity, nonlinearity and (fast) algebraic immunity.
Appropriately chosen functions from the two new classes provide excellent solutions to the problem of designing filtering functions for use in the nonlinear filter model of stream ciphers.
- Score: 28.8640336189986
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We describe two new classes of functions which provide the presently best known trade-offs between low computational complexity, nonlinearity and (fast) algebraic immunity. The nonlinearity and (fast) algebraic immunity of the new functions substantially improve upon those properties of all previously known efficiently implementable functions. Appropriately chosen functions from the two new classes provide excellent solutions to the problem of designing filtering functions for use in the nonlinear filter model of stream ciphers, or in any other stream cipher that uses Boolean functions to ensure confusion. In particular, for $n\leq 20$, we show that there are functions in our first family whose implementation costs are significantly lower than those of all previously known functions achieving a comparable combination of nonlinearity and (fast) algebraic immunity. Given positive integers $\ell$ and $\delta$, it is possible to choose a function from our second family whose linear bias is provably at most $2^{-\ell}$, whose fast algebraic immunity is at least $\delta$ (based on a conjecture which is well supported by experimental results), and which can be implemented in time and space linear in $\ell$ and $\delta$. Further, the functions in our second family are built using homomorphic-friendly operations, making them well suited to transciphering.
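As a concrete illustration of the quantities traded off above (and not of the paper's constructions), here is a minimal Python sketch: it computes the nonlinearity of an $n$-variable Boolean function from its $2^n$-entry truth table with a fast Walsh-Hadamard transform, and derives the corresponding linear bias $\varepsilon = 1/2 - \mathrm{nl}(f)/2^n$; the 3-variable majority function serves as a toy input.

```python
# Minimal sketch (not the paper's constructions): nonlinearity and linear
# bias of a Boolean function, computed from its 2^n-entry truth table.

def walsh_hadamard(f_table):
    """Unnormalized fast Walsh-Hadamard transform of the +/-1 encoding of f."""
    w = [(-1) ** bit for bit in f_table]  # bit 0 -> +1, bit 1 -> -1
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def nonlinearity(f_table):
    """nl(f) = 2^(n-1) - max_a |W_f(a)| / 2."""
    return len(f_table) // 2 - max(abs(v) for v in walsh_hadamard(f_table)) // 2

def linear_bias(f_table):
    """Bias of the best affine approximation: eps = 1/2 - nl(f) / 2^n."""
    return 0.5 - nonlinearity(f_table) / len(f_table)

# Toy input: the 3-variable majority function (popcount >= 2).
maj3 = [int(bin(x).count("1") >= 2) for x in range(8)]
print(nonlinearity(maj3), linear_bias(maj3))  # -> 2 0.25
```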
Related papers
- OPAF: Optimized Secure Two-Party Computation Protocols for Nonlinear Activation Functions in Recurrent Neural Network [8.825150825838769]
This paper pays special attention to the implementation of non-linear functions in the semi-honest model with two-party settings.
We propose a novel and efficient protocol for the exponential function by using a divide-and-conquer strategy.
Next, we take advantage of the symmetry of sigmoid and Tanh, and fine-tune the inputs to reduce the 2PC building blocks; the symmetry identity is sketched below.
arXiv Detail & Related papers (2024-03-01T02:49:40Z)
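The symmetry exploited above is an ordinary mathematical identity; a minimal plain-Python sketch of it (with no actual two-party computation, which is where the paper's contribution lies) is:

```python
import math

# Plain-Python illustration of the symmetry OPAF exploits: a protocol only
# ever needs sigmoid on non-negative inputs, since sigmoid(-x) = 1 - sigmoid(x).

def sigmoid_nonneg(x):
    assert x >= 0
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid(x):
    # Negative inputs reduce to positive ones via the symmetry.
    return sigmoid_nonneg(x) if x >= 0 else 1.0 - sigmoid_nonneg(-x)

def tanh(x):
    # tanh reduces to sigmoid: tanh(x) = 2 * sigmoid(2x) - 1, and is odd.
    return 2.0 * sigmoid(2.0 * x) - 1.0

assert abs(sigmoid(-1.5) - (1.0 - sigmoid(1.5))) < 1e-12
assert abs(tanh(-1.3) + math.tanh(1.3)) < 1e-12
```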
- A Nearly Optimal and Low-Switching Algorithm for Reinforcement Learning with General Function Approximation [66.26739783789387]
We propose a new algorithm, Monotonic Q-Learning with Upper Confidence Bound (MQL-UCB) for reinforcement learning.
MQL-UCB achieves minimax optimal regret of $\tilde{O}(d\sqrt{HK})$ when $K$ is sufficiently large, and near-optimal policy switching cost.
Our work sheds light on designing provably sample-efficient and deployment-efficient Q-learning with nonlinear function approximation.
arXiv Detail & Related papers (2023-11-26T08:31:57Z)
- Efficient uniform approximation using Random Vector Functional Link networks [0.0]
A Random Vector Functional Link (RVFL) network is a depth-2 neural network with random inner nodes and biases.
We show that an RVFL with ReLU activation can approximate Lipschitz target functions.
Our method of proof is rooted in probability and harmonic analysis; a toy RVFL sketch follows below.
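A minimal NumPy sketch of such a network (width and weight scale are illustrative assumptions, not values from the paper): the inner weights and biases are random and frozen, and only the linear readout is trained, here by least squares on the Lipschitz target $f(x) = |x|$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RVFL: random (frozen) inner weights/biases, ReLU features, and a
# learned linear readout fit by least squares.
def rvfl_fit(X, y, width=200, scale=2.0):
    d = X.shape[1]
    W = rng.normal(0.0, scale, size=(d, width))   # random inner weights
    b = rng.uniform(-scale, scale, size=width)    # random biases
    H = np.maximum(X @ W + b, 0.0)                # ReLU hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # trained outer layer
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    return np.maximum(X @ W + b, 0.0) @ beta

# Approximate the Lipschitz target f(x) = |x| on [-1, 1].
X = rng.uniform(-1, 1, size=(500, 1))
y = np.abs(X[:, 0])
W, b, beta = rvfl_fit(X, y)
Xt = np.linspace(-1, 1, 101).reshape(-1, 1)
print(np.max(np.abs(rvfl_predict(Xt, W, b, beta) - np.abs(Xt[:, 0]))))
```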
arXiv Detail & Related papers (2023-06-30T09:25:03Z)
- Approximation of Nonlinear Functionals Using Deep ReLU Networks [7.876115370275732]
We investigate the approximation power of functional deep neural networks associated with the rectified linear unit (ReLU) activation function.
In addition, we establish rates of approximation of the proposed functional deep ReLU networks under mild regularity conditions.
arXiv Detail & Related papers (2023-04-10T08:10:11Z)
- Digging Deeper: Operator Analysis for Optimizing Nonlinearity of Boolean Functions [8.382710169577447]
We investigate the effects of genetic operators for bit-string encoding in optimizing nonlinearity.
By observing the range of changes an operator can induce, one can design a more effective combination of genetic operators; a toy version of such a setup is sketched below.
arXiv Detail & Related papers (2023-02-12T10:34:01Z)
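A toy version of this kind of study (operator and population choices are illustrative assumptions, not the paper's experimental setup): truth tables of $n$-variable functions are bit strings of length $2^n$, recombined and mutated, with nonlinearity as the fitness.

```python
import random

# Toy bit-string operators for Boolean function optimization: truth tables
# are 2^n-bit strings; fitness is nonlinearity via a Walsh-Hadamard transform.

def nonlinearity(table):
    w = [(-1) ** bit for bit in table]
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return len(table) // 2 - max(abs(v) for v in w) // 2

def bit_flip(table, rate=0.05):
    return [b ^ (random.random() < rate) for b in table]

def one_point_crossover(t1, t2):
    cut = random.randrange(1, len(t1))
    return t1[:cut] + t2[cut:]

random.seed(1)
n = 6
pop = [[random.randint(0, 1) for _ in range(1 << n)] for _ in range(20)]
for _ in range(200):  # simple steady-state loop, purely illustrative
    p1, p2 = random.sample(pop, 2)
    child = bit_flip(one_point_crossover(p1, p2))
    pop.sort(key=nonlinearity)  # pop[0] is the worst individual
    if nonlinearity(child) > nonlinearity(pop[0]):
        pop[0] = child
print(max(nonlinearity(f) for f in pop))
```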
- Provable General Function Class Representation Learning in Multitask Bandits and MDPs [58.624124220900306]
Multitask representation learning is a popular approach in reinforcement learning for boosting sample efficiency.
In this work, we extend the analysis to general function class representations.
We theoretically validate the benefit of multitask representation learning within general function classes for bandits and linear MDPs.
arXiv Detail & Related papers (2022-05-31T11:36:42Z)
- Submodular + Concave [53.208470310734825]
It has been well established that first-order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing functions of the form $F(x) = G(x) + C(x)$ over a convex body, where $G$ is a smooth DR-submodular function and $C$ is a smooth concave function.
This class of functions is an extension of both concave functions and continuous DR-submodular functions, for which no guarantee was previously known; a small numeric illustration follows below.
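A tiny numeric illustration of this problem class (not the paper's algorithms or guarantees): take $G(x) = 1 - \prod_i (1 - x_i)$, a monotone continuous DR-submodular function, and $C(x) = -\lVert x - a \rVert^2$, a smooth concave function, and run projected gradient ascent over the box $[0,1]^n$.

```python
import numpy as np

# Illustration of F = G + C: G(x) = 1 - prod(1 - x_i) is monotone continuous
# DR-submodular, C(x) = -||x - a||^2 is smooth concave; maximize over [0,1]^n
# with projected gradient ascent (not the paper's algorithm).

a = np.array([0.2, 0.9, 0.5])

def grad_G(x):
    # dG/dx_i = prod_{j != i} (1 - x_j)
    return np.array([np.prod(np.delete(1.0 - x, i)) for i in range(len(x))])

def grad_C(x):
    return -2.0 * (x - a)

x = np.full(3, 0.5)
for _ in range(500):
    x = np.clip(x + 0.05 * (grad_G(x) + grad_C(x)), 0.0, 1.0)

F = (1.0 - np.prod(1.0 - x)) - np.sum((x - a) ** 2)
print(x.round(3), round(F, 4))
```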
arXiv Detail & Related papers (2021-06-09T01:59:55Z)
- Piecewise Linear Regression via a Difference of Convex Functions [50.89452535187813]
We present a new piecewise linear regression methodology that fits a difference of convex functions (DC functions) to the data.
We empirically validate the method, showing it to be practically implementable and to have performance comparable to existing regression/classification methods on real-world datasets; a worked DC decomposition follows below.
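A worked toy example of the DC representation the method rests on (not the paper's fitting procedure): the hat function $\min(x, 1-x)$ equals $g(x) - h(x)$ with the convex max-affine functions $g(x) = x$ and $h(x) = \max(0, 2x - 1)$, verified numerically.

```python
import numpy as np

# Every continuous piecewise-linear function is a difference of two convex
# max-affine functions. Here: min(x, 1 - x) = x - max(0, 2x - 1).

def g(x):
    return x  # max of a single affine piece: trivially convex

def h(x):
    return np.maximum(0.0, 2.0 * x - 1.0)  # max-affine, convex

xs = np.linspace(-1.0, 2.0, 301)
assert np.allclose(g(xs) - h(xs), np.minimum(xs, 1.0 - xs))
print("DC decomposition verified on", len(xs), "points")
```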
arXiv Detail & Related papers (2020-07-05T18:58:47Z)
- Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions [84.49087114959872]
We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions.
In particular, we study Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions for which the chain rule of calculus holds.
arXiv Detail & Related papers (2020-02-10T23:23:04Z)
- Invariant Feature Coding using Tensor Product Representation [75.62232699377877]
We prove that the group-invariant feature vector contains sufficient discriminative information when learning a linear classifier.
A novel feature model that explicitly considers group actions is proposed for principal component analysis and k-means clustering; a generic orbit-averaging sketch follows below.
arXiv Detail & Related papers (2019-06-05T07:15:17Z)
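A generic illustration of group-invariant features (not the paper's tensor-product construction): averaging a feature map over the orbit of a finite group action, here cyclic shifts of a 1-D signal, yields a feature vector invariant to that action; the feature map itself is an illustrative random projection.

```python
import numpy as np

# Orbit averaging: mean of phi(g.x) over all cyclic shifts g is invariant
# to cyclic shifts of x. phi is a fixed random ReLU projection (illustrative).

def orbit_averaged_features(x):
    rng = np.random.default_rng(42)      # fixed seed so phi is shared
    P = rng.normal(size=(4, len(x)))     # random linear feature map
    feats = [np.maximum(P @ np.roll(x, s), 0.0) for s in range(len(x))]
    return np.mean(feats, axis=0)

x = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
shifted = np.roll(x, 2)
# Invariance check: identical features for a signal and its cyclic shift.
assert np.allclose(orbit_averaged_features(x), orbit_averaged_features(shifted))
print("orbit-averaged features are shift-invariant")
```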