Use of Simple Arithmetic Operations to Construct Efficiently Implementable Boolean functions Possessing High Nonlinearity and Good Resistance to Algebraic Attacks
- URL: http://arxiv.org/abs/2408.11583v2
- Date: Sun, 12 Jan 2025 11:09:49 GMT
- Title: Use of Simple Arithmetic Operations to Construct Efficiently Implementable Boolean functions Possessing High Nonlinearity and Good Resistance to Algebraic Attacks
- Authors: Claude Carlet, Palash Sarkar
- Abstract summary: We show that there are functions in the family achieving a combination of nonlinearity and (fast) algebraic immunity.
The main novelty of our approach is to apply a judicious combination of simple integer and binary field arithmetic to Boolean function construction.
- Score: 28.8640336189986
- License:
- Abstract: We describe a new class of Boolean functions which provide the presently best known trade-off between low computational complexity, nonlinearity and (fast) algebraic immunity. In particular, for $n\leq 20$, we show that there are functions in the family achieving a combination of nonlinearity and (fast) algebraic immunity which is superior to what is achieved by any other efficiently implementable function. The main novelty of our approach is to apply a judicious combination of simple integer and binary field arithmetic to Boolean function construction.
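The nonlinearity measure referenced in the abstract is standard: it is the minimum Hamming distance from the function to the set of affine functions, computed from the Walsh-Hadamard spectrum. The sketch below is not the paper's construction; it is a minimal illustration of how nonlinearity is evaluated from a truth table, using the fast Walsh-Hadamard transform.

```python
def walsh_hadamard(truth_table):
    """Fast Walsh-Hadamard transform of a Boolean truth table.

    The table is first mapped to its +1/-1 sign form; the returned
    list holds the Walsh coefficients W_f(a) for all a.
    """
    w = [1 - 2 * b for b in truth_table]  # bit 0 -> +1, bit 1 -> -1
    size = len(w)
    h = 1
    while h < size:
        for i in range(0, size, 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def nonlinearity(truth_table):
    """NL(f) = 2^(n-1) - max_a |W_f(a)| / 2."""
    n = len(truth_table).bit_length() - 1
    spectrum = walsh_hadamard(truth_table)
    return 2 ** (n - 1) - max(abs(v) for v in spectrum) // 2

# Example: the 3-variable majority function (truth table indexed by x).
majority3 = [0, 0, 0, 1, 0, 1, 1, 1]
print(nonlinearity(majority3))
```

Efficient implementations (and the algebraic-immunity computation, which requires linear algebra over GF(2) to find annihilators) are available in libraries such as SageMath; the above only conveys the definition.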
Related papers
- A Novel Unified Parametric Assumption for Nonconvex Optimization [53.943470475510196]
Nonconvex optimization is central to machine learning, but the general framework of nonconvexity yields convergence guarantees that are too pessimistic compared to practice.
We introduce a novel unified assumption for nonconvex algorithms.
arXiv Detail & Related papers (2025-02-17T21:25:31Z) - Degree is Important: On Evolving Homogeneous Boolean Functions [32.90791284928444]
This paper investigates the use of Evolutionary Algorithms to design homogeneous bent Boolean functions.
While EAs manage to find quadratic homogeneous bent functions, none of the approaches result in cubic homogeneous bent functions.
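Bentness, the property targeted in the entry above, is directly checkable from the Walsh spectrum: an n-variable function (n even) is bent iff every Walsh coefficient has absolute value 2^(n/2). The sketch below is not the evolutionary approach from the paper; it verifies the classical quadratic bent function x1*x2 XOR x3*x4 by brute force (the function names are illustrative only).

```python
def walsh_coefficient(f, n, a):
    """W_f(a) = sum over x in GF(2)^n of (-1)^(f(x) XOR <a, x>)."""
    return sum((-1) ** (f(x) ^ (bin(a & x).count("1") & 1))
               for x in range(2 ** n))

def quad(x):
    """Quadratic form x1*x2 XOR x3*x4 on 4 variables (bits of x)."""
    b = [(x >> i) & 1 for i in range(4)]
    return (b[0] & b[1]) ^ (b[2] & b[3])

spectrum = [walsh_coefficient(quad, 4, a) for a in range(16)]
# Bent iff the spectrum is flat with |W_f(a)| = 2^(n/2) = 4 for all a.
print(all(abs(w) == 4 for w in spectrum))
```

Brute force is exponential in n, so it only serves as a definitional check; search methods such as the EAs discussed above are used precisely because the space grows doubly exponentially.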
arXiv Detail & Related papers (2025-01-30T15:04:14Z) - Extension of Symmetrized Neural Network Operators with Fractional and Mixed Activation Functions [0.0]
We propose a novel extension to symmetrized neural network operators by incorporating fractional and mixed activation functions.
Our framework introduces a fractional exponent in the activation functions, allowing adaptive non-linear approximations with improved accuracy.
arXiv Detail & Related papers (2025-01-17T14:24:25Z) - Operator Learning Using Random Features: A Tool for Scientific Computing [3.745868534225104]
Supervised operator learning centers on the use of training data to estimate maps between infinite-dimensional spaces.
This paper introduces the function-valued random features method.
It leads to a supervised operator learning architecture that is practical for nonlinear problems.
arXiv Detail & Related papers (2024-08-12T23:10:39Z) - Transformers Implement Functional Gradient Descent to Learn Non-Linear Functions In Context [44.949726166566236]
We show that (non-linear) Transformers naturally learn to implement gradient descent in function space.
We also show that the optimal choice of non-linear activation depends in a natural way on the class of functions that need to be learned.
arXiv Detail & Related papers (2023-12-11T17:05:25Z) - Digging Deeper: Operator Analysis for Optimizing Nonlinearity of Boolean Functions [8.382710169577447]
We investigate the effects of genetic operators for bit-string encoding in optimizing nonlinearity.
By observing the range of possible changes an operator can provide, one can use this information to design a more effective combination of genetic operators.
arXiv Detail & Related papers (2023-02-12T10:34:01Z) - Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient [65.08966446962845]
Offline reinforcement learning, which aims at optimizing decision-making strategies with historical data, has been extensively applied in real-life applications.
We take a step by considering offline reinforcement learning with differentiable function class approximation (DFA).
Most importantly, we show offline differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning algorithm.
arXiv Detail & Related papers (2022-10-03T07:59:42Z) - Provable General Function Class Representation Learning in Multitask Bandits and MDPs [58.624124220900306]
Multitask representation learning is a popular approach in reinforcement learning to boost the sample efficiency.
In this work, we extend the analysis to general function class representations.
We theoretically validate the benefit of multitask representation learning within general function class for bandits and linear MDP.
arXiv Detail & Related papers (2022-05-31T11:36:42Z) - A Boosting Approach to Reinforcement Learning [59.46285581748018]
We study efficient algorithms for reinforcement learning in decision processes whose complexity is independent of the number of states.
We give an efficient algorithm that is capable of improving the accuracy of such weak learning methods.
arXiv Detail & Related papers (2021-08-22T16:00:45Z) - On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the learning problem.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - Invariant Feature Coding using Tensor Product Representation [75.62232699377877]
We prove that the group-invariant feature vector contains sufficient discriminative information when learning a linear classifier.
A novel feature model that explicitly considers group action is proposed for principal component analysis and k-means clustering.
arXiv Detail & Related papers (2019-06-05T07:15:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.