Look into the Mirror: Evolving Self-Dual Bent Boolean Functions
- URL: http://arxiv.org/abs/2311.11884v1
- Date: Mon, 20 Nov 2023 16:20:16 GMT
- Title: Look into the Mirror: Evolving Self-Dual Bent Boolean Functions
- Authors: Claude Carlet, Marko Đurasevic, Domagoj Jakobovic, Luca Mariot, Stjepan Picek
- Abstract summary: This paper experiments with evolutionary algorithms with the goal of evolving (anti-)self-dual bent Boolean functions.
We successfully construct self-dual bent functions for each dimension.
We also tried evolving secondary constructions for self-dual bent functions, but this direction provided no successful results.
- Score: 35.305121158674964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bent Boolean functions are important objects in cryptography and coding
theory, and there are several general approaches for constructing such
functions. Metaheuristics proved to be a strong choice as they can provide many
bent functions, even when the size of the Boolean function is large (e.g., more
than 20 inputs). While bent Boolean functions represent only a small part of
all Boolean functions, there are several subclasses of bent functions providing
specific properties and challenges. One of the most interesting subclasses
comprises (anti-)self-dual bent Boolean functions. This paper provides a
detailed experimentation with evolutionary algorithms with the goal of evolving
(anti-)self-dual bent Boolean functions. We experiment with two encodings and
two fitness functions to directly evolve self-dual bent Boolean functions. Our
experiments consider Boolean functions with sizes of up to 16 inputs, and we
successfully construct self-dual bent functions for each dimension. Moreover,
when comparing with the evolution of bent Boolean functions, we notice that the
difficulty for evolutionary algorithms is rather similar. Finally, we also
tried evolving secondary constructions for self-dual bent functions, but this
direction provided no successful results.
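As a minimal illustration of the properties being evolved (a sketch, not the authors' implementation; the function names `walsh_hadamard` and `classify` and the truth-table encoding are our assumptions), the Python snippet below checks a candidate truth table for bentness and (anti-)self-duality via the fast Walsh-Hadamard transform. Recall the standard definitions: a function $f$ on an even number $n$ of inputs is bent iff $|W_f(a)| = 2^{n/2}$ for all $a$; its dual $\tilde{f}$ satisfies $W_f(a) = 2^{n/2}(-1)^{\tilde{f}(a)}$; and $f$ is self-dual (anti-self-dual) iff $\tilde{f} = f$ (resp. $\tilde{f} = f \oplus 1$).

```python
# A minimal sketch, assuming the function is given as a truth table
# f: a list of 0/1 values of length 2^n.

def walsh_hadamard(f, n):
    """Fast Walsh-Hadamard transform of the polar truth table of f.

    Returns w with w[a] = sum_x (-1)^(f(x) XOR a.x), where a.x is the
    inner product of a and x over GF(2).
    """
    w = [(-1) ** f[x] for x in range(1 << n)]  # polar form: 0 -> +1, 1 -> -1
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                u, v = w[j], w[j + h]
                w[j], w[j + h] = u + v, u - v
        h *= 2
    return w

def classify(f, n):
    """Classify f as 'self-dual bent', 'anti-self-dual bent', 'bent', or 'not bent'."""
    assert n % 2 == 0, "bent functions exist only for an even number of inputs"
    w = walsh_hadamard(f, n)
    s = 1 << (n // 2)  # 2^(n/2): the flat spectrum magnitude of a bent function
    if any(abs(c) != s for c in w):
        return "not bent"
    # For bent f, W_f(a) = 2^(n/2) * (-1)^(dual(a)) defines the dual function.
    dual = [0 if c == s else 1 for c in w]
    if dual == list(f):
        return "self-dual bent"
    if all(d != v for d, v in zip(dual, f)):
        return "anti-self-dual bent"
    return "bent"

# Example: the inner-product function f(x) = x3*x2 XOR x1*x0 on n = 4 inputs
# is a textbook self-dual bent function.
n = 4
f = [(((x >> 3) & (x >> 2)) ^ ((x >> 1) & x)) & 1 for x in range(1 << n)]
print(classify(f, n))  # -> self-dual bent
```

In an evolutionary setting, a check like `classify` (or a distance to the flat Walsh spectrum and to the dual) would be folded into the fitness function rather than used as a binary test.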
Related papers
- Adaptive Language-Guided Abstraction from Contrastive Explanations [53.48583372522492]
It is necessary to determine which features of the environment are relevant before deciding how these features should be used to compute reward.
End-to-end methods for joint feature and reward learning often yield brittle reward functions that are sensitive to spurious state features.
This paper describes a method named ALGAE, which uses language models to iteratively identify human-meaningful features.
arXiv Detail & Related papers (2024-09-12T16:51:58Z) - Shift-invariant functions and almost liftings [0.0]
We investigate shift-invariant vectorial Boolean functions on $n$ bits that are lifted from Boolean functions on $k$ bits, for $k \leq n$.
We show that if a Boolean function with diameter $k$ is an almost lifting, then the maximum number of collisions of its lifted functions is $2^{k-1}$ for any $n$.
We search for functions in the class of almost liftings that have good cryptographic properties and for which the non-bijectivity does not cause major security weaknesses.
arXiv Detail & Related papers (2024-07-16T17:23:27Z) - A Systematic Evaluation of Evolving Highly Nonlinear Boolean Functions
in Odd Sizes [35.305121158674964]
We consider the problem of evolving highly nonlinear Boolean functions in odd sizes.
Finding optimal solutions is impossible except for the smallest tested size.
We find a Boolean function in nine inputs with nonlinearity 241 (see the nonlinearity note after this list).
arXiv Detail & Related papers (2024-02-15T13:39:34Z) - A New Angle: On Evolving Rotation Symmetric Boolean Functions [32.90791284928444]
This paper uses several evolutionary algorithms to evolve rotation symmetric Boolean functions with different properties.
Surprisingly, we find bitstring and floating point encodings work better than the tree encoding.
arXiv Detail & Related papers (2023-11-20T16:16:45Z) - Emergent Modularity in Pre-trained Transformers [127.08792763817496]
We consider two main characteristics of modularity: functional specialization of neurons and function-based neuron grouping.
We study how modularity emerges during pre-training, and find that the modular structure is stabilized at the early stage.
It suggests that Transformers first construct the modular structure and then learn fine-grained neuron functions.
arXiv Detail & Related papers (2023-05-28T11:02:32Z) - Special functions in quantum phase estimation [61.12008553173672]
We focus on two special functions. One is the prolate spheroidal wave function, which approximately gives the maximum probability that the difference between the true parameter and the estimate is smaller than a certain threshold.
The other is the Mathieu function, which exactly gives the optimum estimation under the energy constraint.
arXiv Detail & Related papers (2023-02-14T08:33:24Z) - Evolving Constructions for Balanced, Highly Nonlinear Boolean Functions [37.84234862910533]
We show that genetic programming can evolve constructions resulting in balanced Boolean functions with high nonlinearity.
Our results show that GP can find constructions that generalize well, i.e., result in the required functions for multiple tested sizes.
Interestingly, the simplest solution found by GP is a particular case of the well-known indirect sum construction.
arXiv Detail & Related papers (2022-02-17T16:33:24Z) - Center Smoothing for Certifiably Robust Vector-Valued Functions [59.46976586742266]
We produce certifiable robustness for vector-valued functions, bounding the change in output caused by a small change in input.
We demonstrate the effectiveness of our method on multiple learning tasks involving vector-valued functions with a wide range of input and output dimensionalities.
arXiv Detail & Related papers (2021-02-19T01:34:48Z) - A quantum algorithm to estimate the Gowers $U_2$ norm and linearity
testing of Boolean functions [6.8072479152471566]
We propose a quantum algorithm to estimate the Gowers $U_2$ norm of a Boolean function.
We extend it into a second algorithm to distinguish between linear Boolean functions and Boolean functions that are $\epsilon$-far from the set of linear Boolean functions.
arXiv Detail & Related papers (2020-06-30T04:39:20Z) - Space of Functions Computed by Deep-Layered Machines [74.13735716675987]
We study the space of functions computed by random-layered machines, including deep neural networks and Boolean circuits.
Investigating the distribution of Boolean functions computed on the recurrent and layer-dependent architectures, we find that it is the same in both models.
arXiv Detail & Related papers (2020-04-19T18:31:03Z)
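A note on the nonlinearity values quoted in the items above (a standard fact of Boolean function theory, not a claim of the listed papers): the nonlinearity of an $n$-variable Boolean function $f$ is determined by its Walsh-Hadamard spectrum,

$$ \mathrm{nl}(f) = 2^{n-1} - \frac{1}{2}\max_{a \in \mathbb{F}_2^n} |W_f(a)|, $$

and Parseval's relation forces $\max_a |W_f(a)| \geq 2^{n/2}$, giving the covering-radius bound $\mathrm{nl}(f) \leq 2^{n-1} - 2^{n/2-1}$. Bent functions attain this bound for even $n$; for $n = 9$ the bound evaluates to $2^8 - 2^{3.5} \approx 244.7$, so the nonlinearity-241 function found above sits close to this upper limit.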
This list is automatically generated from the titles and abstracts of the papers on this site.