A Search for Nonlinear Balanced Boolean Functions by Leveraging
Phenotypic Properties
- URL: http://arxiv.org/abs/2306.09190v1
- Date: Thu, 15 Jun 2023 15:16:19 GMT
- Title: A Search for Nonlinear Balanced Boolean Functions by Leveraging
Phenotypic Properties
- Authors: Bruno Gašperov, Marko Đurasević, Domagoj Jakobović
- Abstract summary: We consider the problem of finding perfectly balanced Boolean functions with high non-linearity values.
Such functions have extensive applications in domains such as cryptography and error-correcting coding theory.
We provide an approach for finding such functions by a local search method that exploits the structure of the underlying problem.
- Score: 3.265773263570237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider the problem of finding perfectly balanced Boolean
functions with high non-linearity values. Such functions have extensive
applications in domains such as cryptography and error-correcting coding
theory. We provide an approach for finding such functions by a local search
method that exploits the structure of the underlying problem. Previous attempts
in this vein typically focused on using the properties of the fitness landscape
to guide the search. We opt for a different path in which we leverage the
phenotype landscape (the mapping from genotypes to phenotypes) instead. In the
context of the underlying problem, the phenotypes are represented by
Walsh-Hadamard spectra of the candidate solutions (Boolean functions). We
propose a novel selection criterion, under which the phenotypes are compared
directly, and test whether its use increases the convergence speed (measured by
the number of required spectra calculations) when compared to a competitive
fitness function used in the literature. The results reveal promising
convergence speed improvements for Boolean functions of sizes $N=6$ to $N=9$.
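The phenotype used by the paper is the Walsh-Hadamard spectrum, from which both properties of interest can be read off: a function is balanced iff its spectrum value at zero vanishes, and its nonlinearity is $2^{n-1} - \max_a |W_f(a)|/2$. A minimal sketch of these definitions (a naive $O(4^n)$ transform, not the authors' code or their selection criterion):

```python
# Sketch of the "phenotype" from the abstract: the Walsh-Hadamard spectrum
# of a Boolean function, plus balancedness and nonlinearity derived from it.
# Naive O(4^n) implementation for illustration only.
from itertools import product

def walsh_hadamard_spectrum(truth_table):
    """Spectrum of a Boolean function given as a 0/1 truth table of length 2^n.
    W_f(a) = sum over x of (-1)^(f(x) XOR a.x)."""
    n = len(truth_table).bit_length() - 1
    spectrum = []
    for a in range(2 ** n):
        total = 0
        for x in range(2 ** n):
            dot = bin(a & x).count("1") % 2  # inner product a.x over GF(2)
            total += (-1) ** (truth_table[x] ^ dot)
        spectrum.append(total)
    return spectrum

def is_balanced(spectrum):
    # f is balanced iff the spectrum value at a = 0 is zero.
    return spectrum[0] == 0

def nonlinearity(spectrum):
    # NL(f) = 2^(n-1) - max_a |W_f(a)| / 2
    return len(spectrum) // 2 - max(abs(w) for w in spectrum) // 2

# Example: f(x1, x2, x3) = x1*x2 XOR x3 is balanced with nonlinearity 2.
tt = [(x1 & x2) ^ x3 for x1, x2, x3 in product((0, 1), repeat=3)]
spec = walsh_hadamard_spectrum(tt)
print(is_balanced(spec))   # -> True
print(nonlinearity(spec))  # -> 2
```

The proposed selection criterion compares such spectra directly rather than collapsing them into a scalar fitness value; counting calls to the transform gives the convergence measure reported above.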
Related papers
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap where known algorithms require a signal-to-noise ratio bigger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- Digging Deeper: Operator Analysis for Optimizing Nonlinearity of Boolean Functions [8.382710169577447]
We investigate the effects of genetic operators for bit-string encoding in optimizing nonlinearity.
By observing the range of possible changes an operator can provide, one can use this information to design a more effective combination of genetic operators.
arXiv Detail & Related papers (2023-02-12T10:34:01Z)
- Graph Neural Networks with Adaptive Readouts [5.575293536755126]
We show the effectiveness of neural readouts on more than 40 datasets spanning different domains and graph characteristics.
We observe a consistent improvement over standard readouts across varying numbers of neighborhood aggregation iterations and different convolutional operators.
arXiv Detail & Related papers (2022-11-09T15:21:09Z)
- Evolutionary Construction of Perfectly Balanced Boolean Functions [7.673465837624365]
We investigate the use of Genetic Programming (GP) and Genetic Algorithms (GA) to construct Boolean functions that satisfy a property, perfect balancedness, along with a good nonlinearity profile.
Surprisingly, the results show that GA with the weightwise balanced representation outperforms GP with the classical truth table phenotype in finding highly nonlinear weightwise perfectly balanced (WPB) functions.
arXiv Detail & Related papers (2022-02-16T18:03:04Z)
- Variational Physics Informed Neural Networks: the role of quadratures and test functions [0.0]
We analyze how Gaussian or Newton-Cotes quadrature rules of different precisions and piecewise test functions of different degrees affect the convergence rate of Variational Physics Informed Neural Networks (VPINN)
Using a Petrov-Galerkin framework relying on an inf-sup condition, we derive an a priori error estimate in the energy norm between the exact solution and a suitable high-order piecewise interpolant of a computed neural network.
arXiv Detail & Related papers (2021-09-05T10:06:35Z)
- Feature Cross Search via Submodular Optimization [58.15569071608769]
We study feature cross search as a fundamental primitive in feature engineering.
We show that there exists a simple greedy $(1-1/e)$-approximation algorithm for this problem.
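The greedy algorithm referred to here is the classic one for maximizing a monotone submodular function under a cardinality constraint: repeatedly add the element with the largest marginal gain. A generic sketch (the coverage function and set names below are illustrative stand-ins, not from the paper):

```python
# Generic greedy (1 - 1/e)-approximation for monotone submodular maximization
# under a cardinality constraint; the paper applies this idea to feature
# cross search. The set-cover objective below is only an illustration.

def greedy_submodular(items, f, k):
    """Pick up to k items, each time adding the one with the largest
    marginal gain f(S | {e}) - f(S)."""
    chosen = set()
    for _ in range(k):
        best_item, best_gain = None, 0
        for e in items:
            if e in chosen:
                continue
            gain = f(chosen | {e}) - f(chosen)
            if gain > best_gain:
                best_item, best_gain = e, gain
        if best_item is None:  # no element improves the objective
            break
        chosen.add(best_item)
    return chosen

# Illustrative monotone submodular objective: set cover.
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
coverage = lambda S: len(set().union(*(sets[s] for s in S))) if S else 0
print(greedy_submodular(sets.keys(), coverage, 2))  # -> {'a', 'c'}
```

Submodularity (diminishing marginal gains) is exactly what makes this simple loop come within a $1-1/e$ factor of the optimum.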
arXiv Detail & Related papers (2021-07-05T16:58:31Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Hardness of Random Optimization Problems for Boolean Circuits, Low-Degree Polynomials, and Langevin Dynamics [78.46689176407936]
We show that families of algorithms fail to produce nearly optimal solutions with high probability.
For the case of Boolean circuits, our results improve the state-of-the-art bounds known in circuit complexity theory.
arXiv Detail & Related papers (2020-04-25T05:45:59Z)
- Implicit differentiation of Lasso-type models for hyperparameter optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
arXiv Detail & Related papers (2020-02-20T18:43:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.