Are Neural Networks Collision Resistant?
- URL: http://arxiv.org/abs/2509.20262v1
- Date: Wed, 24 Sep 2025 15:50:41 GMT
- Title: Are Neural Networks Collision Resistant?
- Authors: Marco Benedetti, Andrej Bogdanov, Enrico M. Malatesta, Marc Mézard, Gianmarco Perrupato, Alon Rosen, Nikolaj I. Schwartzbach, Riccardo Zecchina
- Abstract summary: We study the algorithmic complexity of finding a collision in a single-layer neural net. For binary perceptrons with oscillating activation functions, we establish the emergence of an overlap gap property.
- Score: 3.2522957478427283
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When neural networks are trained to classify a dataset, one finds a set of weights from which the network produces a label for each data point. We study the algorithmic complexity of finding a collision in a single-layer neural net, where a collision is defined as two distinct sets of weights that assign the same labels to all data. For binary perceptrons with oscillating activation functions, we establish the emergence of an overlap gap property in the space of collisions. This is a topological property believed to be a barrier to efficient algorithms. The hardness is supported by numerical experiments with approximate message passing algorithms, which stop working well below the threshold predicted by our analysis. Neural networks provide a new category of candidate collision-resistant functions, which for some parameter settings depart from constructions based on lattices. Beyond relevance to cryptography, our work uncovers new forms of computational hardness emerging in large neural networks, which may be of independent interest.
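To make the object of study concrete, here is a minimal Python sketch of the collision problem for a binary perceptron with an oscillating activation. The activation sign(cos(kappa * h)), the 1/sqrt(n) preactivation scaling, the value of kappa, and the gauge fix w[0] = +1 are illustrative assumptions on my part; the paper's exact model and parameters may differ.

```python
import itertools
import numpy as np

def labels(w, X, kappa=3.0):
    """Label vector of a single-layer net on data X (one row per data
    point), using an oscillating activation sign(cos(kappa * h)) with
    preactivation h = <x, w> / sqrt(n). The activation and scaling
    here are assumptions for illustration."""
    n = len(w)
    return np.sign(np.cos(kappa * (X @ w) / np.sqrt(n)))

def find_collision(X, kappa=3.0):
    """Brute-force search for a collision: two distinct binary weight
    vectors in {-1,+1}^n that assign identical labels to every row of
    X. Exponential in n -- the paper asks when *efficient* algorithms
    can find such pairs."""
    n = X.shape[1]
    seen = {}
    # Fix w[0] = +1 to quotient out the trivial w -> -w collision that
    # an even activation such as cos would otherwise induce (assumed).
    for bits in itertools.product((-1, 1), repeat=n - 1):
        w = np.array((1,) + bits)
        key = tuple(labels(w, X, kappa))
        if key in seen:
            return seen[key], w
        seen[key] = w
    return None  # no collision among the enumerated weights

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 12))  # m = 6 Gaussian data points, n = 12 weights
print(find_collision(X))  # pigeonhole guarantees a collision at this size
```

For context, the overlap gap property mentioned in the abstract says, roughly, that with high probability no pair of collisions has a normalized overlap falling in some intermediate interval: solution pairs are either near-identical or far apart, a fragmented geometry believed (and in some settings proven) to obstruct broad families of efficient algorithms, including message passing.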
Related papers
- Semantic Loss Functions for Neuro-Symbolic Structured Prediction [74.18322585177832]
We discuss the semantic loss, which injects symbolically defined knowledge about the output structure into training.
It is agnostic to the arrangement of the symbols, and depends only on the semantics expressed thereby.
It can be combined with both discriminative and generative neural models.
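A minimal Python sketch of the core idea, as I read it: the semantic loss is the negative log probability that the network's independent output probabilities, when sampled, satisfy the symbolic constraint. The brute-force enumeration and the one-hot constraint below are illustrative assumptions; the paper's implementation compiles constraints into circuits rather than enumerating assignments.

```python
import itertools
import numpy as np

def semantic_loss(probs, constraint):
    """Negative log probability that independent Bernoulli outputs
    with success probabilities `probs` satisfy `constraint` (a
    predicate on a bit tuple). Enumerates all 2^n assignments, so
    this sketch only works for small n."""
    n = len(probs)
    mass = 0.0
    for bits in itertools.product((0, 1), repeat=n):
        if constraint(bits):
            mass += np.prod([pi if b else 1 - pi
                             for b, pi in zip(bits, probs)])
    return -np.log(mass)

# Example: the "exactly one output is on" (one-hot) constraint.
# The loss is low when probability mass concentrates on one-hot
# assignments, regardless of *which* symbol carries it -- hence
# "agnostic to the arrangement of the symbols".
probs = np.array([0.9, 0.05, 0.05])
print(semantic_loss(probs, lambda bits: sum(bits) == 1))
```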
arXiv Detail & Related papers (2024-05-12T22:18:25Z) - Memorization With Neural Nets: Going Beyond the Worst Case [5.03863830033243]
In practice, deep neural networks are often able to easily interpolate their training data. We introduce a simple randomized algorithm that constructs an interpolating three-layer neural network in polynomial time. We obtain guarantees that are independent of the number of samples and hence move beyond worst-case memorization capacity bounds.
arXiv Detail & Related papers (2023-09-30T10:06:05Z) - Multilayer Multiset Neuronal Networks -- MMNNs [55.2480439325792]
The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
arXiv Detail & Related papers (2023-08-28T12:55:13Z) - Provable Data Subset Selection For Efficient Neural Network Training [73.34254513162898]
We introduce the first algorithm to construct coresets for RBFNNs, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network.
We then perform empirical evaluations on function approximation and dataset subset selection on popular network architectures and data sets.
arXiv Detail & Related papers (2023-03-09T10:08:34Z) - Graph Neural Network-based Early Bearing Fault Detection [0.18275108630751835]
A novel graph neural network-based fault detection method is proposed.
It builds a bridge between AI and mechanical systems running in the real world.
We find that the proposed method can successfully detect faulty objects that are mixed in the normal object region.
arXiv Detail & Related papers (2022-04-24T08:54:55Z) - Dive into Layers: Neural Network Capacity Bounding using Algebraic Geometry [55.57953219617467]
We show that the learnability of a neural network is directly related to its size.
We use Betti numbers to measure the topological geometric complexity of input data and the neural network.
We perform experiments on the real-world dataset MNIST, and the results verify our analysis and conclusion.
arXiv Detail & Related papers (2021-09-03T11:45:51Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can solve this separation problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - Artificial Neural Networks generated by Low Discrepancy Sequences [59.51653996175648]
We generate artificial neural networks as random walks on a dense network graph.
Such networks can be trained sparse from scratch, avoiding the expensive procedure of training a dense network and compressing it afterwards.
We demonstrate that the artificial neural networks generated by low discrepancy sequences can achieve an accuracy within reach of their dense counterparts at a much lower computational complexity.
arXiv Detail & Related papers (2021-03-05T08:45:43Z) - Finding hidden-feature depending laws inside a data set and classifying it using Neural Network [0.0]
The logcosh loss function for neural networks was developed to combine the advantage of the absolute error loss, which does not overweight outliers, with the advantage of the mean squared error, which has a continuous derivative near the mean; a brief sketch of this loss follows at the end of the list.
This work suggests a method that uses artificial neural networks with the logcosh loss to find the branches of set-valued mappings in parameter-outcome sample sets and to classify the samples according to those branches.
arXiv Detail & Related papers (2021-01-25T21:37:37Z)
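As noted in the last entry above, a minimal Python sketch of the logcosh loss. The numerically stable identity below is a standard reformulation, not taken from the paper; the comments spell out why the loss is quadratic near zero (like the mean squared error) and linear in the tails (like the absolute error).

```python
import numpy as np

def logcosh_loss(y_true, y_pred):
    """Mean of log(cosh(residual)), computed via the numerically
    stable identity log(cosh(r)) = |r| + log1p(exp(-2|r|)) - log(2),
    which avoids overflowing cosh for large residuals."""
    r = np.abs(y_pred - y_true)
    return np.mean(r + np.log1p(np.exp(-2.0 * r)) - np.log(2.0))

# Near zero the loss tracks r**2 / 2 (smooth, like the mean squared
# error); for large |r| it grows like |r| - log(2) (linear, so single
# outliers are not overweighted, like the absolute error).
for r in (0.1, 1.0, 10.0):
    print(r, logcosh_loss(np.array([0.0]), np.array([r])))
```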