Deterministic versus stochastic dynamical classifiers: opposing random adversarial attacks with noise
- URL: http://arxiv.org/abs/2409.13470v1
- Date: Fri, 20 Sep 2024 12:59:16 GMT
- Title: Deterministic versus stochastic dynamical classifiers: opposing random adversarial attacks with noise
- Authors: Lorenzo Chicchi, Duccio Fanelli, Diego Febbe, Lorenzo Buffoni, Francesca Di Patti, Lorenzo Giambagli, Raffaele Marino
- Abstract summary: The Continuous-Variable Firing Rate (CVFR) model is used to describe the intertangled dynamics of excitatory biological neurons.
The model is supplied with a set of planted attractors which are self-consistently embedded in the inter-nodes coupling matrix.
A stochastic variant of the CVFR model is also studied and found to be robust to adversarial random attacks, which corrupt the items to be classified.
- Score: 0.8795040582681393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Continuous-Variable Firing Rate (CVFR) model, widely used in neuroscience to describe the intertangled dynamics of excitatory biological neurons, is here trained and tested as a veritable dynamically assisted classifier. To this end the model is supplied with a set of planted attractors which are self-consistently embedded in the inter-nodes coupling matrix, via its spectral decomposition. Learning to classify amounts to sculpting the basins of attraction of the imposed equilibria, directing different items towards the corresponding destination target, which reflects the class to which each item pertains. A stochastic variant of the CVFR model is also studied and found to be robust to adversarial random attacks, which corrupt the items to be classified. This remarkable finding is one of the very many surprising effects which arise when noise and dynamical attributes are made to mutually resonate.
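The mechanics described above can be illustrated with a minimal Hopfield-style sketch (not the authors' exact CVFR construction; the ±1 patterns, gain `beta`, and integration parameters are illustrative assumptions): mutually orthogonal patterns are planted in the coupling matrix via their outer products, the firing-rate dynamics is integrated forward, and an item's class is read off as the planted attractor with the largest overlap. The stochastic variant simply adds noise along the trajectory.

```python
import numpy as np

# Sylvester construction of a Hadamard matrix: its rows are mutually
# orthogonal +/-1 vectors that serve as the planted attractor patterns.
def hadamard(n):
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N, C, beta = 16, 3, 2.0
patterns = hadamard(N)[1:C + 1]          # one +/-1 pattern per class
W = patterns.T @ patterns / N            # outer-product (spectral) embedding

def evolve(x, steps=400, dt=0.05, noise=0.0, rng=None):
    """Firing-rate dynamics dx/dt = -x + W tanh(beta x), optionally noisy."""
    for _ in range(steps):
        x = x + dt * (-x + W @ np.tanh(beta * x))
        if noise > 0.0:
            x = x + np.sqrt(dt) * noise * rng.standard_normal(N)
    return x

def classify(item, noise=0.0, rng=None):
    x = evolve(item.astype(float), noise=noise, rng=rng)
    return int(np.argmax(patterns @ x))  # largest overlap with a planted pattern

rng = np.random.default_rng(0)
item = 0.7 * patterns[1] + 0.2 * rng.standard_normal(N)  # corrupted class-1 item
print(classify(item))                       # deterministic classifier
print(classify(item, noise=0.05, rng=rng))  # stochastic variant
```

With this gain, each planted pattern is an exact stable equilibrium (the overlap m solves m = tanh(beta*m)), so a corrupted item relaxes onto the attractor of its class.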
Related papers
- Dynamic Post-Hoc Neural Ensemblers [55.15643209328513]
In this study, we explore employing neural networks as ensemble methods.
Motivated by the risk of learning low-diversity ensembles, we propose regularizing the model by randomly dropping base model predictions.
We demonstrate that this approach lower-bounds the diversity within the ensemble, reducing overfitting and improving generalization capabilities.
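The prediction-dropping idea can be sketched as dropout applied across ensemble members (a hedged illustration, not the paper's architecture; the averaging combiner and `drop_prob` are assumptions): during training, each base model's prediction is zeroed with some probability and the surviving mixture weights are renormalised, so the combiner cannot over-rely on any single base model.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_forward(base_preds, weights, drop_prob=0.0, training=False):
    """Weighted average of base-model predictions; during training each
    base model's prediction is dropped with probability drop_prob and the
    surviving weights are renormalised, dropout-style."""
    base_preds = np.asarray(base_preds)          # shape: (n_models, n_classes)
    w = np.array(weights, dtype=float)
    if training and drop_prob > 0.0:
        mask = rng.random(len(w)) >= drop_prob   # keep each model w.p. 1 - p
        if not mask.any():                       # never drop the whole ensemble
            mask[rng.integers(len(w))] = True
        w = w * mask
    w = w / w.sum()                              # renormalise mixture weights
    return w @ base_preds                        # combined class probabilities

# Three base models' class-probability outputs for one input:
preds = [[0.7, 0.2, 0.1],
         [0.6, 0.3, 0.1],
         [0.1, 0.8, 0.1]]
print(ensemble_forward(preds, [1.0, 1.0, 1.0]))                      # inference
print(ensemble_forward(preds, [1.0, 1.0, 1.0], 0.5, training=True))  # training
```

At inference no dropping occurs, so the forward pass reduces to the plain weighted average.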
arXiv Detail & Related papers (2024-10-06T15:25:39Z)
- Bias in Motion: Theoretical Insights into the Dynamics of Bias in SGD Training [7.5041863920639456]
Machine learning systems often acquire biases by leveraging undesired features in the data, impacting accuracy across different sub-populations.
This paper explores the evolution of bias in a teacher-student setup modeling different data sub-populations with a Gaussian-mixture model.
Applying our findings to fairness and robustness, we delineate how and when heterogeneous data and spurious features can generate and amplify bias.
arXiv Detail & Related papers (2024-05-28T15:50:10Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Stable Attractors for Neural networks classification via Ordinary Differential Equations (SA-nODE) [0.9786690381850358]
The system is constructed a priori to accommodate a set of pre-assigned stationary stable attractors.
The inherent ability to perform classification is reflected in the shaped basins of attraction associated with each of the target stable attractors.
Although this method does not reach the performance of state-of-the-art deep learning algorithms, it illustrates that continuous dynamical systems with closed analytical interaction terms can still serve as effective classifiers.
arXiv Detail & Related papers (2023-11-17T08:30:41Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Locality-aware Attention Network with Discriminative Dynamics Learning for Weakly Supervised Anomaly Detection [0.8883733362171035]
We propose a Discriminative Dynamics Learning (DDL) method with two objective functions, i.e., dynamics ranking loss and dynamics alignment loss.
A Locality-aware Attention Network (LA-Net) is constructed to capture global correlations and re-calibrate the location preference across snippets, followed by a multilayer perceptron with causal convolution to obtain anomaly scores.
arXiv Detail & Related papers (2022-08-11T04:27:33Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on the CIFAR-10LT, CIFAR-100LT and WebVision datasets, observing that Prototypical obtains substantial improvements over state-of-the-art methods.
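The parameter-free idea can be sketched as a nearest-prototype rule (a minimal illustration, not the paper's exact recipe; the Euclidean metric and toy data are assumptions): each class is represented by the mean embedding of its training samples, so predictions stay comparable across classes regardless of how many samples each class contributes.

```python
import numpy as np

def fit_prototypes(embeddings, labels, n_classes):
    """One prototype per class: the mean embedding of its training samples.
    The mean is unaffected by how many samples a class has, and nothing
    beyond the embedding network needs to be fitted."""
    embeddings = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    return np.stack([embeddings[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def predict(embeddings, prototypes):
    """Assign each sample to its nearest prototype (Euclidean distance)."""
    d = np.linalg.norm(np.asarray(embeddings, dtype=float)[:, None]
                       - prototypes[None], axis=-1)
    return d.argmin(axis=1)

# Imbalanced toy embeddings: 6 samples of class 0, 2 of class 1.
X = [[0.0, 0.1], [0.1, 0.0], [0.2, 0.1], [0.1, 0.2], [0.0, 0.0], [0.2, 0.0],
     [2.0, 2.1], [2.1, 1.9]]
y = [0, 0, 0, 0, 0, 0, 1, 1]
protos = fit_prototypes(X, y, n_classes=2)
print(predict([[0.1, 0.1], [2.0, 2.0]], protos))  # -> [0 1]
```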
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections [73.95786440318369]
We focus on the so-called "implicit effect" of GNIs, which is the effect of the injected noise on the dynamics of stochastic gradient descent (SGD).
We show that this effect induces an asymmetric heavy-tailed noise on gradient updates.
We then formally prove that GNIs induce an "implicit bias", which varies depending on the heaviness of the tails and the level of asymmetry.
arXiv Detail & Related papers (2021-02-13T21:28:09Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- The Role of Isomorphism Classes in Multi-Relational Datasets [6.419762264544509]
We show that isomorphism leakage overestimates performance in multi-relational inference.
We propose isomorphism-aware synthetic benchmarks for model evaluation.
We also demonstrate that isomorphism classes can be utilised through a simple prioritisation scheme.
arXiv Detail & Related papers (2020-09-30T12:15:24Z)
- Dynamical hysteresis properties of the driven-dissipative Bose-Hubbard model with a Gutzwiller Monte Carlo approach [0.0]
We study the dynamical properties of a driven-dissipative Bose-Hubbard model in the strongly interacting regime.
We take classical and quantum correlations into account.
We show that approximation techniques relying on a unimodal distribution such as the mean field and $1/z$ expansion drastically underestimate the particle number fluctuations.
arXiv Detail & Related papers (2020-08-17T13:50:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.