Antifragility Predicts the Robustness and Evolvability of Biological
Networks through Multi-class Classification with a Convolutional Neural
Network
- URL: http://arxiv.org/abs/2002.01571v2
- Date: Wed, 2 Sep 2020 10:33:00 GMT
- Title: Antifragility Predicts the Robustness and Evolvability of Biological
Networks through Multi-class Classification with a Convolutional Neural
Network
- Authors: Hyobin Kim, Stalin Muñoz, Pamela Osuna, and Carlos Gershenson
- Abstract summary: We develop a method to estimate the robustness and evolvability of biological networks without an explicit comparison of functions.
Using the differences in antifragility between the original and mutated networks, we train and test a convolutional neural network (CNN) to classify networks by their robustness and evolvability.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robustness and evolvability are essential properties for the
evolution of biological networks. To determine whether a biological network is
robust and/or evolvable, its functions must be compared before and after
mutations. However, this can incur a high computational cost as the network
size grows. Here we develop a predictive method to estimate the robustness and
evolvability of biological networks without an explicit comparison of
functions. We measure antifragility in Boolean network models of biological
systems and use it as the predictor. Antifragility occurs when a system
benefits from external perturbations. Using the differences in antifragility
between the original and mutated biological networks, we train and test a
convolutional neural network (CNN) to classify networks by their robustness
and evolvability. Our CNN model classified these properties successfully. We
therefore conclude that our antifragility measure can be used as a predictor
of the robustness and evolvability of biological networks.
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - PhyloGFN: Phylogenetic inference with generative flow networks [57.104166650526416]
We introduce the framework of generative flow networks (GFlowNets) to tackle two core problems in phylogenetics: parsimony-based and Bayesian phylogenetic inference.
Because GFlowNets are well-suited for sampling complex structures, they are a natural choice for exploring and sampling from the multimodal posterior distribution over tree topologies.
We demonstrate that our amortized posterior sampler, PhyloGFN, produces diverse and high-quality evolutionary hypotheses on real benchmark datasets.
arXiv Detail & Related papers (2023-10-12T23:46:08Z) - Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - Certified Invertibility in Neural Networks via Mixed-Integer Programming [16.64960701212292]
Neural networks are known to be vulnerable to adversarial attacks.
Conversely, there may exist large, meaningful perturbations that do not affect the network's decision.
We discuss how our findings can be useful for invertibility certification in transformations between neural networks.
arXiv Detail & Related papers (2023-01-27T15:40:38Z) - Impact of spiking neurons leakages and network recurrences on
event-based spatio-temporal pattern recognition [0.0]
Spiking neural networks coupled with neuromorphic hardware and event-based sensors are attracting increasing interest for low-latency, low-power inference at the edge.
We explore the impact of synaptic and membrane leakages in spiking neurons.
arXiv Detail & Related papers (2022-11-14T21:34:02Z) - Understanding Adversarial Robustness from Feature Maps of Convolutional
Layers [23.42376264664302]
The anti-perturbation ability of a neural network relies not only on its model capacity but also on the feature maps of its convolutional layers.
We study the anti-perturbation ability of the network from the feature maps of convolutional layers.
Non-trivial improvements in terms of both natural accuracy and adversarial robustness can be achieved under various attack and defense mechanisms.
arXiv Detail & Related papers (2022-02-25T00:14:59Z) - The mathematics of adversarial attacks in AI -- Why deep learning is
unstable despite the existence of stable neural networks [69.33657875725747]
We prove that any training procedure based on training neural networks for classification problems with a fixed architecture will yield neural networks that are either inaccurate or unstable (if accurate).
The key is that stable and accurate neural networks must have dimensions that vary with the input; in particular, variable dimensionality is a necessary condition for stability.
Our result points towards the paradox that accurate and stable neural networks exist; however, modern algorithms do not compute them.
arXiv Detail & Related papers (2021-09-13T16:19:25Z) - Immuno-mimetic Deep Neural Networks (Immuno-Net) [15.653578249331982]
We introduce a new type of biomimetic model, one that borrows concepts from the immune system.
This immuno-mimetic model leads to a new computational biology framework for robustification of deep neural networks.
We show that Immuno-net RAILS improves the adversarial accuracy of a baseline method by as much as 12.5%.
arXiv Detail & Related papers (2021-06-27T16:45:23Z) - And/or trade-off in artificial neurons: impact on adversarial robustness [91.3755431537592]
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z) - Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this recurrent generative feedback design on convolutional neural networks (CNNs), yielding CNN-F.
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.