Towards fuzzification of adaptation rules in self-adaptive architectures
- URL: http://arxiv.org/abs/2112.09468v1
- Date: Fri, 17 Dec 2021 12:17:16 GMT
- Title: Towards fuzzification of adaptation rules in self-adaptive architectures
- Authors: Tomáš Bureš, Petr Hnětynka, Martin Kruliš, Danylo Khalyeyev,
Sebastian Hahner, Stephan Seifermann, Maximilian Walter, Robert Heinrich
- Abstract summary: We focus on exploiting neural networks for the analysis and planning stage in self-adaptive architectures.
One simple option to address such a need is to replace the reasoning based on logical rules with a neural network.
We show how to navigate in this continuum and create a neural network architecture that naturally embeds the original logical rules.
- Score: 2.730650695194413
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we focus on exploiting neural networks for the analysis and
planning stage in self-adaptive architectures. The studied motivating cases in
the paper involve existing (legacy) self-adaptive architectures and their
adaptation logic, which has been specified by logical rules. We further assume
that there is a need to endow these systems with the ability to learn based on
examples of inputs and expected outputs. One simple option to address such a
need is to replace the reasoning based on logical rules with a neural network.
However, this step brings several problems that often cause at least a
temporary regression. The reason is that the logical rules typically represent
a large and tested body of domain knowledge, which may be lost if the logical
rules are
replaced by a neural network. Further, the black-box nature of generic neural
networks obfuscates how the systems work inside and consequently introduces
more uncertainty. In this paper, we present a method that makes it possible to
endow existing self-adaptive architectures with the ability to learn using
neural networks, while preserving the domain knowledge encoded in the logical
rules. We introduce a continuum between the existing rule-based system and a
system based on a generic neural network. We show how to navigate in this
continuum and create a neural network architecture that naturally embeds the
original logical rules and how to gradually scale the learning potential of the
network, thus controlling the uncertainty inherent to all soft computing
models. We showcase and evaluate the approach on representative excerpts from
two larger real-life use cases.
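The core idea of replacing a crisp rule threshold with a differentiable, learnable counterpart can be illustrated by a minimal sketch. This is not the paper's exact construction; the rule, threshold, and steepness values below are hypothetical examples:

```python
import math

def crisp_rule(load):
    # Original logical adaptation rule: adapt when load exceeds a threshold.
    return load > 0.7

def fuzzified_rule(load, threshold=0.7, steepness=50.0):
    # Sigmoid relaxation of the same rule: as steepness grows, this
    # approaches the crisp rule; a softer steepness lets gradient-based
    # learning adjust the decision boundary from examples while the
    # initialization still embeds the original domain knowledge.
    return 1.0 / (1.0 + math.exp(-steepness * (load - threshold)))

# With a high steepness, the fuzzified rule closely matches the crisp one.
print(crisp_rule(0.9), round(fuzzified_rule(0.9), 3))
print(crisp_rule(0.5), round(fuzzified_rule(0.5), 3))
```

Scaling the steepness parameter down gradually moves the system along the continuum from the rule-based end toward a generic learnable model.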
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Reasoning Algorithmically in Graph Neural Networks [1.8130068086063336]
We aim to integrate the structured, rule-based reasoning of algorithms with the adaptive learning capabilities of neural networks.
This dissertation provides theoretical and practical contributions to this area of research.
arXiv Detail & Related papers (2024-02-21T12:16:51Z)
- Building artificial neural circuits for domain-general cognition: a primer on brain-inspired systems-level architecture [0.0]
We provide an overview of the hallmarks endowing biological neural networks with the functionality needed for flexible cognition.
As machine learning models become more complex, these principles may provide valuable directions in an otherwise vast space of possible architectures.
arXiv Detail & Related papers (2023-03-21T18:36:17Z)
- Extensions to Generalized Annotated Logic and an Equivalent Neural Architecture [4.855957436171202]
We propose a list of desirable criteria for neuro-symbolic systems and examine how some of the existing approaches address these criteria.
We then propose an extension to annotated generalized logic that allows for the creation of an equivalent neural architecture.
Unlike previous approaches that rely on continuous optimization for the training process, our framework is designed as a binarized neural network that uses discrete optimization.
arXiv Detail & Related papers (2023-02-23T17:39:46Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Knowledge-based Analogical Reasoning in Neuro-symbolic Latent Spaces [20.260546238369205]
We propose a framework that combines the pattern recognition abilities of neural networks with symbolic reasoning and background knowledge.
We take inspiration from the 'neural algorithmic reasoning' approach [DeepMind 2020] and use problem-specific background knowledge.
We test this on visual analogy problems in Raven's Progressive Matrices and achieve accuracy competitive with human performance.
arXiv Detail & Related papers (2022-09-19T04:03:20Z)
- The mathematics of adversarial attacks in AI -- Why deep learning is unstable despite the existence of stable neural networks [69.33657875725747]
We prove that any training procedure based on training neural networks for classification problems with a fixed architecture will yield neural networks that are either inaccurate or unstable (if accurate).
The key insight is that stable and accurate neural networks must have dimensions that vary with the input; in particular, variable dimensions are a necessary condition for stability.
Our result points towards the paradox that accurate and stable neural networks exist, yet modern algorithms do not compute them.
arXiv Detail & Related papers (2021-09-13T16:19:25Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, reflecting the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
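The edge-parameterization idea above can be sketched in a few lines. This is a simplified illustration with made-up node features; the actual method operates on network architectures rather than toy feature vectors:

```python
import math
import random

random.seed(0)

def soft_adjacency(logits):
    # Learnable edge parameters of a complete graph, squashed to (0, 1).
    # In training, these logits would be updated by gradient descent
    # jointly with the rest of the network's weights.
    return [[1.0 / (1.0 + math.exp(-l)) for l in row] for row in logits]

def aggregate(features, adjacency):
    # Each node's output mixes all nodes' features, scaled by the
    # learned connection magnitudes.
    n = len(features)
    return [sum(adjacency[i][j] * features[j] for j in range(n))
            for i in range(n)]

logits = [[random.gauss(0, 1) for _ in range(3)] for _ in range(3)]
out = aggregate([0.5, -1.0, 2.0], soft_adjacency(logits))
print(out)
```

Because the sigmoid keeps every edge weight differentiable, connectivity itself becomes part of the optimization rather than a fixed design choice.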
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
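A flavor of such weighted real-valued logic neurons can be sketched with a Łukasiewicz-style conjunction. This is a simplified illustration; the full LNN formulation also involves constrained weights and upper/lower truth bounds:

```python
def lnn_and(inputs, weights, beta=1.0):
    # Weighted real-valued conjunction in the style of Lukasiewicz logic:
    # with all weights and beta equal to 1, this reduces to the classical
    # Lukasiewicz AND, max(0, x1 + x2 - 1), for two inputs. The weights
    # and bias beta are the learnable parameters of the neuron.
    s = beta - sum(w * (1.0 - x) for w, x in zip(weights, inputs))
    return max(0.0, min(1.0, s))

# Behaves like Boolean AND on crisp truth values...
print(lnn_and([1.0, 1.0], [1.0, 1.0]))  # -> 1.0
print(lnn_and([1.0, 0.0], [1.0, 1.0]))  # -> 0.0
# ...and degrades gracefully on intermediate truth values.
print(lnn_and([0.8, 0.7], [1.0, 1.0]))
```

Because each neuron corresponds to a logical connective, the network's weights remain readable as a (weighted) logical formula after training.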
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
- A neural network model of perception and reasoning [0.0]
We show that a simple set of biologically consistent organizing principles confer these capabilities to neuronal networks.
We implement these principles in a novel machine learning algorithm, based on concept construction instead of optimization, to design deep neural networks that reason with explainable neuron activity.
arXiv Detail & Related papers (2020-02-26T06:26:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.