A Generalised Framework for Property-Driven Machine Learning
- URL: http://arxiv.org/abs/2505.00466v1
- Date: Thu, 01 May 2025 11:33:38 GMT
- Title: A Generalised Framework for Property-Driven Machine Learning
- Authors: Thomas Flinkow, Marco Casadio, Colin Kessler, Rosemary Monahan, Ekaterina Komendantskaya,
- Abstract summary: We show how two complementary approaches can be unified within a single framework for property-driven machine learning. We show that well-known properties from the literature are subcases of this general approach. Our framework is publicly available at https://github.com/tflinkow/property-driven-ml.
- Score: 1.0485739694839669
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks have been shown to frequently fail to satisfy critical safety and correctness properties after training, highlighting the pressing need for training methods that incorporate such properties directly. While adversarial training can be used to improve robustness to small perturbations within $\epsilon$-cubes, domains other than computer vision -- such as control systems and natural language processing -- may require more flexible input region specifications via generalised hyper-rectangles. Meanwhile, differentiable logics offer a way to encode arbitrary logical constraints as additional loss terms that guide the learning process towards satisfying these constraints. In this paper, we investigate how these two complementary approaches can be unified within a single framework for property-driven machine learning. We show that well-known properties from the literature are subcases of this general approach, and we demonstrate its practical effectiveness on a case study involving a neural network controller for a drone system. Our framework is publicly available at https://github.com/tflinkow/property-driven-ml.
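The abstract describes two ingredients being unified: input regions specified as generalised hyper-rectangles, and differentiable logics that turn logical constraints into extra loss terms. The following is a minimal sketch of that combination in the style of differentiable-logic losses such as DL2; all names (`loss_leq`, `property_loss`, etc.) are illustrative and are not the API of the linked repository. Each constraint is translated into a non-negative penalty that is zero exactly when the constraint holds.

```python
import random

# Illustrative sketch (not the paper's actual API): logical constraints
# become non-negative loss terms that vanish exactly when satisfied,
# evaluated over an input region given as a generalised hyper-rectangle.

def loss_leq(a, b):
    """Encodes a <= b; zero iff the comparison holds."""
    return max(a - b, 0.0)

def loss_and(l1, l2):
    """Conjunction: both penalties must vanish for the loss to be zero."""
    return l1 + l2

def loss_or(l1, l2):
    """Disjunction: the loss is zero if either penalty vanishes."""
    return l1 * l2

def sample_hyperrectangle(lo, hi, rng):
    """Draw one point from the box [lo_i, hi_i] in each dimension."""
    return [rng.uniform(l, h) for l, h in zip(lo, hi)]

def property_loss(f, lo, hi, n=128, seed=0):
    """Mean constraint loss for an example output-bound property:
    for all x in the hyper-rectangle [lo, hi], 0 <= f(x) <= 1."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = f(sample_hyperrectangle(lo, hi, rng))
        total += loss_and(loss_leq(0.0, y), loss_leq(y, 1.0))
    return total / n
```

In training, a term like `property_loss` would be added to the ordinary task loss, steering the model towards inputs-region-wide constraint satisfaction rather than pointwise $\epsilon$-cube robustness alone. For instance, a model that always outputs 0.5 incurs zero property loss, while one whose outputs fall below 0 on the box is penalised.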
Related papers
- Gradient Routing: Masking Gradients to Localize Computation in Neural Networks [43.0686937643683]
We introduce gradient routing, a training method that isolates capabilities to specific subregions of a neural network. We show that gradient routing can be used to learn representations which are partitioned in an interpretable way. We conclude that the approach holds promise for challenging, real-world applications where quality data are scarce.
arXiv Detail & Related papers (2024-10-06T02:43:49Z) - A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need of external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open to novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z) - Training Neural Networks with Internal State, Unconstrained Connectivity, and Discrete Activations [66.53734987585244]
True intelligence may require the ability of a machine learning model to manage internal state.
We show that we have not yet discovered the most effective algorithms for training such models.
We present one attempt to design such a training algorithm, applied to an architecture with binary activations and only a single matrix of weights.
arXiv Detail & Related papers (2023-12-22T01:19:08Z) - Towards fuzzification of adaptation rules in self-adaptive architectures [2.730650695194413]
We focus on exploiting neural networks for the analysis and planning stage in self-adaptive architectures.
One simple option to address such a need is to replace the reasoning based on logical rules with a neural network.
We show how to navigate in this continuum and create a neural network architecture that naturally embeds the original logical rules.
arXiv Detail & Related papers (2021-12-17T12:17:16Z) - A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z) - Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z) - Multi-Agent Reinforcement Learning with Temporal Logic Specifications [65.79056365594654]
We study the problem of learning to satisfy temporal logic specifications with a group of agents in an unknown environment.
We develop the first multi-agent reinforcement learning technique for temporal logic specifications.
We provide correctness and convergence guarantees for our main algorithm.
arXiv Detail & Related papers (2021-02-01T01:13:03Z) - Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks could meet the constraints.
A key ingredient of building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z) - Learning for Integer-Constrained Optimization through Neural Networks with Limited Training [28.588195947764188]
We introduce a symmetric and decomposed neural network structure, which is fully interpretable in terms of the functionality of its constituent components.
By taking advantage of the underlying pattern of the integer constraint, the introduced neural network offers superior generalization performance with limited training.
We show that the introduced decomposed approach can be further extended to semi-decomposed frameworks.
arXiv Detail & Related papers (2020-11-10T21:17:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.