A Manifesto for Applicable Formal Methods
- URL: http://arxiv.org/abs/2112.12758v2
- Date: Tue, 22 Aug 2023 09:38:02 GMT
- Title: A Manifesto for Applicable Formal Methods
- Authors: Mario Gleirscher and Jaco van de Pol and Jim Woodcock
- Abstract summary: Our hypothesis is that formal methods still seem not to be applicable enough or ready for their intended use.
This manifesto strives to foster an increased use of formal methods to the maximum benefit.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Formal methods have frequently been shown to be effective and, perhaps because of that, practitioners are interested in using them more often. Still, these methods are applied far less than expected, particularly in critical domains where they are strongly recommended and where they have the greatest potential. Our hypothesis is that formal methods still seem not to be applicable enough or ready for their intended use. In critical software engineering, what do we mean when we speak of a formal method? And what does it mean for such a method to be applicable from both a scientific and a practical viewpoint? Based on what the literature tells us about the first question, we lay out with this manifesto a set of principles that, when followed by a formal method, give rise to its mature applicability in a given scope. Rather than exercising criticism of past developments, this manifesto strives to foster an increased use of formal methods to the maximum benefit.
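To make the question concrete, below is a minimal, hedged sketch of one kind of formal method in practice: explicit-state model checking of a safety property. The toy lock protocol, the state encoding, and all names are invented for illustration and are not taken from the manifesto itself.

```python
# Hedged illustration only: a tiny explicit-state model checker for a made-up
# two-process lock protocol. Nothing here is taken from the manifesto itself.
from collections import deque

# A state is (pc0, pc1, lock): each process is "idle", "trying" or "critical";
# lock is None or the index of the process currently holding it.
INIT = ("idle", "idle", None)

def successors(state):
    """All states reachable in one step under interleaving semantics."""
    pc0, pc1, lock = state
    pcs = [pc0, pc1]
    for i in (0, 1):
        if pcs[i] == "idle":
            yield _set(pcs, i, "trying", lock)
        elif pcs[i] == "trying" and lock is None:
            yield _set(pcs, i, "critical", i)      # acquire the lock
        elif pcs[i] == "critical":
            yield _set(pcs, i, "idle", None)       # release the lock

def _set(pcs, i, new_pc, lock):
    out = list(pcs)
    out[i] = new_pc
    return (out[0], out[1], lock)

def mutual_exclusion(state):
    """Safety property: the two processes are never critical simultaneously."""
    return not (state[0] == "critical" and state[1] == "critical")

def check(init, succ, holds):
    """Breadth-first search over all reachable states; return a violating state or None."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not holds(s):
            return s                               # counterexample
        for t in succ(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None                                    # property verified for every reachable state

if __name__ == "__main__":
    bad = check(INIT, successors, mutual_exclusion)
    if bad is None:
        print("mutual exclusion holds for all reachable states")
    else:
        print("counterexample state:", bad)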
Related papers
- Advocate for Complete Benchmarks for Formal Reasoning with Formal/Informal Statements and Formal/Informal Proofs [0.0]
We take the position that open code, open data, and benchmarks that are complete and error-free will accelerate progress in this field.
We identify practices that create barriers to contributing to this field and suggest ways to remove them.
arXiv Detail & Related papers (2025-07-07T07:27:45Z)
- On The Expressivity of Objective-Specification Formalisms in Reinforcement Learning [4.998202587873575]
We compare objective-specification formalisms in reinforcement learning.
No formalism is both dominantly expressive and straightforward to optimise with current techniques.
Results highlight the need for future research which adapts reward learning to work with a greater variety of formalisms.
arXiv Detail & Related papers (2023-10-18T09:46:01Z)
- Understanding prompt engineering may not require rethinking generalization [56.38207873589642]
We show that the discrete nature of prompts, combined with a PAC-Bayes prior given by a language model, results in generalization bounds that are remarkably tight by the standards of the literature.
This work provides a possible justification for the widespread practice of prompt engineering; a sketch of the underlying bound follows this entry.
arXiv Detail & Related papers (2023-10-06T00:52:48Z)
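A hedged sketch of the reasoning as we read it (our notation, not a formula quoted from the paper): in one standard PAC-Bayes bound for a [0,1]-bounded loss, a posterior concentrated on a single discrete prompt makes the KL complexity term collapse to the prior's negative log-probability of that prompt, which a language-model prior keeps small for fluent prompts.

```latex
% One standard (McAllester-style) PAC-Bayes bound, stated for orientation only:
% with probability at least $1-\delta$ over an i.i.d. sample of size $n$, for every posterior $Q$,
\[
  L(Q) \;\le\; \hat{L}(Q) \;+\; \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln\!\bigl(2\sqrt{n}/\delta\bigr)}{2n}} .
\]
% If $Q$ places all of its mass on a single discrete prompt $w$ with $P(w) > 0$, then
\[
  \mathrm{KL}(Q\,\|\,P) \;=\; -\ln P(w),
\]
% so a language-model prior $P$ that assigns high probability to $w$ yields a small complexity term.
```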
- Provable Benefits of Policy Learning from Human Preferences in Contextual Bandit Problems [82.92678837778358]
Preference-based methods have demonstrated substantial success in empirical applications such as InstructGPT.
We show how human bias and uncertainty in feedback modeling can affect the theoretical guarantees of these approaches.
arXiv Detail & Related papers (2023-07-24T17:50:24Z)
- Ablation Path Saliency [0.0]
Several types of saliency methods have been proposed for explaining black-box classification.
We observe, however, that several of these methods can be seen as edge cases of a single, more general procedure.
We furthermore demonstrate that ablation paths can be used directly as a technique in their own right.
arXiv Detail & Related papers (2022-09-26T07:00:40Z)
- Transductive Learning for Unsupervised Text Style Transfer [60.65782243927698]
Unsupervised style transfer models are mainly based on an inductive learning approach.
We propose a novel transductive learning approach based on a retrieval-based context-aware style representation.
arXiv Detail & Related papers (2021-09-16T08:57:20Z)
- Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation [63.18666008322476]
Machine learning methods are being increasingly applied in sensitive societal contexts.
The present case study has two main objectives. First, to expose the challenges that arise in such contexts and how they affect the use of relevant and novel explanation methods.
And second, to present a set of strategies that mitigate such challenges when implementing explanation methods in a relevant application domain.
arXiv Detail & Related papers (2021-04-09T01:54:58Z)
- Explainable Machine Learning for Public Policy: Use Cases, Gaps, and Research Directions [6.68777133358979]
We develop a taxonomy of explainability use-cases within public policy problems.
We define the end-users of explanations and the specific goals explainability has to fulfill.
We map existing work to these use-cases, identify gaps, and propose research directions to fill those gaps.
arXiv Detail & Related papers (2020-10-27T15:37:00Z)
- Temporal-Logic-Based Reward Shaping for Continuing Learning Tasks [57.17673320237597]
In continuing tasks, average-reward reinforcement learning may be a more appropriate problem formulation than the more common discounted reward formulation.
This paper presents the first reward shaping framework for average-reward learning.
It proves that, under standard assumptions, the optimal policy under the original reward function can be recovered; a sketch of the classical discounted-case shaping identity follows this entry.
arXiv Detail & Related papers (2020-07-03T05:06:57Z)
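For context, a hedged sketch of the classical discounted-case result that this line of work generalizes (the paper's own average-reward construction is not reproduced here): potential-based shaping adds a difference of potentials to the reward without changing which policies are optimal.

```latex
% Potential-based reward shaping (Ng, Harada and Russell, 1999), discounted setting:
% for any potential function $\Phi : S \to \mathbb{R}$, replace the reward by
\[
  r'(s, a, s') \;=\; r(s, a, s') \;+\; \gamma\,\Phi(s') \;-\; \Phi(s),
\]
% and the shaped MDP has exactly the same optimal policies as the original one.
% The entry above reports an analogous recovery guarantee for the average-reward formulation.
```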
- Formal Methods: From Academia to Industrial Practice. A Travel Guide [46.04829474801786]
No real change in industrial software development seems to be occurring.
The gap between what formal methods can achieve and daily software-development practice does not appear to be getting smaller.
arXiv Detail & Related papers (2020-02-17T22:09:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.