A first-order logic characterization of safety and co-safety languages
- URL: http://arxiv.org/abs/2209.02307v5
- Date: Wed, 9 Aug 2023 07:59:56 GMT
- Title: A first-order logic characterization of safety and co-safety languages
- Authors: Alessandro Cimatti, Luca Geatti, Nicola Gigante, Angelo Montanari,
Stefano Tonetta
- Abstract summary: Safety and co-safety languages, where a finite prefix suffices to establish whether a word does not belong or belongs to a language, play a crucial role in lowering the complexity of problems like model checking and reactive synthesis.
This paper introduces a fragment of FO-TLO, called SafetyFO, and its dual coSafetyFO, which are expressively complete with respect to the LTL-definable safety and co-safety languages.
- Score: 63.29821624186913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Linear Temporal Logic (LTL) is one of the most popular temporal
logics, coming into play in a variety of branches of computer science. Among
the various reasons for its widespread use are its strong foundational
properties: LTL
is equivalent to counter-free omega-automata, to star-free omega-regular
expressions, and (by Kamp's theorem) to the First-Order Theory of Linear Orders
(FO-TLO). Safety and co-safety languages, where a finite prefix suffices to
establish whether a word does not belong or belongs to the language,
respectively, play a crucial role in lowering the complexity of problems like
model checking and reactive synthesis for LTL. SafetyLTL (resp., coSafetyLTL)
is a fragment of LTL in which only universal (resp., existential) temporal
modalities are allowed; it recognizes safety (resp., co-safety) languages
only. The main contribution of this paper is the introduction of a fragment of
FO-TLO, called SafetyFO, and of its dual coSafetyFO, which are expressively
complete with respect to the LTL-definable safety and co-safety languages. We
prove that they exactly characterize SafetyLTL and coSafetyLTL, respectively, a
result that complements Kamp's theorem and provides a clearer view of the
characterization of (fragments of) LTL in terms of first-order languages. In
addition, it gives a direct, compact, and self-contained proof that any safety
language definable in LTL is definable in SafetyLTL as well. As a by-product,
we obtain some interesting results on the expressive power of the weak tomorrow
operator of SafetyLTL, interpreted over finite and infinite words. Moreover, we
prove that, when interpreted over finite words, SafetyLTL (resp., coSafetyLTL)
devoid of the tomorrow (resp., weak tomorrow) operator captures the safety
(resp., co-safety) fragment of LTL over finite words.
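For intuition, a minimal worked illustration of the safety/co-safety distinction described above (standard LTL facts in standard notation, not drawn verbatim from the paper); here p and q denote arbitrary atomic propositions.

% G p ("always p") is a safety property: any word violating it has a finite
% bad prefix, one ending at a position where p fails, and no infinite
% extension of that prefix can repair the violation.
\[ \mathsf{G}\,p \]
% F p ("eventually p") is a co-safety property: any word satisfying it has a
% finite good prefix, one reaching a position where p holds, and every
% infinite extension of that prefix still satisfies the formula.
\[ \mathsf{F}\,p \]
% The response property below is neither safety nor co-safety: no finite
% prefix by itself can certify membership or non-membership, since new
% obligations F q keep arising along the word.
\[ \mathsf{G}\,(p \rightarrow \mathsf{F}\,q) \]
% Accordingly, globally (G) and release (R) are universal modalities of the
% kind allowed in SafetyLTL, while eventually (F) and until (U) are
% existential modalities of the kind allowed in coSafetyLTL; the exact status
% of the tomorrow and weak tomorrow operators is the subtle point the paper
% analyzes over finite and infinite words.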
Related papers
- SafetyAnalyst: Interpretable, transparent, and steerable LLM safety moderation [56.10557932893919]
We present SafetyAnalyst, a novel LLM safety moderation framework.
Given a prompt, SafetyAnalyst creates a structured "harm-benefit tree".
It then aggregates this structured representation into a harmfulness score.
arXiv Detail & Related papers (2024-10-22T03:38:37Z) - CoT-TL: Low-Resource Temporal Knowledge Representation of Planning Instructions Using Chain-of-Thought Reasoning [0.0]
CoT-TL is a data-efficient in-context learning framework for translating natural language specifications into linear temporal logic (LTL) representations.
CoT-TL achieves state-of-the-art accuracy across three diverse datasets in low-data scenarios.
arXiv Detail & Related papers (2024-10-21T17:10:43Z) - Superficial Safety Alignment Hypothesis [8.297367440457508]
We propose the Superficial Safety Alignment Hypothesis (SSAH), which posits that safety alignment should teach an otherwise unsafe model to choose the correct reasoning direction.
We identify four types of attribute-critical components in safety-aligned large language models (LLMs).
Our findings show that freezing certain safety-critical components (7.5%) during fine-tuning allows the model to retain its safety attributes while adapting to new tasks.
arXiv Detail & Related papers (2024-10-07T19:53:35Z) - A Framework for Real-time Safeguarding the Text Generation of Large Language Model [12.683042228674694]
Large Language Models (LLMs) have significantly advanced natural language processing (NLP) tasks.
They pose ethical and societal risks due to their propensity to generate harmful content.
We propose LLMSafeGuard, a lightweight framework to safeguard LLM text generation in real-time.
arXiv Detail & Related papers (2024-04-29T18:40:01Z) - CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion [117.178835165855]
This paper introduces CodeAttack, a framework that transforms natural language inputs into code inputs.
Our studies reveal a new and universal safety vulnerability of these models against code input.
We find that a larger distribution gap between CodeAttack and natural language leads to weaker safety generalization.
arXiv Detail & Related papers (2024-03-12T17:55:38Z) - Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs [95.07757789781213]
Two lines of approaches are adopted for complex reasoning with LLMs.
One line of work prompts LLMs with various reasoning structures, and the structured outputs can be naturally regarded as intermediate reasoning steps.
The other line of work adopts LLM-free declarative solvers to do the reasoning task, achieving higher reasoning accuracy but lacking interpretability due to the black-box nature of the solvers.
We present a simple extension to the latter line of work. Specifically, we showcase that the intermediate search logs generated by Prolog interpreters can be accessed and interpreted into human-readable reasoning proofs.
arXiv Detail & Related papers (2023-11-16T11:26:21Z) - Latent Jailbreak: A Benchmark for Evaluating Text Safety and Output Robustness of Large Language Models [28.37026309925163]
Large language models (LLMs) are designed to align with human values and generate safe text.
Previous benchmarks for jailbreaking LLMs have primarily focused on evaluating the safety of the models.
This paper assesses both the safety and robustness of LLMs, emphasizing the need for a balanced approach.
arXiv Detail & Related papers (2023-07-17T13:49:52Z) - Provable Adversarial Robustness for Fractional Lp Threat Models [136.79415677706612]
Attacks bounded by fractional L_p "norms" have yet to be thoroughly considered.
We propose a defense with several desirable properties.
It provides provable (certified) robustness, scales to ImageNet, and yields deterministic (rather than high-probability) certified guarantees.
arXiv Detail & Related papers (2022-03-16T21:11:41Z) - Trojaning Language Models for Fun and Profit [53.45727748224679]
TROJAN-LM is a new class of trojaning attacks in which maliciously crafted LMs trigger host NLP systems to malfunction.
By empirically studying three state-of-the-art LMs in a range of security-critical NLP tasks, we demonstrate that TROJAN-LM is both effective and stealthy.
arXiv Detail & Related papers (2020-08-01T18:22:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.