Compliance checking in reified IO logic via SHACL
- URL: http://arxiv.org/abs/2110.07033v1
- Date: Wed, 13 Oct 2021 21:09:47 GMT
- Title: Compliance checking in reified IO logic via SHACL
- Authors: Livio Robaldo and Kolawole J. Adebayo
- Abstract summary: Reified Input/Output (I/O) logic [21] has recently been proposed to model real-world norms in terms of the logic in [11].
This paper presents a methodology to carry out compliance checking on reified I/O logic formulae.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reified Input/Output (I/O) logic [21] has recently been proposed to model
real-world norms in terms of the logic in [11]. It is massively grounded on
the notion of reification, and it has been specifically designed to model the meaning
of natural language sentences, such as the ones occurring in existing legislation.
This paper presents a methodology to carry out compliance checking on reified
I/O logic formulae. These are translated into SHACL (Shapes Constraint Language)
shapes, a recent W3C recommendation to validate and reason with RDF
triplestores. Compliance checking is then enforced by validating RDF graphs
describing states of affairs with respect to these SHACL shapes.
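The paper's actual translation from reified I/O logic formulae to SHACL shapes is not reproduced here. As a minimal sketch of the final validation step only, the snippet below encodes a hypothetical obligation (every ex:Invoice must carry an ex:paymentDate) as a SHACL shape and validates a small RDF state of affairs against it; the rdflib and pyshacl libraries and all ex: names are illustrative assumptions, not prescribed by the paper.

```python
# Minimal sketch of SHACL-based compliance checking (hypothetical norm and data).
# Requires: pip install rdflib pyshacl
from rdflib import Graph
from pyshacl import validate

# Hypothetical obligation: every invoice must carry a payment date.
SHAPES = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/norms#> .

ex:InvoiceShape a sh:NodeShape ;
    sh:targetClass ex:Invoice ;
    sh:property [
        sh:path ex:paymentDate ;
        sh:minCount 1 ;
        sh:message "Non-compliant: invoice without a payment date." ;
    ] .
"""

# State of affairs: one compliant and one non-compliant invoice.
DATA = """
@prefix ex: <http://example.org/norms#> .

ex:inv1 a ex:Invoice ; ex:paymentDate "2021-10-13" .
ex:inv2 a ex:Invoice .
"""

data_graph = Graph().parse(data=DATA, format="turtle")
shapes_graph = Graph().parse(data=SHAPES, format="turtle")

conforms, report_graph, report_text = validate(data_graph, shacl_graph=shapes_graph)
print("Compliant:", conforms)   # False: ex:inv2 violates the shape
print(report_text)              # Human-readable validation report
```

The validation report graph lists the violating focus nodes (here ex:inv2), which is what a compliance-checking pipeline would inspect to report non-compliance.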
Related papers
- FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving [53.43068330741449]
We propose FVEL, an interactive formal verification environment with large language models (LLMs)
FVEL transforms the code to be verified into Isabelle, and then conducts verification via neural automated theorem proving with an LLM.
The FVELER dataset includes code dependencies and verification processes that are formulated in Isabelle, containing 758 theories, 29,125 lemmas, and 200,646 proof steps in total.
arXiv Detail & Related papers (2024-06-20T15:31:05Z) - SHACL2FOL: An FOL Toolkit for SHACL Decision Problems [0.4895118383237099]
We introduce SHACL2FOL, the first automatic tool that translates SHACL documents into FOL sentences.
The tool computes the answer to the two static analysis problems of satisfiability and containment.
It also allows testing the validity of a graph with respect to a set of constraints (a toy illustration of the satisfiability and containment checks is sketched after this list).
arXiv Detail & Related papers (2024-06-12T09:20:25Z) - Log Probabilities Are a Reliable Estimate of Semantic Plausibility in Base and Instruction-Tuned Language Models [50.15455336684986]
We evaluate the effectiveness of LogProbs and basic prompting to measure semantic plausibility.
We find that LogProbs offers a more reliable measure of semantic plausibility than direct zero-shot prompting.
We conclude that, even in the era of prompt-based evaluations, LogProbs constitute a useful metric of semantic plausibility (a minimal log-probability scoring sketch appears after this list).
arXiv Detail & Related papers (2024-03-21T22:08:44Z) - A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains [33.46649770312231]
Prompting language models to provide step-by-step answers is a prominent approach for complex reasoning tasks.
No fine-grained step-level datasets are available to enable thorough evaluation of such verification methods.
We introduce REVEAL: Reasoning Verification Evaluation, a dataset to benchmark automatic verifiers of complex Chain-of-Thought reasoning.
arXiv Detail & Related papers (2024-02-01T12:46:45Z) - Factcheck-Bench: Fine-Grained Evaluation Benchmark for Automatic Fact-checkers [121.53749383203792]
We present a holistic end-to-end solution for annotating the factuality of large language models (LLMs)-generated responses.
We construct an open-domain document-level factuality benchmark in three-level granularity: claim, sentence and document.
Preliminary experiments show that FacTool, FactScore and Perplexity struggle to identify false claims.
arXiv Detail & Related papers (2023-11-15T14:41:57Z) - Semantic Role Labeling Meets Definition Modeling: Using Natural Language
to Describe Predicate-Argument Structures [104.32063681736349]
We present an approach to describe predicate-argument structures using natural language definitions instead of discrete labels.
Our experiments and analyses on PropBank-style and FrameNet-style, dependency-based and span-based SRL also demonstrate that a flexible model with an interpretable output does not necessarily come at the expense of performance.
arXiv Detail & Related papers (2022-12-02T11:19:16Z) - RobustLR: Evaluating Robustness to Logical Perturbation in Deductive
Reasoning [25.319674132967553]
Transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in English natural language.
We propose RobustLR to evaluate the robustness of these models to minimal logical edits in rulebases.
We find that the models trained in prior works do not perform consistently on the different perturbations in RobustLR.
arXiv Detail & Related papers (2022-05-25T09:23:50Z) - Logical Satisfiability of Counterfactuals for Faithful Explanations in
NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z) - Stateless and Rule-Based Verification For Compliance Checking
Applications [1.7403133838762452]
We present a formal logic-based framework for creating intelligent compliance checking systems.
SARV is a verification framework designed to simplify the overall process of verification for stateless and rule-based verification problems.
Based on 300 data experiments, the SARV-based compliance solution outperforms machine learning methods on a 3,125-record software quality dataset.
arXiv Detail & Related papers (2022-04-14T17:31:33Z) - A Review of SHACL: From Data Validation to Schema Reasoning for RDF
Graphs [3.274290296343038]
We present an introduction and a review of the Shapes Constraint Language (SHACL), the W3C recommendation language for validating RDF data.
A SHACL document describes a set of constraints on RDF nodes, and a graph is valid with respect to the document if its nodes satisfy these constraints.
arXiv Detail & Related papers (2021-12-02T17:28:45Z) - Logical Natural Language Generation from Open-Domain Tables [107.04385677577862]
We propose a new task where a model is tasked with generating natural language statements that can be logically entailed by the facts.
To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset (Chen et al., 2019), featuring a wide range of logical/symbolic inferences.
The new task poses challenges to the existing monotonic generation frameworks due to the mismatch between sequence order and logical order.
arXiv Detail & Related papers (2020-04-22T06:03:10Z)