SHACL Validation under Graph Updates (Extended Paper)
- URL: http://arxiv.org/abs/2508.00137v1
- Date: Thu, 31 Jul 2025 19:58:16 GMT
- Title: SHACL Validation under Graph Updates (Extended Paper)
- Authors: Shqiponja Ahmetaj, George Konstantinidis, Magdalena Ortiz, Paolo Pareti, Mantas Simkus
- Abstract summary: We present a SHACL-based update language that can capture intuitive and realistic modifications on RDF graphs. We study static validation under such updates: verifying whether every graph that validates a SHACL specification will still do so after applying a given update sequence. We show that static validation can be reduced to (un)satisfiability of constraints in (a minor extension of) SHACL.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: SHACL (SHApe Constraint Language) is a W3C standardized constraint language for RDF graphs. In this paper, we study SHACL validation in RDF graphs under updates. We present a SHACL-based update language that can capture intuitive and realistic modifications on RDF graphs and study the problem of static validation under such updates. This problem asks to verify whether every graph that validates a SHACL specification will still do so after applying a given update sequence. More importantly, it provides a basis for further services for reasoning about evolving RDF graphs. Using a regression technique that embeds the update actions into SHACL constraints, we show that static validation under updates can be reduced to (un)satisfiability of constraints in (a minor extension of) SHACL. We analyze the computational complexity of the static validation problem for SHACL and some key fragments. Finally, we present a prototype implementation that performs static validation and other static analysis tasks on SHACL constraints and demonstrate its behavior through preliminary experiments.
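The static-validation problem described in the abstract can be illustrated with a small self-contained toy in Python. This is only a sketch of the problem statement, not the paper's regression technique; the graph representation and the class and property names (`:Student`, `:enrolledIn`) are invented for illustration:

```python
# Toy illustration of SHACL validation under updates.
# A graph is a set of (subject, predicate, object) triples, and a single
# SHACL-style constraint requires every node of class :Student to have at
# least one :enrolledIn value (minCount 1).

def students(graph):
    """Nodes typed as :Student."""
    return {s for (s, p, o) in graph if p == "rdf:type" and o == ":Student"}

def validates(graph):
    """True iff every :Student has at least one :enrolledIn triple."""
    return all(
        any(p == ":enrolledIn" for (s, p, o) in graph if s == node)
        for node in students(graph)
    )

graph = {
    (":alice", "rdf:type", ":Student"),
    (":alice", ":enrolledIn", ":KR101"),
}
assert validates(graph)  # the initial graph satisfies the constraint

# An update action: delete all :enrolledIn triples.
updated = {t for t in graph if t[1] != ":enrolledIn"}
print(validates(updated))  # the update can break validation
```

Static validation asks the analogous question for *every* valid graph, not just one: whether any graph satisfying the constraints can be driven into violation by the given update sequence, which the paper answers by compiling the updates into the constraints themselves.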
Related papers
- RePaCA: Leveraging Reasoning Large Language Models for Static Automated Patch Correctness Assessment
We introduce RePaCA, a novel static APCA technique that leverages Large Language Models (LLMs) specialized in thinking tasks. Our approach achieves state-of-the-art performance, with 83.1% accuracy and an 84.8% F1-score.
arXiv Detail & Related papers (2025-07-30T11:21:09Z)
- xpSHACL: Explainable SHACL Validation using Retrieval-Augmented Generation and Large Language Models
Shapes Constraint Language (SHACL) is a powerful language for validating RDF data. This paper presents xpSHACL, an explainable SHACL validation system. It combines rule-based justification trees with retrieval-augmented generation (RAG) and large language models (LLMs) to produce detailed, multilanguage explanations for constraint violations.
arXiv Detail & Related papers (2025-07-11T09:18:41Z)
- Bayesian scaling laws for in-context learning
In-context learning (ICL) is a powerful technique for getting language models to perform complex tasks with no training updates.
We show that ICL approximates a Bayesian learner and develop a family of novel Bayesian scaling laws for ICL.
arXiv Detail & Related papers (2024-10-21T21:45:22Z)
- SHACL2FOL: An FOL Toolkit for SHACL Decision Problems
We introduce SHACL2FOL, the first automatic tool that translates SHACL documents into FOL sentences.
The tool computes the answer to the two static analysis problems of satisfiability and containment.
It also allows testing the validity of a graph with respect to a set of constraints.
arXiv Detail & Related papers (2024-06-12T09:20:25Z)
- Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning
We introduce a comprehensive typology of factual errors in generated chart captions.
A large-scale human annotation effort provides insight into the error patterns and frequencies in captions crafted by various chart captioning models.
Our analysis reveals that even state-of-the-art models, including GPT-4V, frequently produce captions laced with factual inaccuracies.
arXiv Detail & Related papers (2023-12-15T19:16:21Z)
- SaFormer: A Conditional Sequence Modeling Approach to Offline Safe Reinforcement Learning
Offline safe RL is of great practical relevance for deploying agents in real-world applications.
We present a novel offline safe RL approach referred to as SaFormer.
arXiv Detail & Related papers (2023-01-28T13:57:01Z)
- FIRE: A Failure-Adaptive Reinforcement Learning Framework for Edge Computing Migrations
FIRE is a framework that adapts to rare events by training an RL policy in an edge computing digital twin environment.
We propose ImRE, an importance sampling-based Q-learning algorithm, which samples rare events proportionally to their impact on the value function.
We show that FIRE reduces costs compared to vanilla RL and the greedy baseline in the event of failures.
arXiv Detail & Related papers (2022-09-28T19:49:39Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation
Test-time adaptation (TTA) addresses distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- A Review of SHACL: From Data Validation to Schema Reasoning for RDF Graphs
We present an introduction and a review of the Shapes Constraint Language (SHACL), the W3C recommendation language for validating RDF data.
A SHACL document describes a set of constraints on RDF nodes, and a graph is valid with respect to the document if its nodes satisfy these constraints.
arXiv Detail & Related papers (2021-12-02T17:28:45Z)
- Auditing AI models for Verified Deployment under Semantic Specifications
AuditAI bridges the gap between interpretable formal verification and scalability.
We show how AuditAI allows us to obtain controlled variations for verification and certified training while addressing the limitations of verifying using only pixel-space perturbations.
arXiv Detail & Related papers (2021-09-25T22:53:24Z)
- SHACL Satisfiability and Containment (Extended Paper)
The Shapes Constraint Language (SHACL) is a recent W3C recommendation language for validating RDF data.
In this paper, we undertake a thorough study of different features of non-recursive SHACL by providing a translation to a new first-order language, called SCL.
We study the interaction of SHACL features in this logic and provide the detailed map of decidability and complexity results of the aforementioned decision problems for different SHACL sublanguages.
arXiv Detail & Related papers (2020-08-31T14:52:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.