Measuring Inconsistency in Declarative Process Specifications
- URL: http://arxiv.org/abs/2206.07080v1
- Date: Tue, 14 Jun 2022 18:08:49 GMT
- Title: Measuring Inconsistency in Declarative Process Specifications
- Authors: Carl Corea, John Grant, Matthias Thimm
- Abstract summary: Existing inconsistency measures for classical logic cannot provide a meaningful assessment of inconsistency in LTL in general, as they cannot adequately handle temporal operators.
We propose a novel paraconsistent semantics as a framework for inconsistency measurement.
We show how these measures can be applied to declarative process models and investigate the computational complexity of the introduced approach.
- Score: 8.048412266667176
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the problem of measuring inconsistency in declarative process
specifications, with an emphasis on linear temporal logic on fixed traces
(LTLff). As we will show, existing inconsistency measures for classical logic
cannot provide a meaningful assessment of inconsistency in LTL in general, as
they cannot adequately handle the temporal operators. We therefore propose a
novel paraconsistent semantics as a framework for inconsistency measurement. We
then present two new inconsistency measures based on these semantics and show
that they satisfy important desirable properties. We show how these measures
can be applied to declarative process models and investigate the computational
complexity of the introduced approach.
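As a rough illustration of the setting only (not of the paraconsistent semantics or the measures proposed in the paper), the sketch below brute-forces whether a tiny Declare-style LTLff specification admits any satisfying trace of a fixed length and enumerates its minimal inconsistent subsets, the kind of classical-style accounting the abstract argues is too coarse for temporal operators. The activity alphabet, constraint set, and trace length are hypothetical choices made for the example.

```python
# Minimal sketch: brute-force satisfiability of a toy Declare-style LTLff
# specification over traces of a fixed length, plus enumeration of its
# minimal inconsistent subsets (a classical-style inconsistency measure).
from itertools import combinations, product

ACTIVITIES = ["a", "b"]
TRACE_LEN = 3  # fixed trace length, as in LTL on fixed traces

# Each constraint is a predicate over a whole trace (a tuple of activities).
CONSTRAINTS = {
    "existence(a)":  lambda t: "a" in t,        # a occurs at least once
    "absence(a)":    lambda t: "a" not in t,    # a never occurs
    "response(a,b)": lambda t: all(             # every a is eventually followed by b
        any(t[j] == "b" for j in range(i + 1, len(t)))
        for i, x in enumerate(t) if x == "a"
    ),
}

def satisfiable(names):
    """True iff some trace of length TRACE_LEN satisfies all named constraints."""
    return any(
        all(CONSTRAINTS[n](trace) for n in names)
        for trace in product(ACTIVITIES, repeat=TRACE_LEN)
    )

def minimal_inconsistent_subsets(names):
    """All subset-minimal sets of constraints with no satisfying trace."""
    mis = []
    for size in range(1, len(names) + 1):
        for subset in combinations(names, size):
            if not satisfiable(subset) and not any(set(m) <= set(subset) for m in mis):
                mis.append(subset)
    return mis

if __name__ == "__main__":
    names = list(CONSTRAINTS)
    print("specification satisfiable:", satisfiable(names))
    print("minimal inconsistent subsets:", minimal_inconsistent_subsets(names))
```

On this toy input the specification is unsatisfiable and the single minimal inconsistent subset is {existence(a), absence(a)}, so a naive MI-count measure would assign the value 1; the paper's point is that such classical measures miss the finer temporal structure that its paraconsistent semantics is designed to capture.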
Related papers
- On Evaluating Performance of LLM Inference Serving Systems [11.712948114304925]
We identify recurring anti-patterns across three key dimensions: Baseline Fairness, Evaluation setup, and Metric Design. These anti-patterns are uniquely problematic for Large Language Model (LLM) inference due to its dual-phase nature. We provide a comprehensive checklist derived from our analysis, establishing a framework for recognizing and avoiding these anti-patterns.
arXiv Detail & Related papers (2025-07-11T20:58:21Z) - Exploring LLM Reasoning Through Controlled Prompt Variations [0.9217021281095907]
We evaluate how well state-of-the-art models maintain logical consistency and correctness when confronted with four categories of prompt perturbations.
Our experiments, conducted on thirteen open-source and closed-source LLMs, reveal that introducing irrelevant context within the model's context window significantly degrades performance.
Certain perturbations inadvertently trigger chain-of-thought-like reasoning behaviors, even without explicit prompting.
arXiv Detail & Related papers (2025-04-02T20:18:50Z) - Introducing Verification Task of Set Consistency with Set-Consistency Energy Networks [4.545178162750511]
We introduce the task of set-consistency verification, an extension of natural language inference (NLI).
We present the Set-Consistency Energy Network (SC-Energy), a novel model that employs a contrastive loss framework to learn the compatibility among a collection of statements.
Our approach not only efficiently verifies inconsistencies and pinpoints the specific statements responsible for logical contradictions, but also significantly outperforms existing methods.
arXiv Detail & Related papers (2025-03-12T05:11:11Z) - Counterfactual-Consistency Prompting for Relative Temporal Understanding in Large Language Models [24.586475741345616]
We tackle the issue of temporal inconsistency in large language models (LLMs) by proposing a novel counterfactual prompting approach.
Our method generates counterfactual questions and enforces collective constraints, enhancing the model's consistency.
We evaluate our method on multiple datasets, demonstrating significant improvements in event ordering for explicit and implicit events and temporal commonsense understanding.
arXiv Detail & Related papers (2025-02-17T04:37:07Z) - Improving Uncertainty Quantification in Large Language Models via Semantic Embeddings [11.33157177182775]
Accurately quantifying uncertainty in large language models (LLMs) is crucial for their reliable deployment.
Current state-of-the-art methods for measuring semantic uncertainty in LLMs rely on strict bidirectional entailment criteria.
We propose a novel approach that leverages semantic embeddings to achieve smoother and more robust estimation of semantic uncertainty.
arXiv Detail & Related papers (2024-10-30T04:41:46Z) - Localizing Factual Inconsistencies in Attributable Text Generation [91.981439746404]
We introduce QASemConsistency, a new formalism for localizing factual inconsistencies in attributable text generation.
We first demonstrate the effectiveness of the QASemConsistency methodology for human annotation.
We then implement several methods for automatically detecting localized factual inconsistencies.
arXiv Detail & Related papers (2024-10-09T22:53:48Z) - Noncontextuality inequalities for prepare-transform-measure scenarios [0.0]
We show how linear quantifier elimination can be used to compute a polytope of correlations consistent with generalized noncontextuality.
We also give a simple algorithm for computing all the linear operational identities holding among a given set of states, of transformations, or of measurements.
arXiv Detail & Related papers (2024-07-12T18:20:41Z) - Uncertainty Quantification for In-Context Learning of Large Language Models [52.891205009620364]
In-context learning has emerged as a groundbreaking ability of Large Language Models (LLMs).
We propose a novel formulation and corresponding estimation method to quantify both types of uncertainties.
The proposed method offers an unsupervised way to understand the prediction of in-context learning in a plug-and-play fashion.
arXiv Detail & Related papers (2024-02-15T18:46:24Z) - Object-Centric Conformance Alignments with Synchronization (Extended Version) [57.76661079749309]
We present a new formalism that combines the ability of object-centric Petri nets to capture one-to-many relations and the one of Petri nets with identifiers to compare and synchronize objects based on their identity.
We propose a conformance checking approach for such nets based on an encoding in satisfiability modulo theories (SMT).
arXiv Detail & Related papers (2023-12-13T21:53:32Z) - Threshold-Consistent Margin Loss for Open-World Deep Metric Learning [42.03620337000911]
Existing losses used in deep metric learning (DML) for image retrieval often lead to highly non-uniform intra-class and inter-class representation structures.
Inconsistency often complicates the threshold selection process when deploying commercial image retrieval systems.
We propose a novel variance-based metric called Operating-Point-Inconsistency-Score (OPIS) that quantifies the variance in the operating characteristics across classes.
arXiv Detail & Related papers (2023-07-08T21:16:41Z) - Interpretable Automatic Fine-grained Inconsistency Detection in Text Summarization [56.94741578760294]
We propose the task of fine-grained inconsistency detection, the goal of which is to predict the fine-grained types of factual errors in a summary.
Motivated by how humans inspect factual inconsistency in summaries, we propose an interpretable fine-grained inconsistency detection model, FineGrainFact.
arXiv Detail & Related papers (2023-05-23T22:11:47Z) - Conformance Checking with Uncertainty via SMT (Extended Version) [66.58864135810981]
We show how to solve the problem of checking conformance of uncertain logs against data-aware reference processes.
Our approach is modular, in that it homogeneously accommodates for different types of uncertainty.
We show the correctness of our approach and witness feasibility through a proof-of-concept implementation.
arXiv Detail & Related papers (2022-06-15T11:39:45Z) - The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation [23.711953448400514]
We inspect to what extent neural language models (LMs) exhibit uncertainty over such analyses.
We find that LMs can track multiple analyses simultaneously.
As a response to disambiguating cues, the LMs often select the correct interpretation, but occasional errors point to potential areas of improvement.
arXiv Detail & Related papers (2021-09-16T10:27:05Z) - Modeling Inter-Aspect Dependencies with a Non-temporal Mechanism for Aspect-Based Sentiment Analysis [70.22725610210811]
We propose a novel non-temporal mechanism to enhance the ABSA task through modeling inter-aspect dependencies.
We focus on the well-known class imbalance issue on the ABSA task and address it by down-weighting the loss assigned to well-classified instances.
arXiv Detail & Related papers (2020-08-12T08:50:09Z)