Rule-based Graph Repair using Minimally Restricted Consistency-Improving
Transformations
- URL: http://arxiv.org/abs/2307.09150v1
- Date: Tue, 18 Jul 2023 11:20:54 GMT
- Title: Rule-based Graph Repair using Minimally Restricted Consistency-Improving
Transformations
- Authors: Alexander Lauer
- Abstract summary: We introduce new notions of consistency, which we call consistency-maintaining and consistency-increasing transformations and rules.
We present a rule-based graph repair approach that is able to repair so-called circular conflict-free constraints.
- Score: 65.268245109828
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Model-driven software engineering is a suitable method for dealing with the
ever-increasing complexity of software development processes. Graphs and graph
transformations have proven useful for representing such models and changes to
them. These models must satisfy certain sets of constraints. An example are the
multiplicities of a class structure. During the development process, a change
to a model may result in an inconsistent model that must at some point be
repaired. This problem is called model repair. In particular, we will consider
rule-based graph repair, which is defined as follows: Given a graph $G$, a
constraint $c$ such that $G$ does not satisfy $c$, and a set of rules
$\mathcal{R}$, use the rules of $\mathcal{R}$ to transform $G$ into a graph that satisfies $c$.
Known notions of consistency have either treated consistency as a binary
property (a graph either satisfies a constraint $c$ or it does not) or have
only counted the violations of the first graph of a constraint. In this
thesis, we introduce new notions of consistency, which we call
consistency-maintaining and consistency-increasing transformations and rules,
respectively. These are based on the observation that a constraint can be
satisfied up to a certain nesting level.
We present constructions for direct consistency-maintaining and direct
consistency-increasing application conditions, respectively. Finally, we
present a rule-based graph repair approach that is able to repair so-called
\emph{circular conflict-free constraints}, and so-called circular conflict-free
sets of constraints. Intuitively, a set of constraints $C$ is circular
conflict-free if there is an ordering $c_1, \ldots, c_n$ of all constraints of
$C$ such that there is no $j < i$ for which a repair of $c_i$ at a graph
satisfying $c_j$ leads to a graph that no longer satisfies $c_j$.
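The repair setting described in the abstract (given a graph $G$, a constraint $c$, and rules $\mathcal{R}$, rewrite $G$ until $c$ holds, processing an ordered, conflict-free list of constraints) can be sketched as a toy loop. The edge-set graph model, the clique constraint, and the edge-adding rule below are hypothetical illustrations, not the thesis's actual constructions or application conditions:

```python
# Toy sketch of rule-based graph repair over an ordered constraint list.
# A graph is a set of frozenset edges; a rule returns a new graph when
# applicable, or None when it cannot be applied.

def repair(graph, constraints, rules, max_steps=100):
    """Apply rules until every constraint holds, visiting constraints
    in the given order c_1, ..., c_n (assumed circular conflict-free,
    so repairing c_i never breaks an earlier c_j, j < i)."""
    for check in constraints:
        steps = 0
        while not check(graph) and steps < max_steps:
            for rule in rules:
                new_graph = rule(graph)
                if new_graph is not None:  # rule was applicable
                    graph = new_graph
                    break
            else:
                return None                # no rule applicable: repair fails
            steps += 1
        if not check(graph):
            return None
    return graph

# Example constraint: the three nodes must form a clique.
nodes = {1, 2, 3}

def clique(g):
    return all(frozenset({a, b}) in g
               for a in nodes for b in nodes if a < b)

# Example rule: add one missing edge, if any.
def add_missing_edge(g):
    for a in nodes:
        for b in nodes:
            if a < b and frozenset({a, b}) not in g:
                return g | {frozenset({a, b})}
    return None  # not applicable: graph is already a clique

repaired = repair({frozenset({1, 2})}, [clique], [add_missing_edge])
assert repaired is not None and clique(repaired)
```

The ordering of the constraint list matters: with a circular conflict-free ordering, each constraint needs to be repaired only once, since later repairs do not undo earlier ones.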
Related papers
- Monge-Kantorovich Fitting With Sobolev Budgets [6.748324975906262]
We quantify the performance of the approximation with the Monge-Kantorovich $p$-cost.
We may then reformulate the problem as minimizing a functional $\mathscr{J}_p(f)$ under a constraint on the Sobolev budget.
arXiv Detail & Related papers (2024-09-25T01:30:16Z) - Joint Learning of Linear Dynamical Systems under Smoothness Constraints [5.2395896768723045]
We consider the problem of joint learning of multiple linear dynamical systems.
In particular, we show conditions under which the mean-squared error (MSE) bound converges to zero as $m$ increases.
arXiv Detail & Related papers (2024-06-03T08:29:42Z) - Using application conditions to rank graph transformations for graph repair [42.040825683177175]
We present an approach to consistency as a graduated property.
This allows living with inconsistencies for a while and repairing them when necessary.
An initial evaluation shows that graph repair can be well supported by rules with these new types of application conditions.
arXiv Detail & Related papers (2024-05-14T17:37:01Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - Greedy Pruning with Group Lasso Provably Generalizes for Matrix Sensing [30.508036898655114]
Pruning schemes have been widely used in practice to reduce the complexity of trained models with a massive number of parameters.
Running gradient descent in the absence of regularization results in models which are not suitable for greedy pruning, i.e., many columns could have their $\ell$ norm comparable to that of the maximum.
Our results provide the first rigorous insights on why greedy pruning + fine-tuning leads to smaller models which also generalize well.
arXiv Detail & Related papers (2023-03-20T21:05:44Z) - Causal Bandits for Linear Structural Equation Models [58.2875460517691]
This paper studies the problem of designing an optimal sequence of interventions in a causal graphical model.
It is assumed that the graph's structure is known and has $N$ nodes.
Two algorithms are proposed for the frequentist (UCB-based) and Bayesian settings.
arXiv Detail & Related papers (2022-08-26T16:21:31Z) - Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov
Random Fields [51.07460861448716]
This paper presents a convex-analytic framework to learn from data.
We show that a triangular convexity decomposition is guaranteed by a transform of the matrix corresponding to its upper part.
arXiv Detail & Related papers (2021-09-17T17:46:12Z) - Intervention Efficient Algorithms for Approximate Learning of Causal
Graphs [22.401163479802094]
We study the problem of learning the causal relationships between a set of observed variables in the presence of latents.
Our goal is to recover the directions of all causal or ancestral relations in $G$, via a minimum cost set of interventions.
Our algorithms combine work on efficient intervention design and the design of low-cost separating set systems.
arXiv Detail & Related papers (2020-12-27T17:08:46Z) - Scalable Deep Generative Modeling for Sparse Graphs [105.60961114312686]
Existing deep neural methods require $\Omega(n^2)$ complexity by building up the adjacency matrix.
We develop a novel autoregressive model, named BiGG, that utilizes this sparsity to avoid generating the full adjacency matrix.
During training this autoregressive model can be parallelized with $O(\log n)$ synchronization stages.
arXiv Detail & Related papers (2020-06-28T04:37:57Z) - Model-Based Reinforcement Learning with Value-Targeted Regression [48.92439657407732]
We focus on finite-horizon episodic RL where the transition model $P$ belongs to a known family of models $\mathcal{P}$.
We derive a bound on the regret, which, in the special case of linear mixtures, takes the form $\tilde{\mathcal{O}}(d\sqrt{H^3 T})$.
arXiv Detail & Related papers (2020-06-01T17:47:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.