Data-driven Numerical Invariant Synthesis with Automatic Generation of
Attributes
- URL: http://arxiv.org/abs/2205.14943v2
- Date: Tue, 31 May 2022 16:14:17 GMT
- Title: Data-driven Numerical Invariant Synthesis with Automatic Generation of
Attributes
- Authors: Ahmed Bouajjani and Wael-Amine Boutglay and Peter Habermehl
- Abstract summary: We propose a data-driven algorithm for numerical invariant synthesis and verification.
The algorithm is based on the ICE-DT schema for learning decision trees from samples of positive and negative states.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a data-driven algorithm for numerical invariant synthesis and
verification. The algorithm is based on the ICE-DT schema for learning decision
trees from samples of positive and negative states and implications
corresponding to program transitions. The main issue we address is the
discovery of relevant attributes to be used in the learning process of
numerical invariants. We define a method for solving this problem guided by the
data sample. It is based on the construction of a separator that covers
positive states and excludes negative ones, consistent with the implications.
The separator is constructed using an abstract domain representation of convex
sets. The generalization mechanism of decision-tree learning, applied to the
constraints of the separator, allows the inference of general invariants that
are accurate enough to prove the targeted property. We implemented our
algorithm and demonstrated its efficiency.
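To make the attribute-generation idea concrete, here is a minimal Python sketch, not the authors' implementation. Everything in it is an illustrative assumption: states are integer tuples, the convex separator is approximated by a simple interval (box) abstract domain over the positive states, scikit-learn's decision tree stands in for the full ICE-DT learner, and the implication pair is handled naively by marking its consequent positive.

```python
# Minimal sketch: derive candidate attributes from a box "separator" over the
# positive states, then learn a decision tree over those attributes.
from sklearn.tree import DecisionTreeClassifier, export_text

def box_attributes(positives):
    """Candidate attributes: one lower and one upper bound per dimension,
    taken from the interval hull of the positive states."""
    dims = len(positives[0])
    attrs = []
    for i in range(dims):
        lo = min(s[i] for s in positives)
        hi = max(s[i] for s in positives)
        attrs.append((f"x{i} >= {lo}", lambda s, i=i, lo=lo: s[i] >= lo))
        attrs.append((f"x{i} <= {hi}", lambda s, i=i, hi=hi: s[i] <= hi))
    return attrs

def featurize(states, attrs):
    # Evaluate every attribute on every state to obtain boolean features.
    return [[int(pred(s)) for _, pred in attrs] for s in states]

# Toy data sample for a hypothetical two-variable loop: positive (reachable)
# states, negative (bad) states, one implication from a program transition.
positives = [(0, 0), (1, 1), (2, 2), (3, 3)]
negatives = [(5, 1), (1, 5)]
implications = [((3, 3), (4, 4))]

# Naive ICE handling: if an antecedent is positive, its consequent must be too.
positives += [post for pre, post in implications if pre in positives]

attrs = box_attributes(positives)
X = featurize(positives + negatives, attrs)
y = [1] * len(positives) + [0] * len(negatives)

tree = DecisionTreeClassifier().fit(X, y)
print(export_text(tree, feature_names=[name for name, _ in attrs]))
```

In the paper the separator is built over a richer abstract domain of convex sets and implications are treated as first-class constraints by the ICE-DT schema; the box domain and the positivity propagation above are deliberate simplifications.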
Related papers
- Bisimulation Learning [55.859538562698496]
We compute finite bisimulations of state transition systems with large, possibly infinite state spaces.
Our technique yields faster verification results than alternative state-of-the-art tools in practice.
arXiv Detail & Related papers (2024-05-24T17:11:27Z)
- Gram-Schmidt Methods for Unsupervised Feature Extraction and Selection [7.373617024876725]
We propose a Gram-Schmidt process over function spaces to detect and map out nonlinear dependencies.
We provide experimental results for synthetic and real-world benchmark datasets.
Surprisingly, our linear feature extraction algorithms are comparable to, and often outperform, several important nonlinear feature extraction methods (a generic sketch of Gram-Schmidt-based selection follows this entry).
arXiv Detail & Related papers (2023-11-15T21:29:57Z)
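The following is a generic, finite-dimensional sketch of Gram-Schmidt-style unsupervised feature selection with a greedy residual-norm criterion; it is an illustrative assumption of how such a procedure can look and does not reproduce the paper's function-space construction for nonlinear dependencies.

```python
# Generic sketch: greedily pick the feature column with the largest residual
# norm after projecting out the already-selected (orthogonalized) columns.
import numpy as np

def gram_schmidt_select(X, k):
    """Return indices of k columns of X selected greedily."""
    X = X - X.mean(axis=0)                     # center features
    residual = X.astype(float).copy()
    selected = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(residual, axis=0)))
        q = residual[:, j]
        q = q / (np.linalg.norm(q) + 1e-12)    # new orthonormal direction
        selected.append(j)
        # Project the new direction out of every remaining feature column.
        residual = residual - np.outer(q, q @ residual)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=100)  # near-duplicate feature
print(gram_schmidt_select(X, 3))                 # redundant column is avoided
```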
- Domain Generalization via Rationale Invariance [70.32415695574555]
This paper offers a new perspective to ease the challenge of domain generalization, which involves maintaining robust results even in unseen environments.
We propose treating the element-wise contributions to the final results as the rationale for making a decision and representing the rationale for each sample as a matrix.
Our experiments demonstrate that the proposed approach achieves competitive results across various datasets, despite its simplicity.
arXiv Detail & Related papers (2023-08-22T03:31:40Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- Equivariance Discovery by Learned Parameter-Sharing [153.41877129746223]
We study how to discover interpretable equivariances from data.
Specifically, we formulate this discovery process as an optimization problem over a model's parameter-sharing schemes.
Also, we theoretically analyze the method for Gaussian data and provide a bound on the mean squared gap between the studied discovery scheme and the oracle scheme.
arXiv Detail & Related papers (2022-04-07T17:59:19Z)
- A Generic Self-Supervised Framework of Learning Invariant Discriminative Features [9.614694312155798]
This paper proposes a generic SSL framework based on a constrained self-labelling assignment process.
The proposed training strategy outperforms a majority of state-of-the-art representation learning methods based on AE structures.
arXiv Detail & Related papers (2022-02-14T18:09:43Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, combining the reconstruction error with $l_{2,p}$-norm regularization (a generic form of such an objective is sketched below).
We present an efficient optimization algorithm to solve the proposed unsupervised model, and theoretically analyse its convergence and computational complexity.
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
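For orientation only, a hedged sketch of the kind of objective this title refers to; the exact formulation in the paper may differ. Writing $W \in \mathbb{R}^{d \times k}$ for the projection matrix with rows $w_i$, the row-wise $l_{2,p}$ norm is

$$\|W\|_{2,p} = \Big(\sum_{i=1}^{d} \|w_i\|_2^{\,p}\Big)^{1/p}, \qquad 0 < p \le 1,$$

and reconstruction-error-based formulations of sparse PCA for feature selection typically minimize an objective of the shape

$$\min_{W}\; \|X - X W W^{\top}\|_F^2 + \lambda\, \|W\|_{2,p}^{\,p},$$

often under an orthogonality constraint on $W$; small $p$ drives entire rows of $W$ to zero, so the corresponding features are discarded.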
- Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.