Bringing freedom in variable choice when searching counter-examples in
floating point programs
- URL: http://arxiv.org/abs/2002.12447v1
- Date: Thu, 27 Feb 2020 21:20:38 GMT
- Title: Bringing freedom in variable choice when searching counter-examples in
floating point programs
- Authors: Heytem Zitoun, Claude Michel, Laurent Michel, Michel Rueher
- Abstract summary: This paper focuses on search strategies for CSPs over floating-point constraint systems dedicated to program verification.
It introduces a new search heuristic based on the global number of occurrences of variables that outperforms state-of-the-art strategies.
- Score: 0.5161531917413706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Program verification techniques typically focus on finding counter-examples
that violate properties of a program. Constraint programming offers a
convenient way to verify programs by modeling their state transformations and
specifying searches that seek counter-examples. Floating-point computations
present additional challenges for verification given the semantic subtleties of
floating-point arithmetic. This paper focuses on search strategies for CSPs
over floating-point constraint systems dedicated to program verification. It
introduces a new search heuristic based on the global number of occurrences of
variables that outperforms state-of-the-art strategies. More importantly, it
demonstrates that a new technique that only branches on the input variables of
the verified program improves performance. It composes with a diversification
technique that prevents the selection of the same variable within a fixed
horizon, further improving performance and reducing disparities between
variable choice heuristics. The result is a robust methodology that can tailor
the search strategy to the sought properties of the counter-example.
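The abstract's three search ingredients (an occurrence-based variable choice heuristic, branching restricted to the program's input variables, and a diversification rule that forbids re-selecting a recently branched variable within a fixed horizon) can be illustrated with a minimal sketch. The Python below is a hypothetical illustration, not the authors' solver: the `Variable` class, `occurrence_count`, and the `horizon` value are assumptions, and a real floating-point CSP solver would split domains with care for rounding.
```python
from collections import deque

class Variable:
    """A hypothetical CSP variable with a floating-point interval domain."""
    def __init__(self, name, lo, hi, is_input=False):
        self.name, self.lo, self.hi, self.is_input = name, lo, hi, is_input

    def is_fixed(self):
        return self.lo == self.hi

def occurrence_count(var, constraints):
    """Global number of occurrences of `var` over the whole constraint system
    (constraints are represented here as lists of variable names)."""
    return sum(c.count(var.name) for c in constraints)

def select_variable(variables, constraints, recently_branched):
    """Pick the unfixed INPUT variable with the most occurrences, skipping any
    variable branched on within the diversification horizon."""
    candidates = [v for v in variables
                  if v.is_input and not v.is_fixed()
                  and v.name not in recently_branched]
    if not candidates:  # the horizon excluded every choice: fall back to all inputs
        candidates = [v for v in variables if v.is_input and not v.is_fixed()]
    if not candidates:
        return None
    return max(candidates, key=lambda v: occurrence_count(v, constraints))

# Diversification: remember the last `horizon` branching choices.
horizon = 3
recently_branched = deque(maxlen=horizon)

def branch(variables, constraints):
    """One branching step: choose a variable and a split point in its domain."""
    var = select_variable(variables, constraints, recently_branched)
    if var is None:
        return None
    recently_branched.append(var.name)
    mid = var.lo + (var.hi - var.lo) / 2.0  # naive split for illustration only
    return (var, mid)
```
Restricting branching to input variables is plausible here because, once the inputs are fixed, the remaining variables of the verified program are determined by its computations.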
Related papers
- Variable Substitution and Bilinear Programming for Aligning Partially Overlapping Point Sets [48.1015832267945]
This research presents a method that works by minimizing the objective function of the RPM algorithm.
A branch-and-bound (BnB) algorithm is devised, which solely branches over the parameters, thereby boosting convergence rate.
Empirical evaluations demonstrate better robustness of the proposed methodology against non-rigid deformation, positional noise, and outliers, when compared with prevailing state-of-the-art transformations.
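As a point of reference for the branch-and-bound idea (branching solely over the parameters), here is a generic, hypothetical 1-D sketch in Python; it is not the paper's registration-specific algorithm, and `lower_bound` stands in for a problem-specific bound.
```python
import heapq

def branch_and_bound(f, lower_bound, lo, hi, tol=1e-3):
    """Generic 1-D branch-and-bound over a parameter interval: keep a priority
    queue of sub-intervals ordered by their lower bound, split the most promising
    one, and prune intervals whose bound cannot beat the incumbent."""
    best_x, best_val = lo, f(lo)
    heap = [(lower_bound(lo, hi), lo, hi)]
    while heap:
        bound, a, b = heapq.heappop(heap)
        if bound >= best_val:          # prune: cannot improve the incumbent
            continue
        mid = (a + b) / 2.0
        if f(mid) < best_val:          # update incumbent
            best_x, best_val = mid, f(mid)
        if b - a > tol:                # branch solely over the parameter interval
            heapq.heappush(heap, (lower_bound(a, mid), a, mid))
            heapq.heappush(heap, (lower_bound(mid, b), mid, b))
    return best_x, best_val

# Toy usage: minimize (x-2)^2 on [-10, 10] with a valid interval lower bound.
f = lambda x: (x - 2.0) ** 2
lb = lambda a, b: 0.0 if a <= 2.0 <= b else min(f(a), f(b))
print(branch_and_bound(f, lb, -10.0, 10.0))
```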
arXiv Detail & Related papers (2024-05-14T13:28:57Z)
- Spurious Feature Eraser: Stabilizing Test-Time Adaptation for Vision-Language Foundation Model [86.9619638550683]
Vision-language foundation models have exhibited remarkable success across a multitude of downstream tasks due to their scalability on extensive image-text paired data.
However, these models display significant limitations when applied to downstream tasks, such as fine-grained image classification, as a result of "decision shortcuts".
arXiv Detail & Related papers (2024-03-01T09:01:53Z)
- Weakly Supervised Semantic Parsing with Execution-based Spurious Program Filtering [19.96076749160955]
We propose a domain-agnostic filtering mechanism based on program execution results.
We run a majority vote on these representations to identify and filter out programs with significantly different semantics from the other programs.
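A minimal sketch of execution-based majority-vote filtering, assuming candidate programs can be run on a shared set of inputs; the summary above mentions voting over execution-based representations, and here raw outputs stand in for those representations. Helper names such as `execute` and `filter_by_majority` are illustrative, not the paper's API.
```python
from collections import Counter

def execute(program, example):
    """Hypothetical executor: run a candidate program on one input example."""
    return program(example)

def filter_by_majority(programs, examples, min_agreement=0.5):
    """Keep programs whose outputs agree with the per-example majority on at
    least `min_agreement` of the examples; drop spurious outliers."""
    # Every program's output on every example.
    results = [[execute(p, ex) for ex in examples] for p in programs]
    # Per-example majority output across programs.
    majorities = [Counter(col).most_common(1)[0][0] for col in zip(*results)]
    kept = []
    for p, outs in zip(programs, results):
        agreement = sum(o == m for o, m in zip(outs, majorities)) / len(examples)
        if agreement >= min_agreement:
            kept.append(p)
    return kept

# Toy usage with lambdas standing in for candidate programs.
programs = [lambda x: x + 1, lambda x: x + 1, lambda x: x * 100]
print(len(filter_by_majority(programs, examples=[0, 1, 2])))  # -> 2
```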
arXiv Detail & Related papers (2023-11-02T11:45:40Z)
- Randomized Adversarial Style Perturbations for Domain Generalization [49.888364462991234]
We propose a novel domain generalization technique, referred to as Randomized Adversarial Style Perturbation (RASP)
The proposed algorithm perturbs the style of a feature in an adversarial direction towards a randomly selected class, and makes the model learn against being misled by the unexpected styles observed in unseen target domains.
We evaluate the proposed algorithm via extensive experiments on various benchmarks and show that our approach improves domain generalization performance, especially in large-scale benchmarks.
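A hedged sketch of one reading of the perturbation step: treat the per-channel mean and standard deviation of a feature map as its style, and move them along the gradient that makes a classifier favor a randomly selected class. This PyTorch snippet is illustrative only; the actual RASP formulation and training loop may differ.
```python
import torch
import torch.nn.functional as F

def adversarial_style_perturbation(features, classifier, num_classes, step_size=0.1):
    """Perturb per-channel style statistics (mean, std) of a feature map in the
    adversarial direction of a randomly chosen class. `classifier` is any
    differentiable head mapping a feature map to class logits."""
    b = features.shape[0]
    mu = features.mean(dim=(2, 3), keepdim=True)            # per-channel mean
    sigma = features.std(dim=(2, 3), keepdim=True) + 1e-6   # per-channel std
    normalized = (features - mu) / sigma                     # style-free content

    # Style parameters we will perturb.
    mu_p = mu.clone().detach().requires_grad_(True)
    sigma_p = sigma.clone().detach().requires_grad_(True)

    # Randomly selected target class for each sample.
    target = torch.randint(0, num_classes, (b,), device=features.device)

    logits = classifier(normalized * sigma_p + mu_p)
    loss = F.cross_entropy(logits, target)
    loss.backward()

    # Move the style towards the random class (descend its loss).
    with torch.no_grad():
        mu_adv = mu_p - step_size * mu_p.grad.sign()
        sigma_adv = sigma_p - step_size * sigma_p.grad.sign()
    return (normalized * sigma_adv + mu_adv).detach()
```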
arXiv Detail & Related papers (2023-04-04T17:07:06Z)
- Budgeted Classification with Rejection: An Evolutionary Method with Multiple Objectives [0.0]
Budgeted, sequential classifiers (BSCs) process inputs through a sequence of partial feature acquisition and evaluation steps.
This allows for an efficient evaluation of inputs that prevents unneeded feature acquisition.
We propose a problem-specific genetic algorithm to build budgeted, sequential classifiers with confidence-based reject options.
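A minimal sketch of how a budgeted, sequential classifier with a confidence-based reject option might evaluate one input; the stage structure, thresholds, and costs are assumptions, and the paper's genetic algorithm (not shown) is what selects and tunes such stages.
```python
def sequential_classify(x, stages, confidence_threshold=0.9, budget=10.0):
    """Each stage is (cost, acquire, predict): `acquire` returns the extra
    features bought at that stage, `predict` returns (label, confidence) from
    all features seen so far. Stops early once confident enough."""
    spent, acquired = 0.0, []
    for cost, acquire, predict in stages:
        if spent + cost > budget:            # cannot afford the next acquisition
            break
        spent += cost
        acquired.extend(acquire(x))          # partial feature acquisition
        label, confidence = predict(acquired)
        if confidence >= confidence_threshold:
            return label, spent              # confident enough: stop early
    return None, spent                        # reject: defer to a fallback

# Toy usage: two stages, the second one more expensive but more confident.
stages = [
    (1.0, lambda x: [x[0]],       lambda f: (int(f[0] > 0), 0.6)),
    (3.0, lambda x: [x[1], x[2]], lambda f: (int(sum(f) > 0), 0.95)),
]
print(sequential_classify([0.2, -1.0, 2.5], stages))   # -> (1, 4.0)
```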
arXiv Detail & Related papers (2022-05-01T22:05:16Z)
- Flipping the switch on local exploration: Genetic Algorithms with Reversals [0.0]
The authors show that gradient-free search techniques are suitable for providing an optimal solution in the discrete domain.
They also show that the use of multiple local searches can improve performance.
The proposed GA variants are observed to have the lowest average cost across all benchmarks, including the proposed problem, and IC performs better than its constituents.
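A rough, hypothetical sketch of a genetic algorithm hybridized with a short local search applied to each offspring, standing in here for the paper's reversal-based local exploration; the actual operators and the IC combination are not reproduced.
```python
import random

def ga_with_local_search(cost, n_bits=20, pop_size=30, generations=50,
                         local_steps=5, seed=0):
    """Minimize `cost` over bitstrings with a GA whose children are refined by
    a short single-bit-flip hill climb (the local search step)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def local_search(ind):
        for _ in range(local_steps):          # greedy bit-flip descent
            i = rng.randrange(n_bits)
            flipped = ind[:]
            flipped[i] ^= 1
            if cost(flipped) < cost(ind):
                ind = flipped
        return ind

    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.1:             # mutation
                child[rng.randrange(n_bits)] ^= 1
            children.append(local_search(child))
        pop = parents + children
    return min(pop, key=cost)

# Toy usage: minimize the number of zero bits (optimum is all ones).
best = ga_with_local_search(lambda ind: ind.count(0))
print(sum(best))  # close to 20
```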
arXiv Detail & Related papers (2022-02-02T08:27:11Z)
- Joint Contrastive Learning with Infinite Possibilities [114.45811348666898]
This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling.
We derive a particular form of contrastive loss named Joint Contrastive Learning (JCL)
arXiv Detail & Related papers (2020-09-30T16:24:21Z)
- Variance-Reduced Off-Policy Memory-Efficient Policy Search [61.23789485979057]
Off-policy policy optimization is a challenging problem in reinforcement learning.
The proposed off-policy algorithms are memory-efficient and capable of learning from off-policy samples.
arXiv Detail & Related papers (2020-09-14T16:22:46Z)
- Change Point Detection in Time Series Data using Autoencoders with a Time-Invariant Representation [69.34035527763916]
Change point detection (CPD) aims to locate abrupt property changes in time series data.
Recent CPD methods demonstrated the potential of using deep learning techniques, but often lack the ability to identify more subtle changes in the autocorrelation statistics of the signal.
We employ an autoencoder-based methodology with a novel loss function, through which the used autoencoders learn a partially time-invariant representation that is tailored for CPD.
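A small sketch of the detection step under the assumption that a trained encoder is available: encode sliding windows and score consecutive windows by the dissimilarity of their latent representations. The `encode` callable, the cosine dissimilarity, and the toy data are assumptions, not the paper's exact loss or post-processing.
```python
import numpy as np

def change_point_scores(series, encode, window=50, step=10):
    """Encode sliding windows into latent features and score each position by
    the dissimilarity between consecutive latent vectors; peaks suggest change
    points. `encode` stands in for a trained autoencoder's encoder."""
    starts = range(0, len(series) - window + 1, step)
    latents = np.array([encode(series[s:s + window]) for s in starts])
    a, b = latents[:-1], latents[1:]
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12)
    return 1.0 - cos   # one score per pair of consecutive windows

# Toy usage with a hand-crafted "encoder" (mean and variance of the window).
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])
scores = change_point_scores(series, encode=lambda w: np.array([w.mean(), w.var()]))
print(int(np.argmax(scores)) * 10)   # window start near the mean shift at t=500
```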
arXiv Detail & Related papers (2020-08-21T15:03:21Z)
- Learning Differentiable Programs with Admissible Neural Heuristics [43.54820901841979]
We study the problem of learning differentiable functions expressed as programs in a domain-specific language.
We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax.
Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs.
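A generic A* sketch of searching a weighted graph whose paths encode program derivations; in the paper the heuristic comes from neural relaxations of unfinished program parts, while here it is just a user-supplied callable (a zero heuristic in the toy usage, which reduces to Dijkstra).
```python
import heapq

def a_star(start, is_goal, expand, heuristic):
    """Search a weighted graph of (partial) programs guided by a heuristic `h`
    that underestimates the remaining cost. `expand(node)` yields
    (child, edge_cost) pairs; nodes must be hashable."""
    frontier = [(heuristic(start), 0.0, 0, start, [start])]  # (f, g, tiebreak, node, path)
    seen, counter = {}, 1
    while frontier:
        f, g, _, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return path, g
        for child, edge_cost in expand(node):
            new_g = g + edge_cost
            if child not in seen or new_g < seen[child]:
                seen[child] = new_g
                heapq.heappush(frontier, (new_g + heuristic(child), new_g,
                                          counter, child, path + [child]))
                counter += 1
    return None, float("inf")

# Toy usage on a small explicit graph (zero heuristic = Dijkstra).
graph = {"S": [("A", 1.0), ("B", 4.0)], "A": [("G", 5.0)], "B": [("G", 1.0)], "G": []}
path, cost = a_star("S", lambda n: n == "G", lambda n: graph[n], lambda n: 0.0)
print(path, cost)  # ['S', 'B', 'G'] 5.0
```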
arXiv Detail & Related papers (2020-07-23T16:07:39Z)