PARF: An Adaptive Abstraction-Strategy Tuner for Static Analysis
- URL: http://arxiv.org/abs/2505.13229v3
- Date: Mon, 09 Jun 2025 07:30:10 GMT
- Title: PARF: An Adaptive Abstraction-Strategy Tuner for Static Analysis
- Authors: Zhongyi Wang, Mingshuai Chen, Tengjie Lin, Linyu Yang, Junhao Zhuo, Qiuye Wang, Shengchao Qin, Xiao Yi, Jianwei Yin
- Abstract summary: Parf is a toolkit for adaptively tuning abstraction strategies of static program analyzers. It is implemented on top of Frama-C/Eva - an off-the-shelf open-source static analyzer for C programs.
- Score: 12.761648473972873
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We launch Parf - a toolkit for adaptively tuning abstraction strategies of static program analyzers in a fully automated manner. Parf models various types of external parameters (encoding abstraction strategies) as random variables subject to probability distributions over latticed parameter spaces. It incrementally refines the probability distributions based on accumulated intermediate results generated by repeatedly sampling and analyzing, thereby ultimately yielding a set of highly accurate abstraction strategies. Parf is implemented on top of Frama-C/Eva - an off-the-shelf open-source static analyzer for C programs. Parf provides a web-based user interface facilitating the intuitive configuration of static analyzers and visualization of dynamic distribution refinement of the abstraction strategies. It further supports the identification of dominant parameters in Frama-C/Eva analysis. Benchmark experiments and a case study demonstrate the competitive performance of Parf for analyzing complex, large-scale real-world programs.
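The sample-and-refine loop described in the abstract can be sketched in a few lines. The sketch below is an illustrative assumption, not Parf's actual algorithm or interface: the parameter names, domains, the `precision` stub, and the refinement rule are all hypothetical stand-ins for running Frama-C/Eva and scoring its results.

```python
import random

# Hypothetical sketch of Parf-style adaptive tuning. Each abstraction
# parameter takes values in an ordered (latticed) domain; we keep a
# probability distribution per parameter and refine it from samples.

PARAM_DOMAINS = {           # illustrative Frama-C/Eva-like parameter names
    "slevel": [0, 10, 100, 1000],
    "ilevel": [8, 16, 32, 64],
}

def init_dists():
    # Start from a uniform distribution over each parameter's domain.
    return {p: [1.0 / len(vs)] * len(vs) for p, vs in PARAM_DOMAINS.items()}

def sample(dists):
    # Draw one concrete abstraction strategy from the current distributions.
    return {p: random.choices(PARAM_DOMAINS[p], weights=probs)[0]
            for p, probs in dists.items()}

def precision(cfg):
    # Stand-in for running the analyzer and scoring its accuracy
    # (e.g. fewer emitted alarms); here, larger values score higher.
    return sum(cfg.values())

def refine(dists, cfg, lr=0.2):
    # Shift probability mass toward the values of the best sampled config.
    for p, probs in dists.items():
        i = PARAM_DOMAINS[p].index(cfg[p])
        dists[p] = [(1 - lr) * q + (lr if j == i else 0.0)
                    for j, q in enumerate(probs)]

def tune(rounds=20, samples_per_round=5):
    dists = init_dists()
    best = None
    for _ in range(rounds):
        cfgs = [sample(dists) for _ in range(samples_per_round)]
        round_best = max(cfgs, key=precision)
        if best is None or precision(round_best) > precision(best):
            best = round_best
        refine(dists, round_best)
    return best
```

The key design point mirrored here is that refinement acts on distributions over the parameter lattice rather than on single configurations, so early bad samples do not lock in a poor strategy.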
Related papers
- Matching-Based Few-Shot Semantic Segmentation Models Are Interpretable by Design [8.993770750003673]
Few-Shot Semantic Segmentation (FSS) models achieve strong performance in segmenting novel classes with minimal labeled examples. This paper introduces the first dedicated method for interpreting matching-based FSS models. Our Affinity Explainer approach extracts attribution maps that highlight which pixels in support images contribute most to query segmentation predictions.
arXiv Detail & Related papers (2025-11-22T19:22:10Z)
- Boosting Pointer Analysis With Large Language Model-Enhanced Allocation Function Detection [17.94389997355635]
Existing approaches largely overlook custom allocators, leading to coarse aliasing and reduced analysis precision. We present AFD, a novel technique that enhances pointer analysis by automatically identifying and modeling custom allocation functions. We evaluate AFD on 15 real-world C projects, identifying over 600 custom AFs.
arXiv Detail & Related papers (2025-09-26T16:08:58Z)
- Sampling from Gaussian Processes: A Tutorial and Applications in Global Sensitivity Analysis and Optimization [2.6999000177990924]
We present two notable sampling methods for generating posterior samples from Gaussian processes (GPs). We detail how the generated samples can be applied in global sensitivity analysis (GSA), single-objective optimization, and multi-objective optimization.
arXiv Detail & Related papers (2025-07-19T20:36:38Z)
- Continual Adaptation: Environment-Conditional Parameter Generation for Object Detection in Dynamic Scenarios [54.58186816693791]
Environments constantly change over time and space, posing significant challenges for object detectors trained under a closed-set assumption. We propose a new mechanism that converts the fine-tuning process into specific-parameter generation. In particular, we first design a dual-path LoRA-based domain-aware adapter that disentangles features into domain-invariant and domain-specific components.
arXiv Detail & Related papers (2025-06-30T17:14:12Z)
- ALoRE: Efficient Visual Adaptation via Aggregating Low Rank Experts [71.91042186338163]
ALoRE is a novel PETL method that reuses the hypercomplex parameterized space constructed by the Kronecker product to Aggregate Low Rank Experts. Thanks to the artful design, ALoRE maintains negligible extra parameters and can be effortlessly merged into the frozen backbone.
arXiv Detail & Related papers (2024-12-11T12:31:30Z)
- Unified Convergence Analysis for Score-Based Diffusion Models with Deterministic Samplers [49.1574468325115]
We introduce a unified convergence analysis framework for deterministic samplers.
Our framework achieves an iteration complexity of $\tilde{O}(d^2/\epsilon)$.
We also provide a detailed analysis of Denoising Diffusion Implicit Models (DDIM)-type samplers.
arXiv Detail & Related papers (2024-10-18T07:37:36Z)
- PCF-Lift: Panoptic Lifting by Probabilistic Contrastive Fusion [80.79938369319152]
We design a new pipeline, coined PCF-Lift, based on our Probabilistic Contrastive Fusion (PCF).
PCF-Lift significantly outperforms the state-of-the-art methods on widely used benchmarks, including the ScanNet dataset and the Messy Room dataset (4.4% improvement in scene-level PQ).
arXiv Detail & Related papers (2024-10-14T16:06:59Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- A Closer Look at Parameter-Efficient Tuning in Diffusion Models [39.52999446584842]
Large-scale diffusion models like Stable Diffusion are powerful and find various real-world applications.
We investigate parameter-efficient tuning in large diffusion models by inserting small learnable modules.
arXiv Detail & Related papers (2023-03-31T16:23:29Z)
- Smoothness Analysis for Probabilistic Programs with Application to Optimised Variational Inference [13.836565669337057]
We present a static analysis for discovering differentiable or more generally smooth parts of a given probabilistic program.
We show how the analysis can be used to improve the pathwise gradient estimator.
arXiv Detail & Related papers (2022-08-22T18:18:32Z)
- Finite-Sum Coupled Compositional Stochastic Optimization: Theory and Applications [43.48388050033774]
This paper provides a comprehensive analysis of a simple algorithm for both non-convex and convex objectives.
Our analysis also exhibits new insights for improving the practical implementation by sampling batches of equal size for the outer and inner levels.
arXiv Detail & Related papers (2022-02-24T22:39:35Z)
- PnP-DETR: Towards Efficient Visual Analysis with Transformers [146.55679348493587]
Recently, DETR pioneered solving vision tasks with transformers; it directly translates the image feature map into the object detection result.
Recent transformer-based image recognition models show consistent efficiency gains.
arXiv Detail & Related papers (2021-09-15T01:10:30Z)
- Revisiting the Sample Complexity of Sparse Spectrum Approximation of Gaussian Processes [60.479499225746295]
We introduce a new scalable approximation for Gaussian processes with provable guarantees which hold simultaneously over its entire parameter space.
Our approximation is obtained from an improved sample complexity analysis for sparse spectrum Gaussian processes (SSGPs).
arXiv Detail & Related papers (2020-11-17T05:41:50Z)
- Finding Influential Instances for Distantly Supervised Relation Extraction [42.94953922808431]
This work proposes a novel model-agnostic instance-sampling method for distant supervision (DS) based on the influence function (IF).
Our method identifies favorable/unfavorable instances in the bag based on the IF, then performs dynamic instance sampling.
Experiments show that REIF outperforms a series of baselines with more complicated architectures.
arXiv Detail & Related papers (2020-09-17T02:02:07Z)
- Controlling for sparsity in sparse factor analysis models: adaptive latent feature sharing for piecewise linear dimensionality reduction [2.896192909215469]
We propose a simple and tractable parametric feature allocation model which can address key limitations of current latent feature decomposition techniques.
We derive a novel adaptive factor analysis (aFA), as well as an adaptive probabilistic principal component analysis (aPPCA), capable of flexible structure discovery and dimensionality reduction.
We show that aPPCA and aFA can infer interpretable high-level features both when applied to raw MNIST and when used to interpret autoencoder features.
arXiv Detail & Related papers (2020-06-22T16:09:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.