Flow Sensitivity without Control Flow Graph: An Efficient Andersen-Style Flow-Sensitive Pointer Analysis
- URL: http://arxiv.org/abs/2508.01974v1
- Date: Mon, 04 Aug 2025 01:20:54 GMT
- Title: Flow Sensitivity without Control Flow Graph: An Efficient Andersen-Style Flow-Sensitive Pointer Analysis
- Authors: Jiahao Zhang, Xiao Cheng, Yuxiang Lei,
- Abstract summary: Flow-sensitive pointer analysis is extensively used in alias analysis, taint analysis, program understanding, compiler optimization, etc. Existing flow-sensitive pointer analysis approaches, which are conducted based on control flow graphs, have significantly advanced the precision of pointer analysis. We present CG-FSPTA, a flow-sensitive pointer analysis to overcome the inefficiency of control-flow-graph-based analysis.
- Score: 4.513381877696149
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Flow-sensitive pointer analysis constitutes an essential component of precise program analysis for accurately modeling pointer behaviors by incorporating control flows. Flow-sensitive pointer analysis is extensively used in alias analysis, taint analysis, program understanding, compiler optimization, etc. Existing flow-sensitive pointer analysis approaches, which are conducted based on control flow graphs, have significantly advanced the precision of pointer analysis via sophisticated techniques to leverage control flow information. However, they inevitably suffer from computational inefficiencies when resolving points-to information due to the inherent complex structures of control flow graphs. We present CG-FSPTA, a Flow-Sensitive Constraint Graph (FSConsG) based flow-sensitive pointer analysis to overcome the inefficiency of control-flow-graph-based analysis. CG-FSPTA uses a flow-sensitive variant to leverage the structural advantages of set-constraint graphs (which are commonly used in flow-insensitive pointer analysis) while keeping the flow sensitivity of variable definitions and uses, allowing the incorporation of sophisticated graph optimization and dynamic solving techniques. In this way, CG-FSPTA achieves significant efficiency improvements while keeping the precision of flow-sensitive analysis. Experimental evaluations on benchmark programs demonstrate that CG-FSPTA significantly reduces both memory usage and execution time while maintaining precision. In particular, by solving in the FSConsG, CG-FSPTA achieves an average memory reduction of 33.05% and accelerates flow-sensitive pointer analysis by 7.27x compared to the state-of-the-art method. These experimental results underscore the efficacy of CG-FSPTA as a scalable solution to analyze large-scale software systems, establishing a robust foundation for future advancements in efficient program analysis frameworks.
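To make the set-constraint formalism concrete, the following is a minimal sketch of a flow-insensitive Andersen-style (inclusion-based) points-to solver in Python. All names are illustrative and not taken from the paper; CG-FSPTA's FSConsG additionally preserves flow sensitivity of variable definitions and uses, which this sketch omits.

```python
# Minimal Andersen-style points-to analysis: solve inclusion
# constraints to a fixed point, flow-insensitively.
from collections import defaultdict

def andersen(addr_of, copies):
    """addr_of: list of (p, o) pairs for statements p = &o.
    copies:  list of (p, q) pairs for statements p = q,
             meaning pts(p) must include pts(q).
    Returns the points-to set of every variable."""
    pts = defaultdict(set)
    for p, o in addr_of:          # base constraints: p = &o
        pts[p].add(o)
    changed = True
    while changed:                # propagate along copy edges
        changed = False
        for p, q in copies:       # pts(p) ⊇ pts(q)
            before = len(pts[p])
            pts[p] |= pts[q]
            changed |= len(pts[p]) != before
    return pts

# Example program: a = &x; b = &y; a = b
result = andersen(addr_of=[("a", "x"), ("b", "y")], copies=[("a", "b")])
```

Because the analysis is flow-insensitive, `a` conservatively may point to both `x` and `y` regardless of statement order; a flow-sensitive analysis such as CG-FSPTA would distinguish the points-to facts holding at each program point.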
Related papers
- FSX: Message Flow Sensitivity Enhanced Structural Explainer for Graph Neural Networks [1.6916040234975795]
We propose a novel framework that combines internal message flows of the model with a cooperative game approach applied to external graph data. FSX achieves superior explanation fidelity with significantly reduced runtime. It provides unprecedented insights into the structural logic underlying model predictions.
arXiv Detail & Related papers (2026-01-21T07:39:42Z) - Efficient Differentiable Causal Discovery via Reliable Super-Structure Learning [51.20606796019663]
We propose ALVGL, a novel and general enhancement to the differentiable causal discovery pipeline. ALVGL employs a sparse and low-rank decomposition to learn the precision matrix of the data. We show that ALVGL not only achieves state-of-the-art accuracy but also significantly improves optimization efficiency.
arXiv Detail & Related papers (2026-01-09T02:18:59Z) - Efficiency vs. Fidelity: A Comparative Analysis of Diffusion Probabilistic Models and Flow Matching on Low-Resource Hardware [0.0]
Denoising Diffusion Probabilistic Models (DDPMs) have established a new state-of-the-art in generative image synthesis. This study presents a comparative analysis of DDPMs against the emerging Flow Matching paradigm.
arXiv Detail & Related papers (2025-11-24T18:19:42Z) - Beyond the Ideal: Analyzing the Inexact Muon Update [54.70108543057578]
We present the first analysis of the inexact update at Muon's core. We reveal a fundamental coupling between this inexactness and the optimal step size and momentum.
arXiv Detail & Related papers (2025-10-22T18:01:07Z) - Boosting Pointer Analysis With Large Language Model-Enhanced Allocation Function Detection [17.94389997355635]
Existing approaches largely overlook custom allocators, leading to coarse aliasing and reduced analysis precision. We present AFD, a novel technique that enhances pointer analysis by automatically identifying and modeling custom allocation functions. We evaluate AFD on 15 real-world C projects, identifying over 600 custom AFs.
arXiv Detail & Related papers (2025-09-26T16:08:58Z) - Research on the Application of Spark Streaming Real-Time Data Analysis System and large language model Intelligent Agents [1.4582633500696451]
This study explores the integration of Agent AI with LangGraph to enhance real-time data analysis systems in big data environments. The proposed framework overcomes limitations of static, inefficient stateful computations, and lack of human intervention. The system architecture incorporates Apache Spark Streaming, Kafka, and LangGraph to create a high-performance sentiment analysis system.
arXiv Detail & Related papers (2024-12-10T05:51:11Z) - Understanding Optimization in Deep Learning with Central Flows [53.66160508990508]
We show that an optimizer's implicit behavior can be explicitly captured by a "central flow": a differential equation.
We show that these flows can empirically predict long-term optimization trajectories of generic neural networks.
arXiv Detail & Related papers (2024-10-31T17:58:13Z) - DiffGrad for Physics-Informed Neural Networks [0.0]
Burgers' equation, a fundamental equation in fluid dynamics that is extensively used in PINNs, provides flexible results with the Adamprop.
This paper introduces a novel strategy for solving Burgers' equation by incorporating DiffGrad with PINNs.
arXiv Detail & Related papers (2024-09-05T04:39:35Z) - LLMDFA: Analyzing Dataflow in Code with Large Language Models [8.92611389987991]
This paper presents LLMDFA, a compilation-free and customizable dataflow analysis framework.
We decompose the problem into several subtasks and introduce a series of novel strategies.
On average, LLMDFA achieves 87.10% precision and 80.77% recall, surpassing existing techniques with F1 score improvements of up to 0.35.
arXiv Detail & Related papers (2024-02-16T15:21:35Z) - OptFlow: Fast Optimization-based Scene Flow Estimation without Supervision [6.173968909465726]
We present OptFlow, a fast optimization-based scene flow estimation method.
It achieves state-of-the-art performance for scene flow estimation on popular autonomous driving benchmarks.
arXiv Detail & Related papers (2024-01-04T21:47:56Z) - A Graph Encoder-Decoder Network for Unsupervised Anomaly Detection [7.070726553564701]
We propose an unsupervised graph encoder-decoder model to detect abnormal nodes from graphs.
In the encoding stage, we design a novel pooling mechanism, named LCPool, to find a cluster assignment matrix.
In the decoding stage, we propose an unpooling operation, called LCUnpool, to reconstruct both the structure and nodal features of the original graph.
arXiv Detail & Related papers (2023-08-15T13:49:12Z) - Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
arXiv Detail & Related papers (2022-04-18T17:53:44Z) - Revisiting PGD Attacks for Stability Analysis of Large-Scale Nonlinear Systems and Perception-Based Control [2.2725929250900947]
We tailor the projected gradient descent (PGD) method developed in the adversarial learning community as a general-purpose ROA analysis tool.
We show that the ROA analysis can be approximated as a constrained problem whose goal is to find the worst-case initial condition.
We present two PGD-based iterative methods which can be used to solve the resultant constrained problem.
arXiv Detail & Related papers (2022-01-03T18:46:58Z) - GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose a GMFlow framework for learning optical flow estimation.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation.
Our new framework outperforms 32-iteration RAFT's performance on the challenging Sintel benchmark.
arXiv Detail & Related papers (2021-11-26T18:59:56Z) - Learning Dynamic Compact Memory Embedding for Deformable Visual Object Tracking [82.34356879078955]
We propose a compact memory embedding to enhance the discrimination of the segmentation-based deformable visual tracking method.
Our method outperforms the excellent segmentation-based trackers, i.e., D3S and SiamMask on DAVIS 2017 benchmark.
arXiv Detail & Related papers (2021-11-23T03:07:12Z) - Self Normalizing Flows [65.73510214694987]
We propose a flexible framework for training normalizing flows by replacing expensive terms in the gradient by learned approximate inverses at each layer.
This reduces the computational complexity of each layer's exact update from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$.
We show experimentally that such models are remarkably stable and optimize to similar data likelihood values as their exact gradient counterparts.
arXiv Detail & Related papers (2020-11-14T09:51:51Z) - What Matters in Unsupervised Optical Flow [51.45112526506455]
We compare and analyze a set of key components in unsupervised optical flow.
We construct a number of novel improvements to unsupervised flow models.
We present a new unsupervised flow technique that significantly outperforms the previous state-of-the-art.
arXiv Detail & Related papers (2020-06-08T19:36:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.