The Effect of Pointer Analysis on Semantic Conflict Detection
- URL: http://arxiv.org/abs/2507.20081v1
- Date: Sat, 26 Jul 2025 23:37:16 GMT
- Title: The Effect of Pointer Analysis on Semantic Conflict Detection
- Authors: Matheus Barbosa, Paulo Borba, Rodrigo Bonifácio, Victor Lira, Galileu Santos,
- Abstract summary: Current merge tools don't detect semantic conflicts, which occur when changes from different developers are textually integrated but semantically interfere with each other. Although researchers have proposed static analyses for detecting semantic conflicts, these analyses suffer from significant false positive rates. To understand whether such false positives could be reduced by using pointer analysis in the implementation of semantic conflict static analyses, we conduct an empirical study. We implement the same analysis with and without pointer analysis, run them on two datasets, observe how often they differ, and compare their accuracy and computational performance.
- Score: 1.8156923266875906
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current merge tools don't detect semantic conflicts, which occur when changes from different developers are textually integrated but semantically interfere with each other. Although researchers have proposed static analyses for detecting semantic conflicts, these analyses suffer from significant false positive rates. To understand whether such false positives could be reduced by using pointer analysis in the implementation of semantic conflict static analyses, we conduct an empirical study. We implement the same analysis with and without pointer analysis, run them on two datasets, observe how often they differ, and compare their accuracy and computational performance. Although pointer analysis is known to improve precision in static analysis, we find that its effect on semantic conflict detection can be drastic: we observe a significant reduction in timeouts and false positives, but also a significant increase in false negatives, with prohibitive drops in recall and F1-score. These results suggest that, in the context of semantic conflict detection, we should explore hybrid analysis techniques, combining aspects of both implementations we compare in our study.
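To make the notion of a semantic conflict concrete, here is a hypothetical toy example (not taken from the paper's datasets): two edits that a textual merge integrates without any conflict marker, yet each change invalidates the behavior the other developer tested in isolation.

```python
# Hypothetical illustration of a semantic conflict (not from the paper).

# Base version: a plain sum.
def base(items):
    return sum(items)

# Developer L's version: subtract a flat fee after summing.
def left(items):
    return sum(items) - 5

# Developer R's version: double every item before summing.
def right(items):
    return sum(2 * x for x in items)

# A textual merge applies both edits cleanly -- no merge conflict --
# but the result matches neither developer's tested behavior.
def merged(items):
    return sum(2 * x for x in items) - 5

data = [1, 2, 3]
print(left(data), right(data), merged(data))  # 1 12 7
```

A static interference analysis of the kind the paper studies would flag that both edits affect the same computed value, even though line-based merging sees no overlap.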
Related papers
- Checkification: A Practical Approach for Testing Static Analysis Truths [0.0]
We propose a method for testing abstract interpretation-based static analyzers. The main advantage of our approach lies in its simplicity, which stems directly from framing it within the Ciao assertion-based validation framework. We have applied our approach to the CiaoPP static analyzer, resulting in the identification of many bugs with reasonable overhead.
arXiv Detail & Related papers (2025-01-21T12:38:04Z)
- Contrastive Factor Analysis [70.02770079785559]
This paper introduces a novel Contrastive Factor Analysis framework.
It aims to leverage factor analysis's advantageous properties within the realm of contrastive learning.
To further leverage the interpretability properties of non-negative factor analysis, it is extended to a non-negative version.
arXiv Detail & Related papers (2024-07-31T16:52:00Z)
- SLIFER: Investigating Performance and Robustness of Malware Detection Pipelines [12.940071285118451]
Academia focuses on combining static and dynamic analysis within a single model or an ensemble of models. In this paper, we investigate the properties of malware detectors built with multiple and different types of analysis. As far as we know, we are the first to investigate the properties of sequential malware detectors, shedding light on their behavior in real production environments.
arXiv Detail & Related papers (2024-05-23T12:06:10Z)
- Supporting Error Chains in Static Analysis for Precise Evaluation Results and Enhanced Usability [2.8557828838739527]
Static analyses tend to report where a vulnerability manifests rather than the fix location.
This can cause presumed false positives or imprecise results.
We designed an adaptation of an existing static analysis algorithm that can distinguish between a manifestation and a fix location.
arXiv Detail & Related papers (2024-03-12T16:46:29Z)
- Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models [0.0]
This work undertakes studies to evaluate Interpretability Methods for Time-Series Deep Learning.
My work will investigate perturbation-based sensitivity analysis methods on modern Transformer models to benchmark their performance.
arXiv Detail & Related papers (2024-01-29T19:51:50Z)
- Detecting Semantic Conflicts using Static Analysis [1.201626478128059]
We propose a technique that explores the use of static analysis to detect interference when merging contributions from two developers.
We evaluate our technique using a dataset of 99 experimental units extracted from merge scenarios.
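Evaluations like this one, and the accuracy comparison in the main paper above, score a detector's reports against labeled ground truth. A minimal sketch of such scoring, with made-up confusion counts (illustrative only, not taken from either paper):

```python
# Sketch of scoring a conflict detector over labeled merge scenarios.
# The counts below are illustrative, not from the paper.

def scores(tp, fp, fn):
    """Precision, recall, and F1 from confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# A stricter variant can cut false positives yet still sink F1
# by introducing many false negatives:
base_variant = scores(tp=20, fp=15, fn=5)
strict_variant = scores(tp=8, fp=3, fn=17)
```

This is the shape of trade-off the main paper reports for pointer analysis: fewer false positives, but a drop in recall that drags F1 down.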
arXiv Detail & Related papers (2023-10-06T14:13:16Z)
- Understanding and Mitigating Spurious Correlations in Text Classification with Neighborhood Analysis [69.07674653828565]
Machine learning models have a tendency to leverage spurious correlations that exist in the training set but may not hold true in general circumstances.
In this paper, we examine the implications of spurious correlations through a novel perspective called neighborhood analysis.
We propose a family of regularization methods, NFL (doN't Forget your Language) to mitigate spurious correlations in text classification.
arXiv Detail & Related papers (2023-05-23T03:55:50Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss [72.62029620566925]
Recent works in self-supervised learning have advanced the state-of-the-art by relying on the contrastive learning paradigm.
Our work analyzes contrastive learning without assuming conditional independence of positive pairs.
We propose a loss that performs spectral decomposition on the population augmentation graph and can be succinctly written as a contrastive learning objective.
arXiv Detail & Related papers (2021-06-08T07:41:02Z)
- Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection for self-supervised contrastive learning.
During contrastive learning, we discuss two strategies to explicitly remove the detected false negatives.
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks within a limited compute.
arXiv Detail & Related papers (2021-06-07T15:29:14Z)
- A Statistical Analysis of Summarization Evaluation Metrics using Resampling Methods [60.04142561088524]
We find that the confidence intervals are rather wide, demonstrating high uncertainty in how reliable automatic metrics truly are.
Although many metrics fail to show statistical improvements over ROUGE, two recent works, QAEval and BERTScore, do in some evaluation settings.
arXiv Detail & Related papers (2021-03-31T18:28:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.