Leveraging Causal Inference for Explainable Automatic Program Repair
- URL: http://arxiv.org/abs/2205.13342v1
- Date: Thu, 26 May 2022 13:25:33 GMT
- Title: Leveraging Causal Inference for Explainable Automatic Program Repair
- Authors: Jianzong Wang, Shijing Si, Zhitao Zhu, Xiaoyang Qu, Zhenhou Hong, Jing Xiao
- Abstract summary: This paper presents an interpretable approach for program repair based on sequence-to-sequence models with causal inference.
Our method is called CPR, short for causal program repair.
Experiments on four programming languages show that CPR can generate causal graphs for reasonable interpretations.
- Score: 24.146216081282798
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models have made significant progress in automatic program
repair. However, the black-box nature of these methods has restricted their
practical applications. To address this challenge, this paper presents an
interpretable approach for program repair based on sequence-to-sequence models
with causal inference; we call our method CPR, short for causal program
repair. CPR generates explanations during decision making, which consist of
groups of causally related input-output tokens. First, our method infers these
relations by querying the model with inputs disturbed by data augmentation.
Second, it builds a graph over tokens from the model's responses and solves a
partitioning problem to select the most relevant components. Experiments on
four programming languages (Java, C, Python, and JavaScript) show that CPR can
generate causal graphs that yield reasonable interpretations and boost
bug-fixing performance in automatic program repair.
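The perturb-query-partition loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the repair model is a toy stand-in, masking is used as the data-augmentation disturbance, and the "partitioning" is reduced to picking the strongest input-output links.

```python
import random

def toy_repair_model(tokens):
    # Toy stand-in for a seq2seq repair model: it "fixes" one known
    # buggy token. CPR's real model is a trained neural network.
    return ["!=" if t == "==" else t for t in tokens]

def causal_edges(tokens, n_perturbations=20, seed=0):
    """Estimate input-output token relations by disturbing inputs and
    observing which output positions change (a crude causal probe)."""
    rng = random.Random(seed)
    base_out = toy_repair_model(tokens)
    scores = {}  # (input_idx, output_idx) -> change count
    for _ in range(n_perturbations):
        i = rng.randrange(len(tokens))
        perturbed = list(tokens)
        perturbed[i] = "<mask>"  # augmentation-style disturbance
        out = toy_repair_model(perturbed)
        for j, (a, b) in enumerate(zip(base_out, out)):
            if a != b:
                scores[(i, j)] = scores.get((i, j), 0) + 1
    return scores

def top_component(scores, k=3):
    """Stand-in for graph partitioning: keep the k strongest links
    as the selected 'explanation' component."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

buggy = ["if", "x", "==", "0", ":", "return"]
edges = causal_edges(buggy)
print(top_component(edges))
```

In the paper's formulation, the edge scores form a graph over tokens and a partitioning problem selects the most relevant component; the top-k selection above is only the simplest possible proxy for that step.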
Related papers
- Multi-Task Program Error Repair and Explanatory Diagnosis [28.711745671275477]
We present a novel machine-learning approach for Multi-task Program Error Repair and Explanatory Diagnosis (mPRED).
A pre-trained language model is used to encode the source code, and a downstream model is specifically designed to identify and repair errors.
To aid in visualizing and analyzing program structure, we use a graph neural network.
arXiv Detail & Related papers (2024-10-09T05:09:24Z) - Flexible Control Flow Graph Alignment for Delivering Data-Driven Feedback to Novice Programming Learners [0.847136673632881]
We present several modifications to CLARA, an open-source, data-driven automated repair approach.
We extend CLARA's abstract syntax tree processor to handle common introductory programming constructs.
We modify an incorrect program's control flow graph to match those of correct programs so that CLARA's original repair process can be applied.
arXiv Detail & Related papers (2024-01-02T19:56:50Z) - Fact-Checking Complex Claims with Program-Guided Reasoning [99.7212240712869]
Program-Guided Fact-Checking (ProgramFC) is a novel fact-checking model that decomposes complex claims into simpler sub-tasks.
We first leverage the in-context learning ability of large language models to generate reasoning programs.
We execute the program by delegating each sub-task to the corresponding sub-task handler.
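The execute-by-delegation step above can be sketched as a simple dispatcher. The handler names, the lookup-table "handlers", and the program format here are illustrative assumptions, not ProgramFC's actual API, in which sub-task handlers are themselves models:

```python
# Hypothetical sub-task handlers; real ones would call retrieval or QA models.
def verify(claim):
    facts = {"Paris is in France": True}
    return facts.get(claim, False)

def question(q):
    answers = {"Where is the Eiffel Tower?": "Paris"}
    return answers.get(q, "unknown")

HANDLERS = {"Verify": verify, "Question": question}

def execute(program):
    """Run a reasoning program: each step is (handler_name, argument);
    the final step's result is the fact-check verdict."""
    result = None
    for name, arg in program:
        result = HANDLERS[name](arg)
    return result

verdict = execute([
    ("Question", "Where is the Eiffel Tower?"),
    ("Verify", "Paris is in France"),
])
print(verdict)
```

The point of the decomposition is that each sub-task stays simple enough for a specialized handler, while the program structure makes the overall reasoning inspectable.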
arXiv Detail & Related papers (2023-05-22T06:11:15Z) - System Predictor: Grounding Size Estimator for Logic Programs under Answer Set Semantics [0.5801044612920815]
We present the system Predictor for estimating the grounding size of programs.
We evaluate the impact of Predictor when used as a guide for rewritings produced by the answer set programming rewriting tools Projector and Lpopt.
arXiv Detail & Related papers (2023-03-29T20:49:40Z) - NAPG: Non-Autoregressive Program Generation for Hybrid Tabular-Textual Question Answering [52.10214317661547]
Current numerical reasoning methods autoregressively decode program sequences.
The accuracy of program generation drops sharply as the decoding steps unfold due to error propagation.
In this paper, we propose a non-autoregressive program generation framework.
arXiv Detail & Related papers (2022-11-07T11:25:21Z) - Learning from Self-Sampled Correct and Partially-Correct Programs [96.66452896657991]
We propose to let the model perform sampling during training and learn from both self-sampled fully-correct programs and partially-correct programs.
We show that our use of self-sampled correct and partially-correct programs can benefit learning and help guide the sampling process.
Our proposed method improves the pass@k performance by 3.1% to 12.3% compared to learning from a single reference program with MLE.
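The pass@k figures above are typically computed with the standard unbiased estimator (this is the commonly used formula for the metric, not code from the paper): given n generated samples of which c pass the tests, pass@k = 1 - C(n-c, k) / C(n, k).

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: the probability that at least one of
    k samples drawn (without replacement) from n generated programs,
    c of which are correct, passes the tests."""
    if n - c < k:
        return 1.0  # fewer incorrect samples than k: a correct one is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=10, c=3, k=1))  # 0.3 when 3 of 10 samples are correct
```

Averaging this estimator over problems gives the reported pass@k; generating n > k samples per problem reduces the estimator's variance.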
arXiv Detail & Related papers (2022-05-28T03:31:07Z) - Enforcing Consistency in Weakly Supervised Semantic Parsing [68.2211621631765]
We explore the use of consistency between the output programs for related inputs to reduce the impact of spurious programs.
We find that a more consistent formalism leads to improved model performance even without consistency-based training.
arXiv Detail & Related papers (2021-07-13T03:48:04Z) - Graph-based, Self-Supervised Program Repair from Diagnostic Feedback [108.48853808418725]
We introduce a program-feedback graph, which connects symbols relevant to program repair in source code and diagnostic feedback.
We then apply a graph neural network on top to model the reasoning process.
We present a self-supervised learning paradigm for program repair that leverages unlabeled programs available online.
arXiv Detail & Related papers (2020-05-20T07:24:28Z) - Can We Learn Heuristics For Graphical Model Inference Using Reinforcement Learning? [114.24881214319048]
We show that we can learn programs, i.e., policies, for solving inference in higher order Conditional Random Fields (CRFs) using reinforcement learning.
Our method solves inference tasks efficiently without imposing any constraints on the form of the potentials.
arXiv Detail & Related papers (2020-04-27T19:24:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.