SYNFIX: Automatically Fixing Syntax Errors using Compiler Diagnostics
- URL: http://arxiv.org/abs/2104.14671v1
- Date: Thu, 29 Apr 2021 21:57:44 GMT
- Title: SYNFIX: Automatically Fixing Syntax Errors using Compiler Diagnostics
- Authors: Toufique Ahmed, Noah Rose Ledesma, Premkumar Devanbu
- Abstract summary: Students could be helped, and instructors' time saved, by automated repair suggestions when dealing with syntax errors.
We introduce SYNFIX, a machine-learning based tool that substantially improves on the state-of-the-art.
We have built SYNFIX into a free, open-source version of Visual Studio Code; we make all our source code and models freely available.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Beginning programmers struggle with the complex grammar of modern programming
languages like Java, and make a lot of syntax errors. The diagnostic syntax error
messages from compilers and IDEs are sometimes useful, but often the messages
are cryptic and puzzling. Students could be helped, and instructors' time
saved, by automated repair suggestions when dealing with syntax errors. Large
samples of student errors and fixes are now available, offering the possibility
of data-driven machine-learning approaches to help students fix syntax errors.
Current machine-learning approaches do a reasonable job fixing syntax errors in
shorter programs, but don't work as well even for moderately longer programs.
We introduce SYNFIX, a machine-learning based tool that substantially improves
on the state-of-the-art, by learning to use compiler diagnostics, employing a
very large neural model that leverages unsupervised pre-training, and relying
on multi-label classification rather than autoregressive synthesis to generate
the (repaired) output. We describe SYNFIX's architecture in detail, and provide
a detailed evaluation. We have built SYNFIX into a free, open-source version of
Visual Studio Code; we make all our source code and models freely available.
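To make the abstract's central design choice concrete, here is a minimal, hypothetical sketch of a multi-label repair head in PyTorch. None of the names or sizes below come from the paper; they only illustrate why scoring a fixed inventory of edit labels in one forward pass can be more robust than synthesizing the repaired line token by token with an autoregressive decoder.

```python
# Hypothetical sketch only -- SYNFIX's real architecture is described in the paper.
# HIDDEN, NUM_EDIT_LABELS, and MultiLabelRepairHead are illustrative names.
import torch
import torch.nn as nn

HIDDEN = 768          # assumed width of the pre-trained encoder
NUM_EDIT_LABELS = 64  # assumed inventory of repair actions, e.g. "insert ';'"

class MultiLabelRepairHead(nn.Module):
    """Scores every candidate repair action at once from an encoded program."""
    def __init__(self):
        super().__init__()
        self.classifier = nn.Linear(HIDDEN, NUM_EDIT_LABELS)

    def forward(self, encoded: torch.Tensor) -> torch.Tensor:
        # One independent sigmoid per label: several edits can fire together,
        # and there is no left-to-right error accumulation as in decoding.
        return torch.sigmoid(self.classifier(encoded))

encoded = torch.randn(1, HIDDEN)            # stands in for the encoder's output
probs = MultiLabelRepairHead()(encoded)     # shape: (1, NUM_EDIT_LABELS)
predicted_edits = (probs > 0.5).nonzero()   # all repair actions in one pass
```

On this reading, classification trades the open-endedness of synthesis for a closed edit vocabulary, which is plausibly easier to get right on longer inputs.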
Related papers
- Multi-Task Program Error Repair and Explanatory Diagnosis
We present a novel machine-learning approach for Multi-task Program Error Repair and Explanatory Diagnosis (mPRED).
A pre-trained language model is used to encode the source code, and a downstream model is specifically designed to identify and repair errors.
To aid in visualizing and analyzing the program structure, we use a graph neural network for program structure visualization.
arXiv Detail & Related papers (2024-10-09T05:09:24Z)
- LILO: Learning Interpretable Libraries by Compressing and Documenting Code
We introduce LILO, a neurosymbolic framework that iteratively synthesizes, compresses, and documents code.
LILO combines LLM-guided program synthesis with recent algorithmic advances in automatic refactoring from Stitch.
We find that AutoDoc boosts performance by helping LILO's synthesizer to interpret and deploy learned abstractions.
arXiv Detail & Related papers (2023-10-30T17:55:02Z)
- How Helpful do Novice Programmers Find the Feedback of an Automated Repair Tool?
We describe our experience of using CLARA, an automated repair tool, to provide feedback to novices.
First, we extended CLARA to support a larger subset of the Python language, before integrating it with the Jupyter Notebooks used for our programming exercises.
We found that novices often struggled to understand the proposed repairs, echoing the well-known challenge of understanding compiler/interpreter messages.
arXiv Detail & Related papers (2023-10-02T07:45:56Z)
- Guess & Sketch: Language Model Guided Transpilation
Learned transpilation offers an alternative to manual re-writing and engineering efforts.
Probabilistic neural language models (LMs) produce plausible outputs for every input, but do so at the cost of guaranteed correctness.
Guess & Sketch extracts alignment and confidence information from features of the LM then passes it to a symbolic solver to resolve semantic equivalence.
arXiv Detail & Related papers (2023-09-25T15:42:18Z)
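As a rough illustration of the confidence-extraction step (the function, threshold, and data below are our own inventions; the paper's actual LM features are richer than raw token probabilities):

```python
# Illustrative only: tokens the LM is unsure about become "holes" that a
# downstream symbolic solver would be asked to fill. All names are made up.
from typing import List, Tuple

def flag_uncertain_tokens(
    tokens: List[str],
    probs: List[float],        # per-token probabilities from the LM's softmax
    threshold: float = 0.9,
) -> List[Tuple[str, bool]]:
    """Mark each transpiled token as trusted (True) or as a hole for the solver."""
    return [(tok, p >= threshold) for tok, p in zip(tokens, probs)]

guess = flag_uncertain_tokens(
    tokens=["mov", "r0", ",", "r1"],
    probs=[0.99, 0.97, 0.99, 0.62],  # the LM is unsure about the last operand
)
holes = [tok for tok, trusted in guess if not trusted]  # handed to the solver
```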
- dcc --help: Generating Context-Aware Compiler Error Explanations with Large Language Models
dcc --help was deployed to our CS1 and CS2 courses, with 2,565 students using the tool over 64,000 times in ten weeks.
We found that the LLM-generated explanations were conceptually accurate in 90% of compile-time and 75% of run-time cases, but often disregarded the instruction not to provide solutions in code.
arXiv Detail & Related papers (2023-08-23T02:36:19Z)
- A Static Evaluation of Code Completion by Large Language Models
Execution-based benchmarks have been proposed to evaluate functional correctness of model-generated code on simple programming problems.
Static analysis tools such as linters, which can detect errors without running the program, have not been well explored for evaluating code generation models.
We propose a static evaluation framework to quantify static errors in Python code completions, by leveraging Abstract Syntax Trees.
arXiv Detail & Related papers (2023-06-05T19:23:34Z)
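A minimal sketch of that idea using Python's standard-library ast module (the two-way error taxonomy here is a simplification we chose, not the paper's framework):

```python
# ast.parse is Python's real standard-library parser; everything else here
# is a simplified stand-in for the paper's static evaluation pipeline.
import ast
from collections import Counter

def static_errors(completions: list) -> Counter:
    """Count completions that fail to parse, keyed by exception type."""
    errors = Counter()
    for code in completions:
        try:
            ast.parse(code)
        except SyntaxError as exc:
            errors[type(exc).__name__] += 1  # SyntaxError, IndentationError, ...
    return errors

print(static_errors(["def f(:\n    pass", "x = 1", "  y = 2"]))
# Counter({'SyntaxError': 1, 'IndentationError': 1})
```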
- Using Large Language Models to Enhance Programming Error Messages
Large language models can be used to create useful enhancements to programming error messages.
We discuss the benefits and downsides of large language models and highlight future streams of research for enhancing programming error messages.
arXiv Detail & Related papers (2022-10-20T23:17:26Z)
- Repairing Bugs in Python Assignments Using Large Language Models
We propose to use a large language model trained on code to build an APR system for programming assignments.
Our system can fix both syntactic and semantic mistakes by combining multi-modal prompts, iterative querying, test-case-based selection of few-shots, and program chunking.
We evaluate MMAPR on 286 real student programs and compare it to a baseline built by combining a state-of-the-art Python syntax repair engine, BIFI, with a state-of-the-art Python semantic repair engine for student assignments, Refactory.
arXiv Detail & Related papers (2022-09-29T15:41:17Z)
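One of those ingredients, test-case-based selection of few-shots, might look roughly like the following sketch (the data model and ranking are assumptions, not taken from the paper):

```python
# Hypothetical sketch: rank past (broken, fixed) pairs by how many of the
# student's failing tests they share, then use the top-k as few-shot examples.
from dataclasses import dataclass

@dataclass
class RepairExample:
    broken: str
    fixed: str
    failing_tests: frozenset  # names of tests the broken version failed

def select_few_shots(target_failures, pool, k=3):
    """Pick the k examples whose failures overlap the target's the most."""
    return sorted(
        pool,
        key=lambda ex: len(ex.failing_tests & target_failures),
        reverse=True,
    )[:k]

pool = [
    RepairExample("x = 1/0", "x = 0", frozenset({"test_div"})),
    RepairExample("print x", "print(x)", frozenset({"test_print"})),
]
shots = select_few_shots(frozenset({"test_div"}), pool, k=1)  # -> first example
```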
- A Syntax-Guided Grammatical Error Correction Model with Dependency Tree Correction
Grammatical Error Correction (GEC) is a task of detecting and correcting grammatical errors in sentences.
We propose a syntax-guided GEC model (SG-GEC) which adopts the graph attention mechanism to utilize the syntactic knowledge of dependency trees.
We evaluate our model on public benchmarks of GEC task and it achieves competitive results.
arXiv Detail & Related papers (2021-11-05T07:07:48Z)
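The core mechanism, attention restricted to a word's dependency-tree neighbours, can be sketched in a few lines of NumPy (single head, toy shapes; SG-GEC itself is considerably richer):

```python
# Toy sketch of graph attention over a dependency tree: attention scores are
# masked so each word only attends to itself and its syntactic neighbours.
import numpy as np

def dependency_attention(x, adj):
    """x: (n_words, d) embeddings; adj: (n_words, n_words) 0/1 dependency arcs."""
    scores = x @ x.T / np.sqrt(x.shape[1])    # scaled dot-product scores
    scores = np.where(adj > 0, scores, -1e9)  # mask out non-neighbours
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x                        # neighbourhood-aware features

n, d = 4, 8
x = np.random.randn(n, d)
adj = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # chain-shaped toy "tree"
out = dependency_attention(x, adj)                   # shape: (4, 8)
```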
- Leveraging Language to Learn Program Abstractions and Search Heuristics
We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis.
When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization.
arXiv Detail & Related papers (2021-06-18T15:08:47Z)