Refining Decompiled C Code with Large Language Models
- URL: http://arxiv.org/abs/2310.06530v2
- Date: Tue, 28 Nov 2023 19:09:54 GMT
- Title: Refining Decompiled C Code with Large Language Models
- Authors: Wai Kin Wong, Huaijin Wang, Zongjie Li, Zhibo Liu, Shuai Wang, Qiyi
Tang, Sen Nie, Shi Wu
- Abstract summary: A C decompiler converts an executable into source code.
The recovered C source code, once re-compiled, is expected to produce an executable with the same functionality as the original executable.
- Score: 15.76430362775126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A C decompiler converts an executable into source code. The recovered C
source code, once re-compiled, is expected to produce an executable with the
same functionality as the original executable. With over twenty years of
development, C decompilers have been widely used in production to support
reverse engineering applications. Despite the prosperous development of C
decompilers, it is widely acknowledged that decompiler outputs are mainly used
for human consumption, and are not suitable for automatic recompilation. Often,
a substantial amount of manual effort is required to fix the decompiler outputs
before they can be recompiled and executed properly.
This paper is motivated by the recent success of large language models (LLMs)
in comprehending dense corpora of natural language. To alleviate the tedious,
costly, and often error-prone manual effort of fixing decompiler outputs, we
investigate the feasibility of using LLMs to augment decompiler outputs, thus
delivering recompilable decompilation. Note that, unlike previous
efforts that focus on augmenting decompiler outputs with higher readability
(e.g., recovering type/variable names), we focus on augmenting decompiler
outputs with recompilability, i.e., generating code that can be recompiled
into an executable with the same functionality as the original executable.
We conduct a pilot study to characterize the obstacles in recompiling the
outputs of the de facto commercial C decompiler -- IDA-Pro. We then propose a
two-step, hybrid approach to augmenting decompiler outputs with LLMs. We
evaluate our approach on a set of popular C test cases and show that it
achieves a recompilation success rate of over 75% with moderate effort,
whereas none of IDA-Pro's original outputs can be
recompiled. We conclude with a discussion on the limitations of our approach
and promising future research directions.
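To make the recompilability gap concrete, the sketch below is a hedged, hypothetical illustration (not an example taken from the paper): Hex-Rays-style pseudocode typically references IDA-specific types and macros such as __int64, _BYTE, __fastcall, and LOBYTE that have no definition in standard C, so the raw output fails to recompile until those constructs are replaced with portable equivalents.

```c
/* Hypothetical Hex-Rays-style pseudocode that will NOT recompile as-is,
 * because __int64, _BYTE, __fastcall, and LOBYTE are IDA-specific:
 *
 *   __int64 __fastcall sum_bytes(__int64 a1, int a2)
 *   {
 *       int v2 = 0;
 *       for (int i = 0; i < a2; ++i)
 *           v2 += LOBYTE(*(_BYTE *)(a1 + i));
 *       return (unsigned int)v2;
 *   }
 *
 * A recompilable rewrite substitutes standard C types for the decompiler's
 * type aliases and removes its helper macros while preserving behavior.
 */
#include <stdint.h>

int64_t sum_bytes(const uint8_t *a1, int a2)
{
    int v2 = 0;
    for (int i = 0; i < a2; ++i)
        v2 += a1[i];              /* LOBYTE(...) is simply the low byte */
    return (uint32_t)v2;
}
```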
Related papers
- Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement [43.2637367483626]
Decompilation transforms compiled code back into a high-level programming language when source code is unavailable.
Previous work has primarily focused on enhancing decompilation performance by increasing the scale of model parameters or training data for pre-training.
By integrating the two proposed techniques, Self-Constructed Context Decompilation and Fine-grained Alignment Enhancement, we achieved a Re-Executability improvement of approximately 3.90% on the Decompile-Eval benchmark, establishing a new state-of-the-art performance of 52.41%.
arXiv Detail & Related papers (2024-06-25T02:37:53Z) - LLM4Decompile: Decompiling Binary Code with Large Language Models [10.346311290153398]
Decompilation aims to convert binary code to high-level source code, but traditional tools like Ghidra often produce results difficult to read and execute.
We propose LLM4Decompile, the first and largest open-source LLM series (1.3B to 33B) trained to decompile binary code.
The resulting models significantly outperform GPT-4o and Ghidra on the HumanEval and ExeBench benchmarks by over 100% in terms of re-executability rate.
arXiv Detail & Related papers (2024-03-08T13:10:59Z) - StepCoder: Improve Code Generation with Reinforcement Learning from
Compiler Feedback [58.20547418182074]
We introduce StepCoder, a novel framework for code generation, consisting of two main components.
CCCS addresses the exploration challenge by breaking the long sequences code generation task into a Curriculum of Code Completion Subtasks.
FGO optimizes the model only by masking the unexecuted code segments, providing Fine-Grained Optimization.
Our method improves the ability to explore the output space and outperforms state-of-the-art approaches in corresponding benchmarks.
arXiv Detail & Related papers (2024-02-02T13:14:31Z) - ReGAL: Refactoring Programs to Discover Generalizable Abstractions [59.05769810380928]
Generalizable Abstraction Learning (ReGAL) is a method for learning a library of reusable functions via code refactorization.
We find that the shared function libraries discovered by ReGAL make programs easier to predict across diverse domains.
For CodeLlama-13B, ReGAL results in absolute accuracy increases of 11.5% on LOGO, 26.1% on date understanding, and 8.1% on TextCraft, outperforming GPT-3.5 in two of three domains.
arXiv Detail & Related papers (2024-01-29T18:45:30Z) - Revisiting Deep Learning for Variable Type Recovery [3.075963833361584]
DIRTY is a Transformer-based Encoder-Decoder architecture capable of augmenting decompiled code with variable names and types.
We extend the original DIRTY results by re-training the DIRTY model on a dataset produced by the open-source Ghidra decompiler.
arXiv Detail & Related papers (2023-04-07T22:28:28Z) - Beyond the C: Retargetable Decompilation using Neural Machine
Translation [5.734661402742406]
We develop a prototype decompiler that is easily retargetable to new languages.
We examine the impact of parameters such as tokenization and training data selection on the quality of decompilation.
We will release our training data, trained decompilation models, and code to help encourage future research into language-agnostic decompilation.
arXiv Detail & Related papers (2022-12-17T20:45:59Z) - ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach in the code completion task in Python and Java programming languages, achieving a state-of-the-art performance on CodeXGLUE benchmark.
arXiv Detail & Related papers (2022-03-15T08:25:08Z) - Improving type information inferred by decompilers with supervised
machine learning [0.0]
In software reverse engineering, decompilation is the process of recovering source code from binary files.
We build different classification models capable of inferring the high-level type returned by functions.
Our system is able to predict function return types with a 79.1% F1-measure, whereas the best decompiler obtains a 30% F1-measure.
arXiv Detail & Related papers (2021-01-19T11:45:46Z) - Extending C++ for Heterogeneous Quantum-Classical Computing [56.782064931823015]
qcor is a language extension to C++ and compiler implementation that enables heterogeneous quantum-classical programming, compilation, and execution in a single-source context.
Our work provides a first-of-its-kind C++ compiler enabling high-level quantum kernel (function) expression in a quantum-language-agnostic manner.
arXiv Detail & Related papers (2020-10-08T12:49:07Z) - PolyDL: Polyhedral Optimizations for Creation of High Performance DL
primitives [55.79741270235602]
We present compiler algorithms to automatically generate high performance implementations of Deep Learning primitives.
We develop novel data reuse analysis algorithms using the polyhedral model.
We also show that such a hybrid compiler plus a minimal library-use approach results in state-of-the-art performance.
arXiv Detail & Related papers (2020-06-02T06:44:09Z)