Code Detection for Hardware Acceleration Using Large Language Models
- URL: http://arxiv.org/abs/2307.10348v1
- Date: Wed, 19 Jul 2023 17:21:58 GMT
- Title: Code Detection for Hardware Acceleration Using Large Language Models
- Authors: Pablo Antonio Martínez, Gregorio Bernabé, and José Manuel García
- Abstract summary: This work presents the first analysis of code detection using large language models (LLMs).
We propose both a preliminary, naive prompt and a novel prompting strategy for code detection.
Results reveal that conventional prompting achieves great precision but poor accuracy (68.8%, 22.3%, and 79.2% for GEMM, convolution, and FFT, respectively) due to a high number of false positives.
Our novel prompting strategy substantially reduces false positives, resulting in excellent overall accuracy (91.1%, 97.9%, and 99.7%, respectively).
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large language models (LLMs) have been massively applied to many tasks, often
surpassing state-of-the-art approaches. While their effectiveness in code
generation has been extensively studied (e.g., AlphaCode), their potential for
code detection remains unexplored.
This work presents the first analysis of code detection using LLMs. Our study
examines essential kernels, including matrix multiplication, convolution, and
fast Fourier transform, implemented in C/C++. We propose both a preliminary,
naive prompt and a novel prompting strategy for code detection.
Results reveal that conventional prompting achieves great precision but poor
accuracy (68.8%, 22.3%, and 79.2% for GEMM, convolution, and FFT, respectively)
due to a high number of false positives. Our novel prompting strategy
substantially reduces false positives, resulting in excellent overall accuracy
(91.1%, 97.9%, and 99.7%, respectively). These results pose a considerable
challenge to existing state-of-the-art code detection methods.
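The abstract does not reproduce the prompts themselves, but the contrast it draws can be illustrated with a minimal sketch. Below, `query_llm` is a hypothetical stand-in for any chat-style LLM client, the GEMM snippet is a generic triple loop, and the two-step structure of `structured_detect` is only one plausible reading of a prompting strategy that suppresses false positives, not the paper's actual method.

```python
# Minimal sketch of the two prompting styles described in the abstract.
# The exact prompt wording and the "novel" strategy are assumptions.

GEMM_SNIPPET = """
for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        for (int k = 0; k < N; k++)
            C[i][j] += A[i][k] * B[k][j];
"""

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    raise NotImplementedError("wire up an LLM client here")

def naive_detect(code: str, kernel: str) -> str:
    # One-shot yes/no question; per the abstract, this style produces
    # many false positives.
    return query_llm(
        f"Does the following C/C++ code implement {kernel}?\n"
        f"Answer yes or no.\n{code}"
    )

def structured_detect(code: str, kernel: str) -> str:
    # Assumed two-step variant: describe first, then decide. This is one
    # plausible way to suppress false positives, not the paper's method.
    description = query_llm(
        f"Explain step by step what this C/C++ code computes:\n{code}"
    )
    return query_llm(
        f"A code snippet was described as follows:\n{description}\n"
        f"Based on that description, is the snippet an implementation of "
        f"{kernel}? Answer yes or no."
    )

# Example call (requires a real query_llm):
# structured_detect(GEMM_SNIPPET, "general matrix multiplication (GEMM)")
```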
Related papers
- Large Language Models as Code Executors: An Exploratory Study
This paper pioneers the exploration of Large Language Models (LLMs) as code executors.
We are the first to examine this feasibility across various LLMs, including OpenAI's o1, GPT-4o, GPT-3.5, DeepSeek, and Qwen-Coder.
We introduce an Iterative Instruction Prompting (IIP) technique that processes code snippets line by line, enhancing the accuracy of weaker models by an average of 7.22%.
arXiv Detail & Related papers (2024-10-09T08:23:22Z)
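The summary above says only that IIP processes code snippets line by line; a minimal sketch of that idea follows, with the state-tracking prompt format and the `query_llm` helper being assumptions rather than the paper's actual technique.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    raise NotImplementedError

def iterative_execute(code: str) -> str:
    # Feed the program to the model one line at a time, carrying a textual
    # description of the interpreter state between steps.
    state = "no variables defined yet"
    for line in code.splitlines():
        if not line.strip():
            continue
        state = query_llm(
            "You are simulating an interpreter.\n"
            f"Current state: {state}\n"
            f"Execute this line and describe the updated state:\n{line}"
        )
    return state
```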
- StagedVulBERT: Multi-Granular Vulnerability Detection with a Novel Pre-trained Code Model
This study introduces StagedVulBERT, a novel vulnerability detection framework.
The CodeBERT-HLS component is designed to capture semantics at both the token and statement levels simultaneously.
In coarse-grained vulnerability detection, StagedVulBERT achieves an F1 score of 92.26%, marking a 6.58% improvement over the best-performing methods.
arXiv Detail & Related papers (2024-10-08T07:46:35Z)
- LLM Agents Improve Semantic Code Search
We introduce an approach that uses Retrieval-Augmented Generation (RAG) powered agents to inject information into user prompts.
By utilizing RAG, the agents enhance user queries with relevant details from GitHub repositories, making them more informative and contextually aligned.
Experimental results on the CodeSearchNet dataset demonstrate that RepoRift significantly outperforms existing methods.
arXiv Detail & Related papers (2024-08-05T00:43:56Z)
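RepoRift's internals are not described above, but the general RAG pattern it relies on, retrieving repository snippets and folding them into the user's query, can be sketched as follows; the retriever, prompt wording, and function names are all hypothetical.

```python
from typing import List

def retrieve_snippets(query: str, k: int = 3) -> List[str]:
    """Hypothetical: return the k snippets from an indexed GitHub
    repository most similar to the query (e.g., via an embedding store)."""
    raise NotImplementedError

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    raise NotImplementedError

def enrich_query(user_query: str) -> str:
    # Fold retrieved repository context into the user's search query so a
    # downstream code-search engine sees a more specific request.
    context = "\n---\n".join(retrieve_snippets(user_query))
    return query_llm(
        f"Repository context:\n{context}\n\n"
        f"Rewrite this code-search query to be more specific and aligned "
        f"with the repository above: {user_query}"
    )
```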
"graspness" is a quality based on geometry cues that distinguishes graspable areas in cluttered scenes.
We develop a neural network named cascaded graspness model to approximate the searching process.
Experiments on a large-scale benchmark, GraspNet-1Billion, show that our method outperforms previous arts by a large margin.
arXiv Detail & Related papers (2024-06-17T02:06:47Z)
- Bridging the Gap Between End-to-End and Two-Step Text Spotting
Bridging Text Spotting is a novel approach that resolves the error accumulation and suboptimal performance issues in two-step methods.
We demonstrate the effectiveness of the proposed method through extensive experiments.
arXiv Detail & Related papers (2024-04-06T13:14:04Z)
- FoC: Figure out the Cryptographic Functions in Stripped Binaries with LLMs
We propose a novel framework called FoC to Figure out the Cryptographic functions in stripped binaries.
FoC-BinLLM outperforms ChatGPT by 14.61% on the ROUGE-L score.
FoC-Sim outperforms the previous best methods with a 52% higher Recall@1.
arXiv Detail & Related papers (2024-03-27T09:45:33Z)
- Zero-Shot Detection of Machine-Generated Codes
This work proposes a training-free approach for detecting LLM-generated code.
We find that existing training-based or zero-shot text detectors are ineffective in detecting code.
Our method exhibits robustness against revision attacks and generalizes well to Java code.
arXiv Detail & Related papers (2023-10-08T10:08:21Z)
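The summary does not state how the training-free detector scores code, so the sketch below substitutes a generic log-probability heuristic of the kind used by zero-shot text detectors; `token_log_probs`, the threshold, and the decision rule are all assumptions.

```python
from typing import List

def token_log_probs(code: str) -> List[float]:
    """Hypothetical: per-token log-probabilities of `code` under a
    surrogate code LLM."""
    raise NotImplementedError("plug in a scoring model here")

def looks_machine_generated(code: str, threshold: float = -2.0) -> bool:
    # Machine-generated code tends to sit in high-probability regions of
    # the model that produced it, so an unusually high mean log-probability
    # is a common zero-shot signal.
    scores = token_log_probs(code)
    return sum(scores) / len(scores) > threshold
```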
- Asteria-Pro: Enhancing Deep-Learning Based Binary Code Similarity Detection by Incorporating Domain Knowledge
We propose a novel deep learning enhancement architecture by incorporating domain knowledge-based pre-filtration and re-ranking modules.
Asteria-Pro manages to detect 1,482 vulnerable functions with a high precision of 91.65%.
arXiv Detail & Related papers (2023-01-02T03:16:26Z)
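The Asteria-Pro summary names a pre-filtration module and a re-ranking module wrapped around a deep similarity model; a rough sketch of that three-stage shape follows, with every scoring function left as a hypothetical placeholder.

```python
from typing import List, Tuple

def prefilter(query_fn: str, candidates: List[str]) -> List[str]:
    """Hypothetical: cheaply discard candidates using domain features
    (e.g., call-graph context) before invoking the expensive model."""
    raise NotImplementedError

def deep_similarity(query_fn: str, candidate: str) -> float:
    """Hypothetical: deep-learning similarity score for a function pair."""
    raise NotImplementedError

def rerank(query_fn: str,
           scored: List[Tuple[str, float]]) -> List[Tuple[str, float]]:
    """Hypothetical: adjust scores with domain knowledge and re-sort."""
    raise NotImplementedError

def search(query_fn: str, corpus: List[str]) -> List[Tuple[str, float]]:
    # Stage 1: cheap pre-filtration; stage 2: deep scoring; stage 3: re-rank.
    survivors = prefilter(query_fn, corpus)
    scored = [(c, deep_similarity(query_fn, c)) for c in survivors]
    return rerank(query_fn, scored)
```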
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
arXiv Detail & Related papers (2022-04-07T08:49:27Z)
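Contrastive code search generally trains code and description embeddings so that matching pairs score higher than in-batch negatives; the InfoNCE-style loss below shows that generic mechanism only, not the paper's specific model or its soft data augmentation.

```python
import math
from typing import List

def dot(u: List[float], v: List[float]) -> float:
    return sum(a * b for a, b in zip(u, v))

def info_nce(code_embs: List[List[float]], text_embs: List[List[float]],
             temperature: float = 0.07) -> float:
    """Mean contrastive loss; code_embs[i] and text_embs[i] are a positive
    pair, and every other text in the batch serves as a negative."""
    loss = 0.0
    for i, c in enumerate(code_embs):
        logits = [dot(c, t) / temperature for t in text_embs]
        log_denom = math.log(sum(math.exp(x) for x in logits))
        loss += log_denom - logits[i]  # -log softmax of the positive pair
    return loss / len(code_embs)
```

With a single pair and no negatives the loss is zero; it grows as non-matching descriptions score closer to the code than the true one.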
- Efficient Few-Shot Object Detection via Knowledge Inheritance
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline with no computational increment.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
arXiv Detail & Related papers (2022-03-23T06:24:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including any content) and is not responsible for any consequences arising from its use.