Can Large Language Models Generate Geospatial Code?
- URL: http://arxiv.org/abs/2410.09738v2
- Date: Fri, 18 Oct 2024 02:49:24 GMT
- Title: Can Large Language Models Generate Geospatial Code?
- Authors: Shuyang Hou, Zhangxiao Shen, Jianyuan Liang, Anqi Zhao, Zhipeng Gui, Rui Li, Huayi Wu
- Abstract summary: This paper introduces GeoCode-Eval, a framework for assessing large language models' ability to generate geospatial code.
We developed a benchmark dataset, GeoCode-Bench, consisting of 5,000 multiple-choice, 1,500 fill-in-the-blank, 1,500 true/false questions, and 1,000 subjective tasks.
Using GeoCode-Bench, we evaluated three commercial closed-source LLMs, four open-source general-purpose LLMs, and 14 specialized code generation models.
- Score: 1.8449647922514607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing demand for spatiotemporal data processing and geospatial modeling, automating geospatial code generation has become essential for productivity. Large language models (LLMs) show promise in code generation but face challenges like domain-specific knowledge gaps and "coding hallucinations." This paper introduces GeoCode-Eval (GCE), a framework for assessing LLMs' ability to generate geospatial code across three dimensions: "Cognition and Memory," "Comprehension and Interpretation," and "Innovation and Creation," distributed across eight capability levels. We developed a benchmark dataset, GeoCode-Bench, consisting of 5,000 multiple-choice, 1,500 fill-in-the-blank, 1,500 true/false questions, and 1,000 subjective tasks covering code summarization, generation, completion, and correction. Using GeoCode-Bench, we evaluated three commercial closed-source LLMs, four open-source general-purpose LLMs, and 14 specialized code generation models. We also conducted experiments on few-shot and zero-shot learning, Chain of Thought reasoning, and multi-round majority voting to measure their impact on geospatial code generation. Additionally, we fine-tuned the Code LLaMA-7B model using Google Earth Engine-related JavaScript, creating GEECode-GPT, and evaluated it on subjective tasks. Results show that constructing pre-training and instruction datasets significantly improves code generation, offering insights for optimizing LLMs in specific domains.
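As a rough illustration of one evaluated strategy, multi-round majority voting, here is a minimal sketch; the generic `llm` callable and the crude text canonicalization are assumptions, not the paper's actual setup.

```python
from collections import Counter

def normalize(code: str) -> str:
    # Crude canonicalization so trivially different samples can match;
    # real comparisons might use AST equivalence or execution results.
    return "\n".join(line.strip() for line in code.splitlines() if line.strip())

def majority_vote(prompt: str, llm, rounds: int = 5) -> str:
    """llm(prompt) -> str is any sampling-enabled model call (an assumption)."""
    samples = [normalize(llm(prompt)) for _ in range(rounds)]
    winner, _count = Counter(samples).most_common(1)[0]
    return winner
```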
Related papers
- Chain-of-Programming (CoP): Empowering Large Language Models for Geospatial Code Generation [2.6026969939746705]
This paper proposes a Chain-of-Programming (CoP) framework that decomposes the code generation process into five steps.
The framework incorporates a shared information pool, knowledge base retrieval, and user feedback mechanisms.
It significantly improves the logical clarity, syntactical correctness, and executability of the generated code.
arXiv Detail & Related papers (2024-11-16T09:20:35Z)
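A minimal sketch of a CoP-style pipeline as described above; the five step names and the generic `llm`/`retrieve` callables are illustrative assumptions, not the paper's actual decomposition or API.

```python
# Assumed five-step decomposition; the paper's actual step names may differ.
STEPS = ["requirement analysis", "algorithm design", "code implementation",
         "code debugging", "documentation"]

def chain_of_programming(task: str, llm, retrieve):
    """llm(prompt) -> str and retrieve(step, pool) -> str are generic callables."""
    pool = {"task": task}                 # shared information pool across steps
    for step in STEPS:
        context = retrieve(step, pool)    # knowledge-base retrieval per step
        prompt = (f"Task: {task}\nCurrent step: {step}\n"
                  f"Retrieved context: {context}\nPool so far: {pool}")
        pool[step] = llm(prompt)          # each step writes its output back
        # User feedback could also be appended to the pool here (omitted).
    return pool["code implementation"]    # the runnable artifact of the chain
```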
- CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models [106.11371409170818]
Large language models (LLMs) can act as agents with the ability to self-refine and improve generated code autonomously.
We propose CodeTree, a framework for LLM agents to efficiently explore the search space in different stages of the code generation process.
Specifically, we adopt a unified tree structure to explicitly explore different coding strategies, generate corresponding coding solutions, and subsequently refine the solutions.
arXiv Detail & Related papers (2024-11-07T00:09:54Z)
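A rough sketch of agent-guided tree search over candidate solutions; the beam-style expansion and scoring below are simplifications, not CodeTree's exact procedure.

```python
def tree_search(task, propose, refine, score, width=3, depth=3):
    """propose(task) -> list of candidates; refine(sol, task) -> improved
    solution; score(sol, task) -> float. All three are generic callables."""
    frontier = propose(task)              # explore distinct strategies first
    assert frontier, "propose() must return at least one candidate"
    for _ in range(depth):
        frontier.sort(key=lambda s: score(s, task), reverse=True)
        frontier = frontier[:width]       # keep the most promising branches
        frontier += [refine(s, task) for s in frontier]  # expand by refinement
    return max(frontier, key=lambda s: score(s, task))
```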
- Automatic Generation of Benchmarks and Reliable LLM Judgment for Code Tasks [0.8274693573069442]
This work introduces a methodology to generate and evaluate LLM-as-a-Judge (LaaJ) implementations, using an automatically generated benchmark.
The benchmark is used both to develop and validate the LaaJs and to validate and test LLM-based code solutions using the LaaJs.
Our approach enables the creation of high-quality code task solutions.
arXiv Detail & Related papers (2024-10-28T14:34:36Z)
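A minimal LLM-as-a-Judge sketch, assuming a hypothetical 1-to-5 rubric and a generic `llm` callable rather than the paper's automatically generated judges.

```python
# Illustrative rubric; a real LaaJ would be generated and validated per task.
JUDGE_TEMPLATE = """You are a code reviewer. Score the candidate solution
from 1 (unusable) to 5 (correct and idiomatic) for the task below.
Task: {task}
Candidate:
{code}
Reply with a single integer."""

def judge(task: str, code: str, llm) -> int:
    reply = llm(JUDGE_TEMPLATE.format(task=task, code=code))
    digits = [int(tok) for tok in reply.split() if tok.isdigit()]
    return digits[0] if digits else 1  # default to lowest score on parse failure
```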
- Geo-FuB: A Method for Constructing an Operator-Function Knowledge Base for Geospatial Code Generation Tasks Using Large Language Models [0.5242869847419834]
This study introduces a framework to construct such a knowledge base, leveraging geospatial script semantics.
An example knowledge base, Geo-FuB, built from 154,075 Google Earth Engine scripts, is available on GitHub.
arXiv Detail & Related papers (2024-10-28T12:50:27Z)
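A hedged sketch of how an operator-function knowledge base might be seeded from Earth Engine JavaScript scripts; the regex and frequency counting are illustrative only, not the paper's semantics-based method.

```python
import re
from collections import Counter

# Matches Earth Engine operator calls such as ee.ImageCollection( or
# ee.Image.rgb( in raw JavaScript source.
GEE_CALL = re.compile(r"\bee\.\w+(?:\.\w+)*\s*\(")

def harvest_operators(scripts: list[str]) -> Counter:
    counts: Counter = Counter()
    for src in scripts:
        for match in GEE_CALL.findall(src):
            counts[match.rstrip("( \t")] += 1
    return counts

# usage: harvest_operators(['var s2 = ee.ImageCollection("COPERNICUS/S2");'])
# -> Counter({'ee.ImageCollection': 1})
```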
- GeoCode-GPT: A Large Language Model for Geospatial Code Generation Tasks [1.7687829461198472]
This paper presents and open-sources the GeoCode-PT and GeoCode-SFT corpora, along with the GeoCode-Eval evaluation dataset.
By leveraging QLoRA and LoRA for pretraining and fine-tuning, we introduce GeoCode-GPT-7B, the first LLM focused on geospatial code generation.
Experimental results show that GeoCode-GPT outperforms other models in multiple-choice accuracy by 9.1% to 32.1%, in code summarization ability by 5.4% to 21.7%, and in code generation capability by 1.2% to 25.1%.
arXiv Detail & Related papers (2024-10-22T13:57:55Z)
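A sketch of a LoRA fine-tuning setup in the spirit of GeoCode-GPT, using the Hugging Face `peft` library; the base checkpoint, target modules, and hyperparameters are assumptions, not the paper's reported values.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed base checkpoint; the paper fine-tunes a Code LLaMA-class 7B model.
base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # a small fraction of the 7B weights
```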
- Evaluation of Code LLMs on Geospatial Code Generation [1.6834474847800562]
Large Language Models (LLMs) can generate Python code for data science and machine learning applications.
Here, we show how we constructed an evaluation benchmark for code generation models based on a selection of geospatial tasks.
We hope our dataset will contribute to the development of new models capable of solving geospatial coding tasks with high accuracy.
arXiv Detail & Related papers (2024-10-06T20:34:03Z)
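A minimal pass/fail harness of the kind such benchmarks rely on; the assertion-based check format is an assumption, and a real harness would sandbox execution.

```python
def run_task(generated_code: str, check: str) -> bool:
    """Execute the candidate, then an assertion-based check, in one namespace."""
    env: dict = {}
    try:
        exec(generated_code, env)   # NOTE: never exec untrusted code unsandboxed
        exec(check, env)            # e.g. "assert round(area_km2, 1) == 12.5"
        return True
    except Exception:
        return False

def pass_rate(samples: list[str], check: str) -> float:
    # Fraction of sampled solutions that pass the check for one task.
    return sum(run_task(s, check) for s in samples) / max(len(samples), 1)
```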
- CodeRAG-Bench: Can Retrieval Augment Code Generation? [78.37076502395699]
We conduct a systematic, large-scale analysis of code generation using retrieval-augmented generation.
We first curate a comprehensive evaluation benchmark, CodeRAG-Bench, encompassing three categories of code generation tasks.
We examine top-performing models on CodeRAG-Bench by providing contexts retrieved from one or multiple sources.
arXiv Detail & Related papers (2024-06-20T16:59:52Z)
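A bare-bones retrieval-augmented generation sketch; the toy lexical retriever and prompt layout are placeholders for CodeRAG-Bench's actual retrieval sources (real setups use BM25 or dense embeddings).

```python
def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    # Toy lexical scoring: rank documents by word overlap with the query.
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def rag_generate(task: str, docs: list[str], llm) -> str:
    """llm(prompt) -> str is a generic model call (an assumption)."""
    context = "\n---\n".join(retrieve(task, docs))
    return llm(f"Use the documentation below if helpful.\n{context}\n\nTask: {task}")
```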
- VersiCode: Towards Version-controllable Code Generation [58.82709231906735]
Large Language Models (LLMs) have made tremendous strides in code generation, but existing research fails to account for the dynamic nature of software development.
We propose two novel tasks aimed at bridging this gap: version-specific code completion (VSCC) and version-aware code migration (VACM).
We conduct an extensive evaluation on VersiCode, which reveals that version-controllable code generation is indeed a significant challenge.
arXiv Detail & Related papers (2024-06-11T16:15:06Z)
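A tiny sketch of a version-specific code-completion (VSCC) prompt; the template wording is an assumption, not VersiCode's actual format.

```python
def vscc_prompt(library: str, version: str, snippet: str) -> str:
    # Pin the model to one library version; APIs from other versions are off-limits.
    return (
        f"Complete the following code using {library}=={version} only.\n"
        f"APIs added or removed in other versions must not be used.\n\n{snippet}"
    )

# usage: vscc_prompt("pandas", "0.25.3", "df = pd.DataFrame(...)\ndf.")
```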
- CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation [58.84212778960507]
We propose CodeGRAG, a Graphical Retrieval Augmented Code Generation framework to enhance the performance of LLMs.
CodeGRAG builds a graphical view of code blocks from their control flow and data flow to bridge the gap between programming languages and natural language.
Experiments and ablations on four datasets covering both C++ and Python validate the hard meta-graph prompt, the soft prompting technique, and the effectiveness of the objectives for the pretrained GNN expert.
arXiv Detail & Related papers (2024-05-03T02:48:55Z)
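A toy def-use graph extractor to illustrate the idea of a graphical code view; CodeGRAG's actual graphs combine control flow and data flow, which this sketch does not attempt.

```python
import ast

def def_use_edges(source: str) -> set[tuple[str, str]]:
    """Edges (used_name -> defined_name) for each simple assignment."""
    edges = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            targets = {t.id for t in node.targets if isinstance(t, ast.Name)}
            used = {n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)}
            edges |= {(u, t) for t in targets for u in used}
    return edges

# usage: def_use_edges("a = 1\nb = a + 2\nc = a * b")
# -> {('a', 'b'), ('a', 'c'), ('b', 'c')}
```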
- InfiBench: Evaluating the Question-Answering Capabilities of Code Large Language Models [56.723509505549536]
To our knowledge, InfiBench is the first large-scale free-form question-answering (QA) benchmark for code.
It comprises 234 carefully selected, high-quality Stack Overflow questions spanning 15 programming languages.
We conduct a systematic evaluation for over 100 latest code LLMs on InfiBench, leading to a series of novel and insightful findings.
arXiv Detail & Related papers (2024-03-11T02:06:30Z)
- GeoGalactica: A Scientific Large Language Model in Geoscience [95.15911521220052]
Large language models (LLMs) have achieved huge success for their general knowledge and ability to solve a wide spectrum of tasks in natural language processing (NLP).
We specialize an LLM for geoscience by further pre-training the model on a vast amount of geoscience texts and then applying supervised fine-tuning (SFT) with a custom-collected instruction tuning dataset.
We train GeoGalactica on a geoscience-related text corpus containing 65 billion tokens, which to our knowledge is the largest geoscience-specific text corpus.
We then fine-tune the model with 1 million instruction-tuning pairs.
arXiv Detail & Related papers (2023-12-31T09:22:54Z)
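To make the instruction-tuning step above concrete, here is the typical shape of an SFT record and a common flattening step; the Alpaca-style field names are a convention, not GeoGalactica's exact schema.

```python
# Hypothetical geoscience instruction pair in the common Alpaca-style layout.
sft_example = {
    "instruction": "Explain how NDVI is computed from red and near-infrared bands.",
    "input": "",
    "output": "NDVI = (NIR - Red) / (NIR + Red); values near 1 indicate dense vegetation.",
}

def format_for_training(ex: dict) -> str:
    # Concatenate fields into a single training string, a typical SFT step.
    return f"### Instruction:\n{ex['instruction']}\n### Response:\n{ex['output']}"
```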