ZOGRASCOPE: A New Benchmark for Property Graphs
- URL: http://arxiv.org/abs/2503.05268v1
- Date: Fri, 07 Mar 2025 09:33:30 GMT
- Title: ZOGRASCOPE: A New Benchmark for Property Graphs
- Authors: Francesco Cazzaro, Justin Kleindienst, Sofia Marquez, Ariadna Quattoni
- Abstract summary: We introduce ZOGRASCOPE, a benchmark designed specifically for the Cypher query language. We show that semantic parsing over graphs is still a challenging open problem that cannot be solved by prompting LLMs alone.
- Score: 3.0748861313823
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Natural language interfaces to knowledge graphs have become increasingly important in recent years, enabling easy and efficient access to structured data. In particular, property graphs have seen growing adoption. However, this kind of graph remains relatively underrepresented in research, which has focused in large part on RDF-style graphs. As a matter of fact, there is a lack of resources for evaluating systems on property graphs, with many existing datasets featuring relatively simple queries. To address this gap, we introduce ZOGRASCOPE, a benchmark designed specifically for the Cypher query language. The benchmark includes a diverse set of manually annotated queries of varying complexity. We complement this paper with a set of experiments that test the performance of out-of-the-box LLMs of different sizes. Our experiments show that semantic parsing over graphs is still a challenging open problem that cannot be solved by prompting LLMs alone.
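To make the task concrete, below is a minimal sketch of the kind of text-to-Cypher problem the benchmark targets: a natural-language question is wrapped in a zero-shot prompt and an out-of-the-box LLM is asked to emit a Cypher query over a property graph. The toy schema, the example question, the reference query, and the `llm_generate` stub are illustrative placeholders and are not taken from ZOGRASCOPE.

```python
# Minimal sketch of zero-shot text-to-Cypher prompting.
# The schema, question, and llm_generate() stub are illustrative only,
# NOT items from the ZOGRASCOPE benchmark.

# A toy property-graph schema given to the model as context.
SCHEMA = """
(:Person {name, born})-[:ACTED_IN {role}]->(:Movie {title, released})
(:Person)-[:DIRECTED]->(:Movie)
"""

PROMPT_TEMPLATE = """You are a semantic parser for property graphs.
Graph schema:
{schema}
Translate the question into a single Cypher query. Return only the query.

Question: {question}
Cypher:"""


def llm_generate(prompt: str) -> str:
    """Placeholder for a call to whatever LLM is being evaluated.

    Here it simply returns a hand-written query so the sketch is
    self-contained and runnable.
    """
    return (
        "MATCH (p:Person)-[:ACTED_IN]->(m:Movie) "
        "WHERE m.released > 2000 "
        "RETURN p.name, count(m) AS n_movies "
        "ORDER BY n_movies DESC LIMIT 5"
    )


if __name__ == "__main__":
    question = "Which five actors appeared in the most movies released after 2000?"
    prompt = PROMPT_TEMPLATE.format(schema=SCHEMA, question=question)
    predicted_cypher = llm_generate(prompt)
    print(predicted_cypher)
    # An evaluation harness would compare the predicted query (or its
    # execution result on the graph) against a manually annotated gold query.
```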
Related papers
- GraphSOS: Graph Sampling and Order Selection to Help LLMs Understand Graphs Better [13.742220809751627]
GraphSOS is a novel framework for converting graph data into natural language text.
It features an Order Selector Module to ensure proper serialization order of the graph and a Subgraph Sampling Module to sample subgraphs with better structure for better reasoning.
Experiments on multiple datasets for node classification and graph question-answering demonstrate that GraphSOS improves LLMs' performance and ability on graph tasks.
arXiv Detail & Related papers (2025-01-24T11:55:57Z) - CypherBench: Towards Precise Retrieval over Full-scale Modern Knowledge Graphs in the LLM Era [4.369550829556578]
We introduce CypherBench, the first benchmark with 11 large-scale, multi-domain property graphs with 7.8 million entities and over 10,000 questions. We propose property graph views on top of the underlying RDF graph that can be efficiently queried by LLMs using Cypher.
arXiv Detail & Related papers (2024-12-24T23:22:04Z) - Can LLMs Convert Graphs to Text-Attributed Graphs? [35.53046810556242]
We propose Topology-Aware Node description Synthesis (TANS) to convert existing graphs into text-attributed graphs. We evaluate TANS on text-rich, text-limited, and text-free graphs, demonstrating its applicability.
arXiv Detail & Related papers (2024-12-13T13:32:59Z) - What Do LLMs Need to Understand Graphs: A Survey of Parametric Representation of Graphs [69.48708136448694]
Large language models (LLMs) are gaining prominence in the AI community for their expected reasoning and inference abilities. We believe that this kind of parametric representation of graphs, i.e., graph laws, can be a solution for making LLMs understand graph data as input.
arXiv Detail & Related papers (2024-10-16T00:01:31Z) - How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension [53.6373473053431]
This work introduces a benchmark to assess large language models' capabilities in graph pattern tasks.
We have developed a benchmark that evaluates whether LLMs can understand graph patterns based on either terminological or topological descriptions.
Our benchmark encompasses both synthetic and real datasets and covers a total of 11 tasks and 7 models.
arXiv Detail & Related papers (2024-10-04T04:48:33Z) - Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models [88.4320775961431]
We introduce ProGraph, a benchmark for large language models (LLMs) to process graphs. Our findings reveal that the performance of current LLMs is unsatisfactory, with the best model achieving only 36% accuracy. We propose the LLM4Graph datasets, which include crawled documents and auto-generated code based on 6 widely used graph libraries.
arXiv Detail & Related papers (2024-09-29T11:38:45Z) - Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs [60.71360240206726]
Large language models (LLMs) suffer from hallucinations, especially on knowledge-intensive tasks.
Existing works propose to augment LLMs with individual text units retrieved from external knowledge corpora.
We propose a framework called Graph Chain-of-thought (Graph-CoT) to augment LLMs with graphs by encouraging LLMs to reason on the graph iteratively.
arXiv Detail & Related papers (2024-04-10T15:41:53Z) - LLaGA: Large Language and Graph Assistant [73.71990472543027]
Large Language and Graph Assistant (LLaGA) is an innovative model to handle the complexities of graph-structured data.
LLaGA excels in versatility, generalizability and interpretability, allowing it to perform consistently well across different datasets and tasks.
Our experiments show that LLaGA delivers outstanding performance across four datasets and three tasks using one single model.
arXiv Detail & Related papers (2024-02-13T02:03:26Z) - G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering [61.93058781222079]
We develop a flexible question-answering framework targeting real-world textual graphs.
We introduce the first retrieval-augmented generation (RAG) approach for general textual graphs.
G-Retriever performs RAG over a graph by formulating this task as a Prize-Collecting Steiner Tree optimization problem.
arXiv Detail & Related papers (2024-02-12T13:13:04Z) - Beyond Text: A Deep Dive into Large Language Models' Ability on Understanding Graph Data [13.524529952170672]
Large language models (LLMs) have achieved impressive performance on many natural language processing tasks.
We aim to assess whether LLMs can effectively process graph data and leverage topological structures to enhance performance.
By comparing LLMs' performance with specialized graph models, we offer insights into the strengths and limitations of employing LLMs for graph analytics.
arXiv Detail & Related papers (2023-10-07T23:25:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.