Taming the Expressiveness and Programmability of Graph Analytical
Queries
- URL: http://arxiv.org/abs/2004.09045v2
- Date: Wed, 30 Sep 2020 07:28:06 GMT
- Title: Taming the Expressiveness and Programmability of Graph Analytical
Queries
- Authors: Lu Qin, Longbin Lai, Kongzhang Hao, Zhongxin Zhou, Yiwei Zhao, Yuxing
Han, Xuemin Lin, Zhengping Qian, Jingren Zhou
- Abstract summary: Graph databases have enjoyed a boom in the last decade, and graph queries have accordingly attracted much attention.
We focus on analytical queries in this paper.
Motivated by this, we propose the FLASH DSL, named after its three primitive operators: Filter, LocAl, and PuSH.
- Score: 74.65487393973993
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph databases have enjoyed a boom in the last decade, and graph
queries have accordingly attracted much attention from both academia and
industry. We focus on analytical queries in this paper. While analyzing
existing domain-specific languages (DSLs) for analytical queries from the
perspectives of completeness, expressiveness, and programmability, we find
that none of the existing work achieves satisfactory coverage of all three.
Motivated by this, we propose the FLASH DSL, named after its three primitive
operators: Filter, LocAl, and PuSH. We prove that FLASH is Turing complete
(completeness), and show that it achieves both good expressiveness and
programmability for analytical queries. We provide an implementation of FLASH
based on code generation, and compare it with native C++ code and existing
DSLs using representative queries. The experimental results demonstrate
FLASH's expressiveness and its capability to program complex algorithms that
achieve satisfactory runtimes.
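The abstract does not show FLASH's syntax, but the round structure its three primitives suggest can be sketched in plain Python. The following is a hypothetical illustration, here applied to unweighted single-source shortest paths; the function name and signatures are assumptions for exposition, not FLASH's actual API.

```python
def flash_style_sssp(adj, source):
    """Sketch of a Filter/Local/Push iteration, applied to unweighted
    single-source shortest paths. Illustrative only, not FLASH syntax."""
    INF = float("inf")
    dist = {v: INF for v in adj}
    dist[source] = 0
    frontier = {source}
    while frontier:
        # Filter: select the vertices active in this round.
        active = {v for v in frontier if dist[v] < INF}
        # Local: per-vertex computation on the selected vertices.
        candidate = {v: dist[v] + 1 for v in active}
        # Push: propagate values along outgoing edges, keeping improvements.
        frontier = set()
        for v, d in candidate.items():
            for u in adj[v]:
                if d < dist[u]:
                    dist[u] = d
                    frontier.add(u)
    return dist
```

Many vertex-centric analytical algorithms (BFS, label propagation, connected components) fit this select/compute/propagate loop, which is why three such primitives can cover a wide range of queries.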
Related papers
- Instance-Aware Graph Prompt Learning [71.26108600288308]
We introduce Instance-Aware Graph Prompt Learning (IA-GPL) in this paper.
The process involves generating intermediate prompts for each instance using a lightweight architecture.
Experiments conducted on multiple datasets and settings showcase the superior performance of IA-GPL compared to state-of-the-art baselines.
arXiv Detail & Related papers (2024-11-26T18:38:38Z) - GraphReader: Building Graph-based Agent to Enhance Long-Context Abilities of Large Language Models [58.08177466768262]
Long-context capabilities are essential for large language models (LLMs) to tackle complex and long-input tasks.
We introduce GraphReader, a graph-based agent system designed to handle long texts by structuring them into a graph and employing an agent to explore this graph autonomously.
Experimental results on the LV-Eval dataset reveal that GraphReader, using a 4k context window, consistently outperforms GPT-4-128k across context lengths from 16k to 256k by a large margin.
arXiv Detail & Related papers (2024-06-20T17:57:51Z) - Localized RETE for Incremental Graph Queries [1.3858051019755282]
We propose an extension semantics that enables local yet fully incremental execution of graph queries.
The proposed technique can significantly improve performance in terms of memory consumption and execution time in favorable cases, but may incur a noticeable linear overhead in unfavorable cases.
arXiv Detail & Related papers (2024-05-02T10:00:37Z) - Graph Prompt Learning: A Comprehensive Survey and Beyond [24.64987655155218]
This paper presents a pioneering survey of the emerging domain of graph prompts in Artificial General Intelligence (AGI).
We propose a unified framework for understanding graph prompt learning, offering clarity on prompt tokens, token structures, and insertion patterns in the graph domain.
A comprehensive taxonomy categorizes over 100 works in this field, aligning them with pre-training tasks across node-level, edge-level, and graph-level objectives.
arXiv Detail & Related papers (2023-11-28T05:36:59Z) - Visual In-Context Prompting [100.93587329049848]
In this paper, we introduce a universal visual in-context prompting framework for vision tasks such as open-set segmentation and detection.
We build on top of an encoder-decoder architecture, and develop a versatile prompt encoder to support a variety of prompts like strokes, boxes, and points.
Our extensive explorations show that the proposed visual in-context prompting elicits extraordinary referring and generic segmentation capabilities.
arXiv Detail & Related papers (2023-11-22T18:59:48Z) - PRODIGY: Enabling In-context Learning Over Graphs [112.19056551153454]
In-context learning is the ability of a pretrained model to adapt to novel and diverse downstream tasks.
We develop PRODIGY, the first pretraining framework that enables in-context learning over graphs.
arXiv Detail & Related papers (2023-05-21T23:16:30Z) - Neural Graph Reasoning: Complex Logical Query Answering Meets Graph
Databases [63.96793270418793]
Complex logical query answering (CLQA) is a recently emerged task of graph machine learning.
We introduce the concept of Neural Graph Databases (NGDBs).
NGDB consists of a Neural Graph Storage and a Neural Graph Engine.
arXiv Detail & Related papers (2023-03-26T04:03:37Z) - Domain Specific Question Answering Over Knowledge Graphs Using Logical
Programming and Large Language Models [10.258158633354686]
Our approach integrates classic logical programming languages into large language models (LLMs).
Our experimental results demonstrate that our method achieves accurate identification of correct answer entities for all test questions, even when trained on a small fraction of annotated data.
arXiv Detail & Related papers (2023-03-03T20:35:38Z) - Momentum Decoding: Open-ended Text Generation As Graph Exploration [49.812280360794894]
Open-ended text generation with autoregressive language models (LMs) is one of the core tasks in natural language processing.
We formulate open-ended text generation from a new perspective, i.e., we view it as an exploration process within a directed graph.
We propose a novel decoding method, momentum decoding, which encourages the LM to explore new nodes outside the current graph.
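The graph-exploration view above can be illustrated with a toy greedy decoder that penalizes re-traversing an edge already present in the generation graph, nudging the model toward unexplored nodes. This is a simplified sketch under assumed names (`momentum_style_decode`, `score_fn`), not the paper's exact formulation.

```python
def momentum_style_decode(score_fn, prompt, steps, penalty=0.5):
    """Greedy decoding where generation is a walk over a directed graph of
    bigram edges; revisiting an existing edge incurs a score penalty.
    score_fn(seq) returns a dict mapping candidate tokens to scores."""
    seq = list(prompt)
    edges = set(zip(seq, seq[1:]))  # edges of the generation graph so far
    for _ in range(steps):
        prev = seq[-1]
        # Penalize candidates whose edge (prev, tok) was already traversed.
        best_tok, _ = max(
            score_fn(seq).items(),
            key=lambda kv: kv[1] - (penalty if (prev, kv[0]) in edges else 0.0),
        )
        seq.append(best_tok)
        edges.add((prev, best_tok))
    return seq
```

With a stationary score function that always prefers the same token, the penalty is what breaks repetition loops, which is the intuition behind treating degeneration as getting stuck in a cycle of the graph.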
arXiv Detail & Related papers (2022-12-05T11:16:47Z) - Interactive Visual Pattern Search on Graph Data via Graph Representation
Learning [20.795511688640296]
We propose GraphQ, a visual analytics system that supports human-in-the-loop, example-based subgraph pattern search.
To support fast, interactive queries, we use graph neural networks (GNNs) to encode a graph as a fixed-length latent vector representation.
We also propose a novel GNN for node-alignment called NeuroAlign to facilitate easy validation and interpretation of the query results.
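The encode-then-search pattern behind this kind of interactive query can be sketched in a few lines: embed every graph to a fixed-length vector offline, then answer a query by nearest-neighbor search in embedding space. The encoder below is a stand-in (a mean of node feature vectors), not GraphQ's actual GNN; all names are illustrative.

```python
def encode(graph):
    """Stand-in graph encoder: mean-pool node feature vectors into one
    fixed-length vector. graph maps node -> list[float] of equal length."""
    feats = list(graph.values())
    dim = len(feats[0])
    return [sum(f[i] for f in feats) / len(feats) for i in range(dim)]

def nearest(query_graph, corpus):
    """Return the name of the corpus graph whose embedding is closest
    (squared Euclidean distance) to the query graph's embedding."""
    q = encode(query_graph)
    def dist(vec):
        return sum((a - b) ** 2 for a, b in zip(q, vec))
    return min(corpus, key=lambda name: dist(encode(corpus[name])))
```

In a real system the corpus embeddings would be precomputed and indexed, so each interactive query costs one encoder pass plus an index lookup rather than a scan.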
arXiv Detail & Related papers (2022-02-18T22:30:28Z) - How to Design Sample and Computationally Efficient VQA Models [53.65668097847456]
We find that representing the text as probabilistic programs and images as object-level scene graphs best satisfy these desiderata.
We extend existing models to leverage these soft programs and scene graphs to train on question answer pairs in an end-to-end manner.
arXiv Detail & Related papers (2021-03-22T01:48:16Z) - Interpretable Neural Computation for Real-World Compositional Visual
Question Answering [4.3668650778541895]
We build an interpretable framework for real-world compositional VQA.
In our framework, images and questions are disentangled into scene graphs and programs, and a symbolic program runs on them with full transparency to select the attention regions.
Experiments conducted on the GQA benchmark demonstrate that our framework outperforms prior compositional methods and achieves competitive accuracy among monolithic ones.
arXiv Detail & Related papers (2020-10-10T05:46:22Z) - HyperBench: A Benchmark and Tool for Hypergraphs and Empirical Findings [8.37315177713779]
We develop a repository of decomposition software and a workbench for inserting, analyzing, and retrieving hypergraphs.
We also develop a new benchmark of hypergraphs stemming from disparate CQ and CSP collections.
We describe a number of actual experiments we carried out with this new infrastructure.
arXiv Detail & Related papers (2020-09-02T13:08:55Z) - Weakly Supervised Visual Semantic Parsing [49.69377653925448]
Scene Graph Generation (SGG) aims to extract entities, predicates and their semantic structure from images.
Existing SGG methods require millions of manually annotated bounding boxes for training.
We propose Visual Semantic Parsing (VSPNet), a graph-based weakly supervised learning framework.
arXiv Detail & Related papers (2020-01-08T03:46:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.