Visual Diagrammatic Queries in ViziQuer: Overview and Implementation
- URL: http://arxiv.org/abs/2304.14825v1
- Date: Thu, 27 Apr 2023 13:16:32 GMT
- Title: Visual Diagrammatic Queries in ViziQuer: Overview and Implementation
- Authors: Jūlija Ovčiņņikova, Agris Šostaks, Kārlis Čerāns
- Abstract summary: ViziQuer is a visual query notation and tool offering visual diagrammatic means for describing rich data queries.
We describe the conceptual and technical solutions that allow mapping of the visual diagrammatic query notation into the textual SPARQL language.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs (KG) have become an important data organization paradigm.
The available textual query languages for information retrieval from KGs, as
SPARQL for RDF-structured data, do not provide means for involving
non-technical experts in the data access process. Visual query formalisms,
alongside form-based and natural language-based ones, offer means for easing
user involvement in the data querying process. ViziQuer is a visual query
notation and tool offering visual diagrammatic means for describing rich data
queries, involving optional and negation constructs, as well as aggregation and
subqueries. In this paper we review the visual ViziQuer notation from the
end-user point of view and describe the conceptual and technical solutions
(including abstract syntax model, followed by a generation model for textual
queries) that allow mapping of the visual diagrammatic query notation into the
textual SPARQL language, thus enabling the execution of rich visual queries
over the actual knowledge graphs. The described solutions demonstrate the
viability of the model-based approach in translating complex visual notation
into a complex textual one; they serve as semantics by implementation
description of the ViziQuer language and provide building blocks for further
services in the ViziQuer tool context.
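The abstract describes a model-based pipeline: visual queries are first captured in an abstract syntax model, which is then walked by a generation model that emits textual SPARQL. The sketch below illustrates that idea in miniature, assuming a hypothetical abstract-syntax class (`QueryNode`) and mapping rules; it is not the actual ViziQuer implementation, only a simplified analogue of translating one visual node with selected attributes into a SPARQL SELECT query.

```python
# Minimal sketch of generating SPARQL text from an abstract syntax model.
# QueryNode and the mapping rules are illustrative assumptions, not the
# real ViziQuer abstract syntax.
from dataclasses import dataclass, field

@dataclass
class QueryNode:
    """One visual node: an RDF class plus the attributes selected on it."""
    rdf_class: str                                   # e.g. ":Student"
    attributes: list = field(default_factory=list)   # e.g. [":name", ":age"]

def to_sparql(node: QueryNode) -> str:
    """Generate a SELECT query from the abstract syntax of a single node."""
    var = "?x"
    out_vars = [f"?v{i}" for i in range(len(node.attributes))]
    # One rdf:type triple for the class, one triple per selected attribute.
    triples = [f"{var} a {node.rdf_class} ."]
    triples += [f"{var} {attr} {v} ." for attr, v in zip(node.attributes, out_vars)]
    body = "\n  ".join(triples)
    return f"SELECT {' '.join(out_vars)}\nWHERE {{\n  {body}\n}}"

print(to_sparql(QueryNode(":Student", [":name", ":age"])))
```

A real translation of the full notation must additionally handle optional and negation constructs, aggregation, and subqueries, which is where the paper's layered abstract-syntax and generation models earn their keep.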
Related papers
- Seeing Through Words: Controlling Visual Retrieval Quality with Language Models [68.49490036960559]
We propose a new paradigm of quality-controllable retrieval, which enriches short queries with contextual details while incorporating explicit notions of image quality. Our key idea is to leverage a generative language model as a query completion function, extending underspecified queries into descriptive forms. Our proposed approach significantly improves retrieval results and provides effective quality control, bridging the gap between the expressive capacity of modern VLMs and the underspecified nature of short user queries.
arXiv Detail & Related papers (2026-02-24T18:20:57Z) - VizGen: Data Exploration and Visualization from Natural Language via a Multi-Agent AI Architecture [0.0]
VizGen is an AI-assisted graph generation system that empowers users to create meaningful visualizations using natural language. Built on a multi-agent architecture, VizGen handles SQL generation, graph creation, customization, and insight extraction.
arXiv Detail & Related papers (2025-09-26T11:31:00Z) - Difference Views for Visual Graph Query Building [3.331215924738821]
Knowledge Graphs (KGs) contain vast amounts of linked resources that encode knowledge in various domains, which can be queried and searched for using languages like SPARQL. Existing visual query builders enable non-expert users to construct SPARQL queries and utilize the knowledge contained in these graphs.
arXiv Detail & Related papers (2025-08-07T12:14:33Z) - Adaptive Markup Language Generation for Contextually-Grounded Visual Document Understanding [42.506971197471195]
We introduce two fine-grained structured datasets: DocMark-Pile, comprising approximately 3.8M pretraining data pairs for document parsing, and DocMark-Instruct, featuring 624k fine-tuning data annotations for grounded instruction following. Our proposed model significantly outperforms existing state-of-the-art MLLMs across a range of visual document understanding benchmarks.
arXiv Detail & Related papers (2025-05-08T17:37:36Z) - OnSET: Ontology and Semantic Exploration Toolkit [5.1293983340834055]
We propose the Ontology and Semantic Exploration Toolkit (OnSET).
OnSET allows non-expert users to easily build queries with visual user guidance provided by topic modelling and semantic search.
OnSET combines efficient and open platforms to deploy the system on commodity hardware.
arXiv Detail & Related papers (2025-04-11T09:18:06Z) - Augmenting a Large Language Model with a Combination of Text and Visual Data for Conversational Visualization of Global Geospatial Data [51.57559025799189]
We present a method for augmenting a Large Language Model (LLM) with a combination of text and visual data.
We address this problem by merging a text description of a visualization and dataset with snapshots of the visualization.
arXiv Detail & Related papers (2025-01-16T13:16:37Z) - Knowledge-Aware Query Expansion with Large Language Models for Textual and Relational Retrieval [49.42043077545341]
We propose a knowledge-aware query expansion framework, augmenting LLMs with structured document relations from knowledge graph (KG)
We leverage document texts as rich KG node representations and use document-based relation filtering for our Knowledge-Aware Retrieval (KAR)
arXiv Detail & Related papers (2024-10-17T17:03:23Z) - A large collection of bioinformatics question-query pairs over federated knowledge graphs: methodology and applications [0.0838491111002084]
We introduce a large collection of human-written natural language questions and their corresponding SPARQL queries over federated bioinformatics knowledge graphs.
We propose a methodology to uniformly represent the examples with minimal metadata, based on existing standards.
arXiv Detail & Related papers (2024-10-08T13:08:07Z) - QueryBuilder: Human-in-the-Loop Query Development for Information Retrieval [12.543590253664492]
We present a novel, interactive system called QueryBuilder.
It allows a novice, English-speaking user to create queries with a small amount of effort.
It rapidly develops cross-lingual information retrieval queries corresponding to the user's information needs.
arXiv Detail & Related papers (2024-09-07T00:46:58Z) - GQE: Generalized Query Expansion for Enhanced Text-Video Retrieval [56.610806615527885]
This paper introduces a novel data-centric approach, Generalized Query Expansion (GQE), to address the inherent information imbalance between text and video.
By adaptively segmenting videos into short clips and employing zero-shot captioning, GQE enriches the training dataset with comprehensive scene descriptions.
GQE achieves state-of-the-art performance on several benchmarks, including MSR-VTT, MSVD, LSMDC, and VATEX.
arXiv Detail & Related papers (2024-08-14T01:24:09Z) - Improving Retrieval-augmented Text-to-SQL with AST-based Ranking and Schema Pruning [10.731045939849125]
We focus on Text-to-SQL semantic parsing from the perspective of retrieval-augmented generation.
Motivated by challenges related to the size of commercial database schemata and the deployability of business intelligence solutions, we propose ASTReS, which dynamically retrieves input database information.
arXiv Detail & Related papers (2024-07-03T15:55:14Z) - Syntax Tree Constrained Graph Network for Visual Question Answering [14.059645822205718]
Visual Question Answering (VQA) aims to automatically answer natural language questions related to given image content.
We propose a novel Syntax Tree Constrained Graph Network (STCGN) for VQA based on entity message passing and syntax tree.
We then design a message-passing mechanism for phrase-aware visual entities and capture entity features according to a given visual context.
arXiv Detail & Related papers (2023-09-17T07:03:54Z) - Decomposing Complex Queries for Tip-of-the-tongue Retrieval [72.07449449115167]
Complex queries describe content elements (e.g., book characters or events), information beyond the document text.
This retrieval setting, called tip of the tongue (TOT), is especially challenging for models reliant on lexical and semantic overlap between query and document text.
We introduce a simple yet effective framework for handling such complex queries by decomposing the query into individual clues, routing those as sub-queries to specialized retrievers, and ensembling the results.
arXiv Detail & Related papers (2023-05-24T11:43:40Z) - Improving Text-to-SQL Semantic Parsing with Fine-grained Query Understanding [84.04706075621013]
We present a general-purpose, modular neural semantic parsing framework based on token-level fine-grained query understanding.
Our framework consists of three modules: a named entity recognizer (NER), a neural entity linker (NEL), and a neural semantic parser (NSP).
arXiv Detail & Related papers (2022-09-28T21:00:30Z) - Towards Complex Document Understanding By Discrete Reasoning [77.91722463958743]
Document Visual Question Answering (VQA) aims to understand visually-rich documents to answer questions in natural language.
We introduce a new Document VQA dataset, named TAT-DQA, which consists of 3,067 document pages and 16,558 question-answer pairs.
We develop a novel model named MHST that takes into account the information in multi-modalities, including text, layout and visual image, to intelligently address different types of questions.
arXiv Detail & Related papers (2022-07-25T01:43:19Z) - Tree-Augmented Cross-Modal Encoding for Complex-Query Video Retrieval [98.62404433761432]
The rapid growth of user-generated videos on the Internet has intensified the need for text-based video retrieval systems.
Traditional methods mainly favor the concept-based paradigm on retrieval with simple queries.
We propose a Tree-augmented Cross-modal Encoding method by jointly learning the linguistic structure of queries and the temporal representation of videos.
arXiv Detail & Related papers (2020-07-06T02:50:27Z) - Dependently Typed Knowledge Graphs [4.157595789003928]
We show how standardized semantic web technologies (RDF and its query language SPARQL) can be reproduced in a unified manner with dependent type theory.
In addition to providing the basic functionalities of knowledge graphs, dependent types add expressiveness in encoding both entities and queries.
arXiv Detail & Related papers (2020-03-08T14:04:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.