SEQ-GPT: LLM-assisted Spatial Query via Example
- URL: http://arxiv.org/abs/2508.10486v1
- Date: Thu, 14 Aug 2025 09:41:55 GMT
- Title: SEQ-GPT: LLM-assisted Spatial Query via Example
- Authors: Ivan Khai Ze Lim, Ningyi Liao, Yiming Yang, Gerald Wei Yong Yip, Siqiang Luo
- Abstract summary: We introduce SEQ-GPT, a spatial query system powered by Large Language Models (LLMs). LLMs enable interactive operations in the SEQ process, including asking users to clarify query details and dynamically adjusting the search based on user feedback. SEQ-GPT offers an end-to-end demonstration for broadening spatial search with realistic data and application scenarios.
- Score: 31.748396191422383
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Contemporary spatial services such as online maps predominantly rely on user queries for location searches. However, the user experience is limited when performing complex tasks, such as searching for a group of locations simultaneously. In this study, we examine the extended scenario known as Spatial Exemplar Query (SEQ), where multiple relevant locations are jointly searched based on user-specified examples. We introduce SEQ-GPT, a spatial query system powered by Large Language Models (LLMs) towards more versatile SEQ search using natural language. The language capabilities of LLMs enable unique interactive operations in the SEQ process, including asking users to clarify query details and dynamically adjusting the search based on user feedback. We also propose a tailored LLM adaptation pipeline that aligns natural language with structured spatial data and queries through dialogue synthesis and multi-model cooperation. SEQ-GPT offers an end-to-end demonstration for broadening spatial search with realistic data and application scenarios.
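As an illustration only, here is a minimal sketch of the SEQ idea described above: given the categories of a user's example locations, find the most compact group of candidate locations that covers those categories. The toy dataset, function names, and scoring rule are all hypothetical and do not reflect SEQ-GPT's actual algorithm.

```python
from itertools import combinations
from math import dist

# Hypothetical mini-dataset of points of interest: (name, category, (x, y)).
POIS = [
    ("CafeA", "cafe", (0.0, 0.0)),
    ("GymA", "gym", (0.1, 0.2)),
    ("BookA", "bookstore", (0.2, 0.1)),
    ("CafeB", "cafe", (5.0, 5.0)),
    ("GymB", "gym", (5.1, 5.2)),
    ("BookB", "bookstore", (9.0, 0.0)),
]

def seq_score(example_cats, group):
    """Lower is better: heavily penalize missing categories, then spatial spread."""
    cats = {c for _, c, _ in group}
    missing = len(set(example_cats) - cats)
    spread = max(
        (dist(p, q) for (_, _, p), (_, _, q) in combinations(group, 2)),
        default=0.0,
    )
    return missing * 100 + spread

def seq_search(example_cats, pois, k):
    """Exhaustively pick the k-subset of POIs best matching the example categories."""
    return min(combinations(pois, k), key=lambda g: seq_score(example_cats, g))

best = seq_search(["cafe", "gym", "bookstore"], POIS, 3)
print(sorted(name for name, _, _ in best))  # the compact cluster near the origin
```

The exhaustive search is only workable for toy inputs; a real SEQ system would prune candidates spatially, and the LLM layer described in the abstract would sit on top, translating natural-language requests into such structured queries.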
Related papers
- LLM-based Semantic Search for Conversational Queries in E-commerce [1.3645712130536118]
We present an LLM-based semantic search framework that captures user intent from conversational queries. Our framework achieves strong precision and recall across various settings compared to baseline approaches on a real-world dataset.
arXiv Detail & Related papers (2026-01-23T06:35:28Z) - From Questions to Queries: An AI-powered Multi-Agent Framework for Spatial Text-to-SQL [0.4499833362998488]
Single-agent approaches often struggle with the semantic and syntactic complexities of spatial queries. We propose a multi-agent framework designed to accurately translate natural language questions into spatial queries. We evaluate our system using both the non-spatial KaggleDBQA benchmark and a new, comprehensive SpatialQA benchmark.
arXiv Detail & Related papers (2025-10-23T22:58:17Z) - DeepMMSearch-R1: Empowering Multimodal LLMs in Multimodal Web Search [61.77858432092777]
We present DeepMMSearch-R1, the first multimodal large language model capable of performing on-demand, multi-turn web searches. DeepMMSearch-R1 can initiate web searches based on relevant crops of the input image, making image search more effective. We conduct extensive experiments across a range of knowledge-intensive benchmarks to demonstrate the superiority of our approach.
arXiv Detail & Related papers (2025-10-14T17:59:58Z) - Show or Tell? Modeling the evolution of request-making in Human-LLM conversations [14.896858577447093]
We create and analyze a dataset of 211k real-world queries based on WildChat. We find significant differences in the language for request-making in the human-LLM scenario. We find that query patterns evolve from early ones emphasizing sole requests to combining more context later on.
arXiv Detail & Related papers (2025-08-02T06:08:37Z) - Text-to-SPARQL Goes Beyond English: Multilingual Question Answering Over Knowledge Graphs through Human-Inspired Reasoning [51.203811759364925]
mKGQAgent breaks down the task of converting natural language questions into SPARQL queries into modular, interpretable subtasks. Evaluated on the DBpedia- and Corporate-based KGQA benchmarks within the Text2SPARQL challenge 2025, our approach took first place among the participants.
arXiv Detail & Related papers (2025-07-22T19:23:03Z) - Leveraging LLMs to Enable Natural Language Search on Go-to-market Platforms [0.23301643766310368]
We implement and evaluate a solution for the Zoominfo product for sellers, which prompts the Large Language Models with natural language.
The intermediary search fields offer numerous advantages for each query, including the elimination of syntax errors.
Comprehensive experiments with closed, open source, and fine-tuned LLM models were conducted to demonstrate the efficacy of our approach.
arXiv Detail & Related papers (2024-11-07T03:58:38Z) - RoundTable: Leveraging Dynamic Schema and Contextual Autocomplete for Enhanced Query Precision in Tabular Question Answering [11.214912072391108]
Real-world datasets often feature a vast array of attributes and complex values.
Traditional methods cannot fully relay the dataset's size and complexity to Large Language Models.
We propose a novel framework that leverages Full-Text Search (FTS) on the input table.
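As a loose illustration of that idea (this is not the paper's system; the table, tokenizer, and helper names are made up), a tiny inverted index can shortlist the columns of a wide table that are relevant to a question, so only a compact slice of the schema needs to be handed to an LLM:

```python
from collections import defaultdict

# Hypothetical wide table: column name -> cell values.
TABLE = {
    "employee_name": ["Alice Ng", "Bob Tan", "Carol Lim"],
    "department": ["Sales", "Engineering", "Sales"],
    "office_city": ["Singapore", "Jakarta", "Singapore"],
}

def build_index(table):
    """Inverted index: lowercase token -> set of column names containing it."""
    index = defaultdict(set)
    for col, cells in table.items():
        for token in col.split("_"):       # index tokens from the column name
            index[token.lower()].add(col)
        for cell in cells:                 # and from every cell value
            for token in cell.split():
                index[token.lower()].add(col)
    return index

def shortlist_columns(question, index):
    """Return columns whose tokens overlap the question, most hits first."""
    hits = defaultdict(int)
    for token in question.lower().split():
        for col in index.get(token, ()):
            hits[col] += 1
    return sorted(hits, key=hits.get, reverse=True)

index = build_index(TABLE)
print(shortlist_columns("which department is Alice in", index))
```

A production system would use a real full-text engine (e.g. SQLite's FTS module) with stemming and ranking, but the narrowing effect on the LLM's context is the same.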
arXiv Detail & Related papers (2024-08-22T13:13:06Z) - UQE: A Query Engine for Unstructured Databases [71.49289088592842]
We investigate the potential of Large Language Models to enable unstructured data analytics.
We propose a new Universal Query Engine (UQE) that directly interrogates and draws insights from unstructured data collections.
arXiv Detail & Related papers (2024-06-23T06:58:55Z) - An Interactive Query Generation Assistant using LLM-based Prompt Modification and User Feedback [9.461978375200102]
The proposed interface supports automatic and interactive query generation over a monolingual or multilingual document collection. It enables users to refine the queries generated by different LLMs and to provide feedback on the retrieved documents or passages, and it incorporates this feedback as prompts to generate more effective queries.
arXiv Detail & Related papers (2023-11-19T04:42:24Z) - Interpreting User Requests in the Context of Natural Language Standing Instructions [89.12540932734476]
We develop NLSI, a language-to-program dataset consisting of over 2.4K dialogues spanning 17 domains.
A key challenge in NLSI is to identify which subset of the standing instructions is applicable to a given dialogue.
arXiv Detail & Related papers (2023-11-16T11:19:26Z) - Large Search Model: Redefining Search Stack in the Era of LLMs [63.503320030117145]
We introduce a novel conceptual framework called large search model, which redefines the conventional search stack by unifying search tasks with one large language model (LLM).
All tasks are formulated as autoregressive text generation problems, allowing for the customization of tasks through the use of natural language prompts.
This proposed framework capitalizes on the strong language understanding and reasoning capabilities of LLMs, offering the potential to enhance search result quality while simultaneously simplifying the existing cumbersome search stack.
arXiv Detail & Related papers (2023-10-23T05:52:09Z) - Text Summarization with Latent Queries [60.468323530248945]
We introduce LaQSum, the first unified text summarization system that learns Latent Queries from documents for abstractive summarization with any existing query forms.
Under a deep generative framework, our system jointly optimizes a latent query model and a conditional language model, allowing users to plug-and-play queries of any type at test time.
Our system robustly outperforms strong comparison systems across summarization benchmarks with different query types, document settings, and target domains.
arXiv Detail & Related papers (2021-05-31T21:14:58Z) - Acoustic span embeddings for multilingual query-by-example search [20.141444548841047]
In low- or zero-resource settings, QbE search is often addressed with approaches based on dynamic time warping (DTW).
Recent work has found that methods based on acoustic word embeddings (AWEs) can improve both performance and search speed.
We generalize AWE training to spans of words, producing acoustic span embeddings (ASE), and explore the application of AWE to arbitrary-length queries in multiple unseen languages.
arXiv Detail & Related papers (2020-11-24T00:28:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.