Natural Language Interaction with Databases on Edge Devices in the Internet of Battlefield Things
- URL: http://arxiv.org/abs/2506.06396v1
- Date: Thu, 05 Jun 2025 20:52:13 GMT
- Title: Natural Language Interaction with Databases on Edge Devices in the Internet of Battlefield Things
- Authors: Christopher D. Molek, Roberto Fronteddu, K. Brent Venable, Niranjan Suri,
- Abstract summary: The Internet of Battlefield Things (IoBT) gives rise to new opportunities for enhancing situational awareness. To increase the potential of IoBT for situational awareness in critical decision making, the data from these devices must be processed into consumer-ready information objects. We propose a workflow that makes use of natural language processing (NLP) to query a database technology and return a response in natural language.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The expansion of the Internet of Things (IoT) to the battlefield, the Internet of Battlefield Things (IoBT), gives rise to new opportunities for enhancing situational awareness. To increase the potential of IoBT for situational awareness in critical decision making, the data from these devices must be processed into consumer-ready information objects and made available to consumers on demand. To address this challenge, we propose a workflow that makes use of natural language processing (NLP) to query a database technology and return a response in natural language. Our solution utilizes Large Language Models (LLMs) sized for edge devices to perform NLP, as well as graph databases, which are well suited to the dynamic, connected networks that are pervasive in the IoBT. Our architecture employs LLMs both to map natural language questions to Cypher database queries and to summarize the database output back to the user in natural language. We evaluate several medium-sized LLMs on both of these tasks using a database representing publicly available data from the US Army's Multipurpose Sensing Area (MSA) at the Jornada Range in Las Cruces, NM. We observe that Llama 3.1 (8 billion parameters) outperforms the other models across all the considered metrics. Most importantly, we note that, unlike current methods, our two-step approach allows the relaxation of the Exact Match (EM) requirement between the produced Cypher queries and the ground-truth code, and in this way achieves a 19.4% increase in accuracy. Our workflow lays the groundwork for deploying LLMs on edge devices to enable natural language interactions with databases containing information objects for critical decision making.
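The two-step architecture described in the abstract (natural language question → Cypher query → graph database → natural language summary) can be sketched in a few lines of Python. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the graph schema hint, the prompts, and the `run_llm` stand-in for an on-device model are hypothetical, and the Neo4j connection details are placeholders.

```python
from neo4j import GraphDatabase  # pip install neo4j

def run_llm(prompt: str) -> str:
    """Placeholder for an edge-deployed LLM call (e.g., a quantized ~8B model)."""
    raise NotImplementedError

# Hypothetical, simplified sensor-graph schema used only for illustration.
SCHEMA_HINT = ("Nodes: (:Sensor {id, type, location}), (:Reading {value, time}); "
               "Relationships: (:Sensor)-[:REPORTED]->(:Reading)")

def question_to_cypher(question: str) -> str:
    # Step 1: map the natural language question to a Cypher query.
    prompt = (f"Graph schema: {SCHEMA_HINT}\n"
              f"Write a Cypher query that answers: {question}\n"
              "Return only the query.")
    return run_llm(prompt).strip()

def answer_question(question: str, uri: str = "bolt://localhost:7687",
                    auth=("neo4j", "password")) -> str:
    cypher = question_to_cypher(question)
    with GraphDatabase.driver(uri, auth=auth) as driver:
        records, _, _ = driver.execute_query(cypher)
        rows = [record.data() for record in records]
    # Step 2: summarize the raw database output back into natural language.
    summary_prompt = (f"Question: {question}\nDatabase result: {rows}\n"
                      "Answer the question in one or two sentences.")
    return run_llm(summary_prompt)
```

Keeping query generation and answer summarization as separate LLM calls is what allows the produced Cypher to differ textually from a ground-truth query as long as it returns the same records; only the second step consumes the query's output.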
Related papers
- A New Paradigm of User-Centric Wireless Communication Driven by Large Language Models [53.16213723669751]
The next generation of wireless communications seeks to deeply integrate artificial intelligence with user-centric communication networks. We propose a novel paradigm for wireless communication that innovatively incorporates natural language to structured query language conversion. We present a prototype system in which a dynamic semantic representation network at the physical layer adapts its encoding depth to meet user requirements.
arXiv Detail & Related papers (2025-04-16T01:43:36Z) - Explainable Multi-Modal Data Exploration in Natural Language via LLM Agent [6.147666891384964]
XMODE is a system that enables explainable, multi-modal data exploration in natural language. XMODE is inspired by a real-world use case that enables users to explore multi-modal information systems.
arXiv Detail & Related papers (2024-12-24T13:42:44Z) - Natural Language Query Engine for Relational Databases using Generative AI [0.0]
This article introduces an innovative solution that leverages Generative AI to bridge the gap, enabling users to query databases using natural language.
Our approach automatically translates natural language queries into SQL, ensuring both syntactic and semantic correctness, while also generating clear, natural language responses from the retrieved data.
By streamlining the interaction between users and databases, this method empowers individuals without technical expertise to engage with data directly and efficiently, democratizing access to valuable insights and enhancing productivity.
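For contrast with the Cypher-based workflow sketched above, a relational variant of the same NL-to-query-to-NL loop might look like the following. This is a hedged sketch, not this paper's code: it assumes a local SQLite database, reuses the hypothetical `run_llm` helper from the earlier snippet, and uses an EXPLAIN pass as one simple way to catch syntactically invalid SQL before execution.

```python
import sqlite3

def nl_to_sql(question: str, schema: str) -> str:
    # Assumed prompt format; run_llm is the same hypothetical on-device LLM call.
    return run_llm(f"Schema: {schema}\n"
                   f"Write one SQLite query answering: {question}\n"
                   "Return only the SQL.").strip()

def ask(question: str, db_path: str, schema: str) -> str:
    sql = nl_to_sql(question, schema)
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("EXPLAIN " + sql)       # parses the statement; raises on bad syntax
        rows = conn.execute(sql).fetchall()  # run the validated query
    finally:
        conn.close()
    # Turn the raw rows back into a natural language answer for the user.
    return run_llm(f"Question: {question}\nRows: {rows}\nAnswer briefly in plain English.")
```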
arXiv Detail & Related papers (2024-09-23T01:07:02Z) - Text2SQL is Not Enough: Unifying AI and Databases with TAG [47.45480855418987]
Table-Augmented Generation (TAG) is a paradigm for answering natural language questions over databases.
We develop benchmarks to study the TAG problem and find that standard methods answer no more than 20% of queries correctly.
arXiv Detail & Related papers (2024-08-27T00:50:14Z) - Relational Database Augmented Large Language Model [59.38841050766026]
Large language models (LLMs) excel in many natural language processing (NLP) tasks.
They can only incorporate new knowledge through training or supervised fine-tuning processes.
Meanwhile, precise, up-to-date, and private information is typically stored in relational databases.
arXiv Detail & Related papers (2024-07-21T06:19:10Z) - Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? [54.667202878390526]
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z) - Reliable, Adaptable, and Attributable Language Models with Retrieval [144.26890121729514]
Parametric language models (LMs) are trained on vast amounts of web data.
They face practical challenges such as hallucinations, difficulty in adapting to new data distributions, and a lack of verifiability.
We advocate for retrieval-augmented LMs to replace parametric LMs as the next generation of LMs.
arXiv Detail & Related papers (2024-03-05T18:22:33Z) - Augmented Large Language Models with Parametric Knowledge Guiding [72.71468058502228]
Large Language Models (LLMs) have significantly advanced natural language processing (NLP) with their impressive language understanding and generation capabilities.
Their performance may be suboptimal for domain-specific tasks that require specialized knowledge due to limited exposure to the related data.
We propose the novel Parametric Knowledge Guiding (PKG) framework, which equips LLMs with a knowledge-guiding module to access relevant knowledge.
arXiv Detail & Related papers (2023-05-08T15:05:16Z) - xDBTagger: Explainable Natural Language Interface to Databases Using Keyword Mappings and Schema Graph [0.17188280334580192]
Translating natural language queries (NLQ) into structured query language (SQL) in interfaces to relational databases is a challenging task.
We propose xDBTagger, an explainable hybrid translation pipeline that explains the decisions made along the way to the user both textually and visually.
xDBTagger is effective in terms of accuracy and translates queries up to 10,000 times more efficiently than other state-of-the-art pipeline-based systems.
arXiv Detail & Related papers (2022-10-07T18:17:09Z)