NeuroLit Navigator: A Neurosymbolic Approach to Scholarly Article Searches for Systematic Reviews
- URL: http://arxiv.org/abs/2503.00278v1
- Date: Sat, 01 Mar 2025 01:11:24 GMT
- Title: NeuroLit Navigator: A Neurosymbolic Approach to Scholarly Article Searches for Systematic Reviews
- Authors: Vedant Khandelwal, Kaushik Roy, Valerie Lookingbill, Ritvik Garimella, Harshul Surana, Heather Heckman, Amit Sheth,
- Abstract summary: The "NeuroLit Navigator" combines domain-specific LLMs with structured knowledge sources like Medical Subject Headings (MeSH) and the Unified Medical Language System (UMLS). This integration enhances query formulation, expands search vocabularies, and deepens search scopes, enabling more precise searches. The NeuroLit Navigator has reduced the time required for initial literature searches by 90%.
- Score: 15.32315124754677
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The introduction of Large Language Models (LLMs) has significantly impacted various fields, including education, for example, by enabling the creation of personalized learning materials. However, their use in Systematic Reviews (SRs) reveals limitations such as restricted access to specialized vocabularies, lack of domain-specific reasoning, and a tendency to generate inaccurate information. Existing SR tools often rely on traditional NLP methods and fail to address these issues adequately. To overcome these challenges, we developed the "NeuroLit Navigator," a system that combines domain-specific LLMs with structured knowledge sources like Medical Subject Headings (MeSH) and the Unified Medical Language System (UMLS). This integration enhances query formulation, expands search vocabularies, and deepens search scopes, enabling more precise searches. Deployed in multiple universities and tested by over a dozen librarians, the NeuroLit Navigator has reduced the time required for initial literature searches by 90%. Despite this efficiency, the initial set of articles retrieved can vary in relevance and quality. Nonetheless, the system has greatly improved the reproducibility of search results, demonstrating its potential to support librarians in the SR process.
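The core mechanism the abstract describes, expanding a query with controlled-vocabulary synonyms before searching, can be illustrated with a small sketch. The `MESH_SYNONYMS` table and `expand_query` function below are hypothetical stand-ins, not the NeuroLit Navigator's code; a real system would look concepts up in MeSH or the UMLS Metathesaurus rather than a hardcoded dictionary.

```python
# Hypothetical mini vocabulary standing in for MeSH/UMLS lookups;
# a real system would query the UMLS Metathesaurus instead.
MESH_SYNONYMS = {
    "heart attack": ["myocardial infarction", "MI"],
    "high blood pressure": ["hypertension"],
}

def expand_query(terms):
    """Build a PubMed-style boolean query: each concept becomes an
    OR-group of the original term plus its vocabulary synonyms."""
    groups = []
    for term in terms:
        variants = [term] + MESH_SYNONYMS.get(term.lower(), [])
        groups.append("(" + " OR ".join(f'"{v}"' for v in variants) + ")")
    return " AND ".join(groups)

# expand_query(["heart attack", "aspirin"]) yields
# ("heart attack" OR "myocardial infarction" OR "MI") AND ("aspirin")
```

OR-grouping synonyms within a concept while AND-ing across concepts is what lets vocabulary expansion widen recall without losing the query's intersection semantics.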
Related papers
- Introducing ORKG ASK: an AI-driven Scholarly Literature Search and Exploration System Taking a Neuro-Symbolic Approach [4.4684259220459035]
ASK (Assistant for Scientific Knowledge) is an AI-driven scholarly literature search and exploration system. The system allows users to input research questions in natural language and retrieve relevant articles. It automatically extracts key information and generates answers to research questions using a Retrieval-Augmented Generation (RAG) approach.
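The retrieve-then-generate loop behind such RAG systems can be sketched generically. This is not ORKG ASK's actual implementation: the bag-of-words `embed` stands in for a real dense embedder, and the prompt stands in for a real LLM call.

```python
import math
from collections import Counter

def embed(text):
    # bag-of-words stand-in for a real sentence embedder
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, corpus, k=2):
    # rank documents by similarity to the question, keep the top k
    q = embed(question)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(question, passages):
    # the retrieved passages ground the generator's answer
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\nQ: {question}\nA:"
```

Grounding the generator in retrieved passages is the step that distinguishes RAG from free-form generation: the model answers from evidence rather than from parametric memory alone.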
arXiv Detail & Related papers (2025-12-18T11:25:14Z)
- How Do LLM-Generated Texts Impact Term-Based Retrieval Models? [76.92519309816008]
This paper investigates the influence of large language models (LLMs) on term-based retrieval models. Our linguistic analysis reveals that LLM-generated texts exhibit smoother high-frequency and steeper low-frequency Zipf slopes. Our study further explores whether term-based retrieval models demonstrate source bias, concluding that these models prioritize documents whose term distributions closely correspond to those of the queries.
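Zipf slopes of the kind compared in that study can be estimated by regressing log frequency on log rank over a corpus's term counts. The helper below is a minimal stdlib sketch of that measurement, not the paper's code; for natural text the slope typically lands near -1.

```python
import math
from collections import Counter

def zipf_slope(text, top_k=None):
    """Least-squares slope of log(frequency) vs. log(rank)
    over the corpus's term-frequency distribution."""
    freqs = sorted(Counter(text.lower().split()).values(), reverse=True)
    if top_k:
        freqs = freqs[:top_k]
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

Comparing this slope between human-written and LLM-generated corpora (restricting to the head or tail via `top_k`) is one way to quantify the "smoother high-frequency, steeper low-frequency" pattern the summary reports.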
arXiv Detail & Related papers (2025-08-25T06:43:27Z)
- Context-Aware Scientific Knowledge Extraction on Linked Open Data using Large Language Models [0.0]
This paper introduces WISE (Workflow for Intelligent Scientific Knowledge Extraction), a system to extract, refine, and rank query-specific knowledge. WISE delivers detailed, organized answers by systematically exploring and synthesizing knowledge from diverse sources.
arXiv Detail & Related papers (2025-06-21T04:22:34Z)
- NANOGPT: A Query-Driven Large Language Model Retrieval-Augmented Generation System for Nanotechnology Research [7.520798704421448]
A Large Language Model Retrieval-Augmented Generation (LLM-RAG) system tailored for nanotechnology research. The system retrieves relevant literature by utilizing Google Scholar's advanced search and by scraping open-access papers from Elsevier, Springer Nature, and ACS Publications.
arXiv Detail & Related papers (2025-02-27T21:40:22Z)
- Scholar Name Disambiguation with Search-enhanced LLM Across Language [0.2302001830524133]
This paper proposes a novel approach by leveraging search-enhanced language models across multiple languages to improve name disambiguation.
By utilizing the powerful query rewriting, intent recognition, and data indexing capabilities of search engines, our method can gather richer information for distinguishing between entities and extracting profiles.
arXiv Detail & Related papers (2024-11-26T04:39:46Z)
- Knowledge Tagging with Large Language Model based Multi-Agent System [17.53518487546791]
This paper investigates the use of a multi-agent system to address the limitations of previous algorithms. We highlight the significant potential of an LLM-based multi-agent system in overcoming the challenges that previous methods have encountered.
arXiv Detail & Related papers (2024-09-12T21:39:01Z)
- Retrieval-Enhanced Machine Learning: Synthesis and Opportunities [60.34182805429511]
Retrieval enhancement can be extended to a broader spectrum of machine learning (ML).
This work introduces a formal framework for this paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing the literature across various ML domains with consistent notation, which is missing from the current literature.
The goal of this work is to equip researchers across various disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, thereby fostering interdisciplinary future research.
arXiv Detail & Related papers (2024-07-17T20:01:21Z)
- A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models [71.25225058845324]
Large Language Models (LLMs) have demonstrated revolutionary abilities in language understanding and generation.
Retrieval-Augmented Generation (RAG) can offer reliable and up-to-date external knowledge.
RA-LLMs have emerged to harness external and authoritative knowledge bases, rather than relying on the model's internal knowledge.
arXiv Detail & Related papers (2024-05-10T02:48:45Z)
- Had enough of experts? Quantitative knowledge retrieval from large language models [4.091195951668217]
Large language models (LLMs) have been extensively studied for their abilities to generate convincing natural language sequences. We introduce a framework that leverages LLMs to enhance Bayesian models by eliciting expert-like prior knowledge and imputing missing data.
arXiv Detail & Related papers (2024-02-12T16:32:37Z)
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- Large Search Model: Redefining Search Stack in the Era of LLMs [63.503320030117145]
We introduce a novel conceptual framework called the large search model, which redefines the conventional search stack by unifying search tasks with one large language model (LLM).
All tasks are formulated as autoregressive text generation problems, allowing for the customization of tasks through the use of natural language prompts.
This proposed framework capitalizes on the strong language understanding and reasoning capabilities of LLMs, offering the potential to enhance search result quality while simultaneously simplifying the existing cumbersome search stack.
arXiv Detail & Related papers (2023-10-23T05:52:09Z)
- Large Language Models for Information Retrieval: A Survey [58.30439850203101]
Information retrieval has evolved from term-based methods to its integration with advanced neural models.
Recent research has sought to leverage large language models (LLMs) to improve IR systems.
We delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers.
arXiv Detail & Related papers (2023-08-14T12:47:22Z)
- Synergistic Interplay between Search and Large Language Models for Information Retrieval [141.18083677333848]
InteR allows retrieval models (RMs) to expand knowledge in queries using LLM-generated knowledge collections.
InteR achieves overall superior zero-shot retrieval performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-12T11:58:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.