AISAC: An Integrated Multi-Agent System for Transparent, Retrieval-Grounded Scientific Assistance
- URL: http://arxiv.org/abs/2511.14043v1
- Date: Tue, 18 Nov 2025 01:51:05 GMT
- Title: AISAC: An Integrated Multi-Agent System for Transparent, Retrieval-Grounded Scientific Assistance
- Authors: Chandrachur Bhattacharya, Sibendu Som
- Abstract summary: AISAC builds on established technologies - LangGraph for orchestration, FAISS for vector search, and SQLite for persistence. The system implements prompt-engineered agents coordinated via LangGraph's StateGraph. A configuration-driven project bootstrap allows research teams to customize tools, prompts, and data sources without modifying core code.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI Scientific Assistant Core (AISAC) is an integrated multi-agent system developed at Argonne National Laboratory for scientific and engineering workflows. AISAC builds on established technologies - LangGraph for orchestration, FAISS for vector search, and SQLite for persistence - and integrates them into a unified system prototype focused on transparency, provenance tracking, and scientific adaptability. The system implements a Router-Planner-Coordinator workflow and an optional Evaluator role, using prompt-engineered agents coordinated via LangGraph's StateGraph and supported by helper agents such as a Researcher. Each role is defined through custom system prompts that enforce structured JSON outputs. A hybrid memory approach (FAISS + SQLite) enables both semantic retrieval and structured conversation history. An incremental indexing strategy based on file hashing minimizes redundant re-embedding when scientific corpora evolve. A configuration-driven project bootstrap layer allows research teams to customize tools, prompts, and data sources without modifying core code. All agent decisions, tool invocations, and retrievals are logged and visualized through a custom Gradio interface, providing step-by-step transparency for each reasoning episode. The authors have applied AISAC to multiple research areas at Argonne, including specialized deployments for waste-to-products research and energy process safety, as well as general-purpose scientific assistance, demonstrating its cross-domain applicability.
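The incremental indexing strategy described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the AISAC implementation: the `IncrementalIndexer` class, its table schema, and the placeholder embedding list are hypothetical, and a real deployment would embed into a FAISS index where the placeholder is used.

```python
import hashlib
import sqlite3


class IncrementalIndexer:
    """Hypothetical sketch of hash-based incremental indexing:
    re-embed a document only when its content hash changes."""

    def __init__(self, db_path=":memory:"):
        # SQLite tracks one content hash per document id.
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS doc_hashes "
            "(doc_id TEXT PRIMARY KEY, sha256 TEXT)"
        )
        # Stand-in for a FAISS vector index; records which docs were embedded.
        self.embedded = []

    def index(self, doc_id: str, content: bytes) -> bool:
        """Return True if the document was (re-)embedded, False if skipped."""
        digest = hashlib.sha256(content).hexdigest()
        row = self.db.execute(
            "SELECT sha256 FROM doc_hashes WHERE doc_id = ?", (doc_id,)
        ).fetchone()
        if row and row[0] == digest:
            return False  # content unchanged: skip redundant re-embedding
        self.db.execute(
            "INSERT OR REPLACE INTO doc_hashes VALUES (?, ?)", (doc_id, digest)
        )
        self.db.commit()
        self.embedded.append(doc_id)  # the real system would embed into FAISS here
        return True
```

On a second pass over an unchanged corpus, every `index` call returns False and no embedding work is done, which is the property the abstract attributes to the file-hashing strategy.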
Related papers
- AIRS-Bench: a Suite of Tasks for Frontier AI Research Science Agents [49.67355440164857]
We introduce AIRS-Bench, a suite of 20 tasks sourced from state-of-the-art machine learning papers. AIRS-Bench tasks assess agentic capabilities over the full research lifecycle. We open-source the AIRS-Bench task definitions and evaluation code to catalyze further development in autonomous scientific research.
arXiv Detail & Related papers (2026-02-06T16:45:02Z) - FROAV: A Framework for RAG Observation and Agent Verification - Lowering the Barrier to LLM Agent Research [0.5729426778193398]
We present FROAV, an open-source research platform that democratizes Large Language Model (LLM) agent research. FROAV implements a multi-stage Retrieval-Augmented Generation (RAG) pipeline and a rigorous "LLM-as-a-Judge" evaluation system. Our framework integrates n8n for no-code workflow design, FastAPI for flexible backend logic, and Streamlit for human-in-the-loop interaction.
arXiv Detail & Related papers (2026-01-12T13:02:32Z) - A Hierarchical Tree-based approach for creating Configurable and Static Deep Research Agent (Static-DRA) [0.0]
This paper introduces the Static Deep Research Agent (Static-DRA), a novel solution built upon a hierarchical Tree-based static workflow. The core contribution is the integration of two user-tunable parameters, Depth and Breadth, which provide granular control over the research intensity. The agent's architecture, comprising Supervisor, Independent, and Worker agents, facilitates effective multi-hop information retrieval.
arXiv Detail & Related papers (2025-12-03T15:37:13Z) - Simple Agents Outperform Experts in Biomedical Imaging Workflow Optimization [69.36509281190662]
Adapting production-level computer vision tools to bespoke scientific datasets is a critical "last mile" bottleneck. We consider using AI agents to automate this manual coding, and focus on the open question of optimal agent design. We demonstrate that a simple agent framework consistently generates adaptation code that outperforms human-expert solutions.
arXiv Detail & Related papers (2025-12-02T18:42:26Z) - Enterprise Deep Research: Steerable Multi-Agent Deep Research for Enterprise Analytics [75.4712507893024]
Enterprise Deep Research (EDR) is a multi-agent system that integrates a Master Planning Agent for adaptive query decomposition. Four specialized search agents (General, Academic, GitHub, LinkedIn) and a visualization agent for data-driven insights are also included. EDR adapts its research direction with optional human-in-the-loop steering guidance.
arXiv Detail & Related papers (2025-10-20T17:55:11Z) - Build Your Personalized Research Group: A Multiagent Framework for Continual and Interactive Science Automation [41.659285482346235]
We present freephdlabor, an open-source multiagent framework featuring fully dynamic workflows determined by real-time agent reasoning. The framework provides comprehensive infrastructure including automatic context compaction, workspace-based communication to prevent information degradation, memory persistence across sessions, and non-blocking human intervention mechanisms.
arXiv Detail & Related papers (2025-10-17T13:13:32Z) - Spec-Driven AI for Science: The ARIA Framework for Automated and Reproducible Data Analysis [23.28226188948918]
ARIA is a spec-driven, human-in-the-loop framework for automated and interpretable data analysis. ARIA integrates six layers, namely Command, Context, Code, Data, Orchestration, and AI Module. ARIA establishes a new paradigm for transparent, collaborative, and reproducible scientific discovery.
arXiv Detail & Related papers (2025-10-13T08:32:43Z) - Context Engineering for Multi-Agent LLM Code Assistants Using Elicit, NotebookLM, ChatGPT, and Claude Code [0.0]
Large Language Models (LLMs) have shown promise in automating code generation and software engineering tasks, yet they often struggle with complex, multi-file projects due to context limitations and knowledge gaps. We propose a novel context engineering workflow that combines multiple AI components: an Intent Translator (GPT-5) for clarifying user requirements, Elicit-powered semantic literature retrieval for injecting domain knowledge, NotebookLM-based document synthesis for contextual understanding, and a Claude Code multi-agent system for code generation and validation.
arXiv Detail & Related papers (2025-08-09T14:45:53Z) - OS Agents: A Survey on MLLM-based Agents for General Computing Devices Use [101.57043903478257]
The dream of creating AI assistants as capable and versatile as the fictional J.A.R.V.I.S. from Iron Man has long captivated imaginations. With the evolution of (multi-modal) large language models ((M)LLMs), this dream is closer to reality. This survey aims to consolidate the state of OS Agents research, providing insights to guide both academic inquiry and industrial development.
arXiv Detail & Related papers (2025-08-06T14:33:45Z) - A Survey on Code Generation with LLM-based Agents [61.474191493322415]
Code generation agents powered by large language models (LLMs) are revolutionizing the software development paradigm. These agents are characterized by three core features. This paper presents a systematic survey of the field of LLM-based code generation agents.
arXiv Detail & Related papers (2025-07-31T18:17:36Z) - Deep Research Agents: A Systematic Examination And Roadmap [109.53237992384872]
Deep Research (DR) agents are designed to tackle complex, multi-turn informational research tasks. In this paper, we conduct a detailed analysis of the foundational technologies and architectural components that constitute DR agents.
arXiv Detail & Related papers (2025-06-22T16:52:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.