Autonomous Quantum Simulation through Large Language Model Agents
- URL: http://arxiv.org/abs/2601.10194v1
- Date: Thu, 15 Jan 2026 08:50:57 GMT
- Title: Autonomous Quantum Simulation through Large Language Model Agents
- Authors: Weitang Li, Jiajun Ren, Lixue Cheng, Cunxi Gong,
- Abstract summary: Large language model (LLM) agents can autonomously perform tensor network simulations of quantum many-body systems. We create autonomous AI agents that can be trained in specialized computational domains within minutes.
- Score: 0.29165586612027233
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We demonstrate that large language model (LLM) agents can autonomously perform tensor network simulations of quantum many-body systems, achieving approximately 90% success rate across representative benchmark tasks. Tensor network methods are powerful tools for quantum simulation, but their effective use requires expertise typically acquired through years of graduate training. By combining in-context learning with curated documentation and multi-agent decomposition, we create autonomous AI agents that can be trained in specialized computational domains within minutes. We benchmark three configurations (baseline, single-agent with in-context learning, and multi-agent with in-context learning) on problems spanning quantum phase transitions, open quantum system dynamics, and photochemical reactions. Systematic evaluation using DeepSeek-V3.2, Gemini 2.5 Pro, and Claude Opus 4.5 demonstrates that both in-context learning and multi-agent architecture are essential. Analysis of failure modes reveals characteristic patterns across models, with the multi-agent configuration substantially reducing implementation errors and hallucinations compared to simpler architectures.
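The abstract names the key ingredients (in-context learning over curated documentation, multi-agent decomposition) without spelling out the pipeline. The sketch below shows one way such a planner/coder/reviewer loop might be wired up in Python; every name in it (query_llm, CURATED_DOCS, the three agent roles) is an illustrative assumption, not the authors' implementation.

```python
# A minimal sketch of the multi-agent, in-context-learning setup described in
# the abstract. Everything here (query_llm, CURATED_DOCS, the planner/coder/
# reviewer roles) is an illustrative assumption, not the authors' code.

CURATED_DOCS = """\
...excerpts from tensor-network library documentation would go here,
supplied to the coder agent as in-context learning material..."""

def query_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a call to any LLM API (the paper benchmarks
    DeepSeek-V3.2, Gemini 2.5 Pro, and Claude Opus 4.5)."""
    raise NotImplementedError("wire up an LLM client here")

def plan(task: str) -> str:
    # Planner agent: decompose the physics problem into concrete coding steps.
    return query_llm("Decompose this quantum simulation task into steps.", task)

def implement(plan_text: str) -> str:
    # Coder agent: write the tensor-network script with curated docs in context.
    system = "Write a tensor-network simulation script.\n" + CURATED_DOCS
    return query_llm(system, plan_text)

def review(code: str) -> str:
    # Reviewer agent: catch implementation errors and hallucinated APIs,
    # the failure modes the multi-agent configuration is reported to reduce.
    return query_llm("Review and correct this simulation script.", code)

def run_pipeline(task: str) -> str:
    return review(implement(plan(task)))
```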
Related papers
- MedSAM-Agent: Empowering Interactive Medical Image Segmentation with Multi-turn Agentic Reinforcement Learning [53.37068897861388]
MedSAM-Agent is a framework that reformulates interactive segmentation as a multi-step autonomous decision-making process.
We develop a two-stage training pipeline that integrates multi-turn, end-to-end outcome verification.
Experiments across 6 medical modalities and 21 datasets demonstrate that MedSAM-Agent achieves state-of-the-art performance.
arXiv Detail & Related papers (2026-02-03T09:47:49Z)
- Youtu-LLM: Unlocking the Native Agentic Potential for Lightweight Large Language Models [78.73992315826035]
We introduce Youtu-LLM, a lightweight language model that harmonizes high computational efficiency with native agentic intelligence.
Youtu-LLM is pre-trained from scratch to systematically cultivate reasoning and planning capabilities.
arXiv Detail & Related papers (2025-12-31T04:25:11Z)
- An Agentic Framework for Autonomous Materials Computation [70.24472585135929]
Large Language Models (LLMs) have emerged as powerful tools for accelerating scientific discovery.
Recent advances integrate LLMs into agentic frameworks, enabling retrieval, reasoning, and tool use for complex scientific experiments.
Here, we present a domain-specialized agent designed for reliable automation of first-principles materials computations.
arXiv Detail & Related papers (2025-12-22T15:03:57Z)
- Machine Unlearning in the Era of Quantum Machine Learning: An Empirical Study [22.101976874889147]
We present the first comprehensive empirical study of machine unlearning (MU) in hybrid quantum-classical neural networks.
We adapt a broad suite of unlearning methods to quantum settings, including gradient-based, distillation-based, regularization-based, and certified techniques.
We find that quantum models can support effective unlearning, but outcomes depend strongly on circuit depth, entanglement structure, and task complexity.
arXiv Detail & Related papers (2025-12-22T10:40:03Z)
- LoCoBench-Agent: An Interactive Benchmark for LLM Agents in Long-Context Software Engineering [90.84806758077536]
We introduce LoCoBench-Agent, a comprehensive evaluation framework specifically designed to assess large language model (LLM) agents in realistic, long-context software engineering.
Our framework extends LoCoBench's 8,000 scenarios into interactive agent environments, enabling systematic evaluation of multi-turn conversations.
It provides agents with 8 specialized tools (file operations, search, code analysis) and evaluates them across context lengths ranging from 10K to 1M tokens.
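As an aside, the sketch below illustrates what such an agent-callable "tool" interface can look like in Python; the registry, decorator, and tool names are assumptions for illustration, not LoCoBench-Agent's actual harness.

```python
# Illustrative sketch of an agent tool registry; everything here is assumed,
# not taken from LoCoBench-Agent.
import re
from typing import Callable, Dict

TOOLS: Dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Decorator that registers a function as an agent-callable tool."""
    def decorator(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return decorator

@tool("read_file")
def read_file(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

@tool("search")
def search(pattern: str, text: str) -> str:
    # Return only the lines of `text` that match `pattern`.
    return "\n".join(l for l in text.splitlines() if re.search(pattern, l))

def dispatch(name: str, **kwargs) -> str:
    # The agent emits a tool name plus arguments; the harness executes the call.
    return TOOLS[name](**kwargs)
```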
arXiv Detail & Related papers (2025-11-17T23:57:24Z)
- El Agente: An Autonomous Agent for Quantum Chemistry [3.6593051631801106]
El Agente Q is a multi-agent system that generates and executes quantum chemistry workflows from natural language user prompts.
El Agente Q is benchmarked on six university-level course exercises and two case studies, demonstrating robust problem-solving performance.
arXiv Detail & Related papers (2025-05-05T09:07:22Z)
- Rapid and Automated Alloy Design with Graph Neural Network-Powered LLM-Driven Multi-Agent Systems [0.0]
A multi-agent AI model is used to automate the discovery of new metallic alloys.
We focus on the NbMoTa family of body-centered cubic (bcc) alloys, modeled using an ML-based interatomic potential.
By synergizing the predictive power of GNNs with the dynamic collaboration of LLM-based agents, the system autonomously navigates vast alloy design spaces.
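The sketch below shows, as a toy example, what surrogate-guided search over the Nb-Mo-Ta composition space might look like; predict_property is a deterministic dummy standing in for the trained GNN surrogate, and nothing here is taken from the paper's code.

```python
# Toy sketch of surrogate-guided composition search over Nb-Mo-Ta.
# predict_property is an assumed stand-in for a trained GNN surrogate.
import hashlib
import itertools

def predict_property(comp: dict) -> float:
    """Deterministic dummy score in [0, 1]; replace with a real surrogate."""
    key = ",".join(f"{k}={v}" for k, v in sorted(comp.items()))
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF

def candidate_compositions(step: float = 0.1):
    # Coarse grid of Nb/Mo/Ta fractions summing to 1.
    grid = [round(i * step, 2) for i in range(int(1 / step) + 1)]
    for nb, mo in itertools.product(grid, grid):
        ta = round(1.0 - nb - mo, 2)
        if 0.0 <= ta <= 1.0:
            yield {"Nb": nb, "Mo": mo, "Ta": ta}

# Greedy search: score every grid point and keep the best candidate.
best = max(candidate_compositions(), key=predict_property)
print("best candidate composition:", best)
```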
arXiv Detail & Related papers (2024-10-17T17:06:26Z)
- Towards Multi-Agent Reinforcement Learning using Quantum Boltzmann Machines [2.015864965523243]
We propose an extension of the original concept to solve more challenging problems.
We add an experience replay buffer and use different networks for approximating the target and policy values.
Quantum sampling proves to be a promising method for reinforcement learning tasks, but is currently limited by the QPU size.
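For readers unfamiliar with the two additions mentioned above, the sketch below shows a classical experience replay buffer and a frozen target copy of the policy parameters; the paper itself approximates values with quantum Boltzmann machines, for which these plain Python structures are only stand-ins.

```python
# Sketch of an experience replay buffer and separate target/policy parameters,
# the two DQN-style ingredients named in the summary. Classical stand-ins only.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity: int = 10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size: int):
        # Uniformly sample past transitions to decorrelate training batches.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

policy_params = {"w": 0.0}           # updated at every training step
target_params = dict(policy_params)  # frozen copy used to compute TD targets

def sync_target():
    # Periodically copy policy parameters into the target network.
    target_params.update(policy_params)
```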
arXiv Detail & Related papers (2021-09-22T17:59:24Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- UPDeT: Universal Multi-agent Reinforcement Learning via Policy Decoupling with Transformers [108.92194081987967]
We make the first attempt to explore a universal multi-agent reinforcement learning pipeline, designing a single architecture to fit different tasks.
Unlike previous RNN-based models, we utilize a transformer-based model to generate a flexible policy.
The proposed model, named the Universal Policy Decoupling Transformer (UPDeT), further relaxes the action restriction and makes the multi-agent task's decision process more explainable.
arXiv Detail & Related papers (2021-01-20T07:24:24Z)
- Towards Understanding Cooperative Multi-Agent Q-Learning with Value Factorization [28.89692989420673]
We formalize a multi-agent fitted Q-iteration framework for analyzing factorized multi-agent Q-learning.
Through further analysis, we find that on-policy training or richer joint value function classes can improve its local or global convergence properties.
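A concrete toy instance of a factorized joint value class is additive (VDN-style) factorization, Q_tot(s, a) = sum_i Q_i(o_i, a_i); the sketch below is illustrative and not the paper's construction.

```python
# Toy instance of additive (VDN-style) value factorization. The per-agent
# utilities are hand-coded stand-ins for learned networks.

def q_agent(agent_id: int, obs: float, action: int) -> float:
    """Stand-in for a learned per-agent utility Q_i(o_i, a_i)."""
    return (agent_id + 1) * obs - 0.5 * action

def q_total(observations: list, actions: list) -> float:
    # Additive factorization: the joint argmax decomposes into per-agent
    # argmaxes, which keeps decentralized greedy execution consistent.
    return sum(
        q_agent(i, o, a)
        for i, (o, a) in enumerate(zip(observations, actions))
    )

print(q_total([0.2, 0.7], [1, 0]))  # (1*0.2 - 0.5) + (2*0.7 - 0) = 1.1
```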
arXiv Detail & Related papers (2020-05-31T19:14:03Z)