CellTypeAgent: Trustworthy cell type annotation with Large Language Models
- URL: http://arxiv.org/abs/2505.08844v1
- Date: Tue, 13 May 2025 14:34:11 GMT
- Title: CellTypeAgent: Trustworthy cell type annotation with Large Language Models
- Authors: Jiawen Chen, Jianghao Zhang, Huaxiu Yao, Yun Li
- Abstract summary: We present a trustworthy large language model (LLM)-agent, CellTypeAgent, which integrates LLMs with verification from relevant databases. We evaluated CellTypeAgent across nine real datasets involving 303 cell types from 36 tissues.
- Score: 21.976870680376358
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cell type annotation is a critical yet laborious step in single-cell RNA sequencing analysis. We present a trustworthy large language model (LLM)-agent, CellTypeAgent, which integrates LLMs with verification from relevant databases. CellTypeAgent achieves higher accuracy than existing methods while mitigating hallucinations. We evaluated CellTypeAgent across nine real datasets involving 303 cell types from 36 tissues. This combined approach holds promise for more efficient and reliable cell type annotation.
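The abstract describes an annotate-then-verify design: an LLM proposes a cell type, and the proposal is checked against a curated marker-gene database before it is accepted, which mitigates hallucinated labels. A minimal sketch of that idea, assuming a toy marker database and a stand-in for the LLM call (all names and data here are illustrative, not the authors' implementation):

```python
# Hypothetical sketch of an annotate-then-verify loop in the spirit of
# CellTypeAgent. MARKER_DB is a toy stand-in for a curated marker-gene
# reference; llm_propose is a stand-in for the actual LLM call.

MARKER_DB = {
    "T cell": {"CD3D", "CD3E", "CD2"},
    "B cell": {"CD79A", "CD79B", "MS4A1"},
}

def llm_propose(markers):
    """Stand-in for the LLM: return the candidate cell type whose
    reference markers overlap the query genes the most."""
    return max(MARKER_DB, key=lambda ct: len(MARKER_DB[ct] & set(markers)))

def annotate_with_verification(markers, min_overlap=2):
    """Propose a label, then verify it against the reference database;
    reject low-evidence (possibly hallucinated) labels."""
    candidate = llm_propose(markers)
    overlap = MARKER_DB[candidate] & set(markers)
    if len(overlap) < min_overlap:
        return "Unknown"
    return candidate

print(annotate_with_verification(["CD3D", "CD3E", "GNLY"]))  # prints "T cell"
```

The verification step is what distinguishes this pattern from prompting alone: a confident but unsupported LLM answer (e.g. a query with no known markers) falls back to "Unknown" instead of being reported as a label.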
Related papers
- Cell-o1: Training LLMs to Solve Single-Cell Reasoning Puzzles with Reinforcement Learning [44.91329557101423]
We introduce the CellPuzzles task, where the objective is to assign unique cell types to a batch of cells. This benchmark spans diverse tissues, diseases, and donor conditions, and requires reasoning across the batch-level cellular context to ensure label uniqueness. We propose Cell-o1, a 7B LLM trained via supervised fine-tuning on distilled reasoning traces, followed by reinforcement learning with batch-level rewards.
arXiv Detail & Related papers (2025-06-03T14:16:53Z)
- CellVerse: Do Large Language Models Really Understand Cell Biology? [74.34984441715517]
We introduce CellVerse, a unified language-centric question-answering benchmark that integrates four types of single-cell multi-omics data. We systematically evaluate the performance across 14 open-source and closed-source LLMs ranging from 160M to 671B on CellVerse.
arXiv Detail & Related papers (2025-05-09T06:47:23Z)
- scAgent: Universal Single-Cell Annotation via a LLM Agent [21.559055427500642]
scAgent is a universal cell annotation framework based on Large Language Models (LLMs). scAgent can identify cell types and discover novel cell types in diverse tissues. Experimental studies on 160 cell types and 35 tissues demonstrate the superior performance of scAgent.
arXiv Detail & Related papers (2025-04-07T03:03:21Z)
- Single-Cell Omics Arena: A Benchmark Study for Large Language Models on Cell Type Annotation Using Single-Cell Data [13.56585855722118]
Large language models (LLMs) have demonstrated their ability to efficiently process and synthesize vast corpora of text to automatically extract biological knowledge. Our study explores the potential of LLMs to accurately classify and annotate cell types in single-cell RNA sequencing (scRNA-seq) data. The results demonstrate that LLMs can provide robust interpretations of single-cell data without requiring additional fine-tuning.
arXiv Detail & Related papers (2024-12-03T23:58:35Z)
- Lower-dimensional projections of cellular expression improves cell type classification from single-cell RNA sequencing [12.66369956714212]
Single-cell RNA sequencing (scRNA-seq) enables the study of cellular diversity at single cell level.
Various statistical, machine and deep learning-based methods have been proposed for cell-type classification.
In this work, we propose a reference-based method for cell type classification, called EnProCell.
arXiv Detail & Related papers (2024-10-13T19:01:38Z)
- CellAgent: An LLM-driven Multi-Agent Framework for Automated Single-cell Data Analysis [35.61361183175167]
Single-cell RNA sequencing (scRNA-seq) data analysis is crucial for biological research.
However, manual manipulation of various tools to achieve desired outcomes can be labor-intensive for researchers.
We introduce CellAgent, an LLM-driven multi-agent framework for the automatic processing and execution of scRNA-seq data analysis tasks.
arXiv Detail & Related papers (2024-07-13T09:14:50Z)
- scInterpreter: Training Large Language Models to Interpret scRNA-seq Data for Cell Type Annotation [15.718901418627366]
This research focuses on how to train and adapt the Large Language Model with the capability to interpret and distinguish cell types in single-cell RNA sequencing data.
arXiv Detail & Related papers (2024-02-18T05:39:00Z)
- Single-cell Multi-view Clustering via Community Detection with Unknown Number of Clusters [64.31109141089598]
We introduce scUNC, an innovative multi-view clustering approach tailored for single-cell data.
scUNC seamlessly integrates information from different views without the need for a predefined number of clusters.
We conducted a comprehensive evaluation of scUNC using three distinct single-cell datasets.
arXiv Detail & Related papers (2023-11-28T08:34:58Z)
- Mixed Models with Multiple Instance Learning [51.440557223100164]
We introduce MixMIL, a framework integrating Generalized Linear Mixed Models (GLMM) and Multiple Instance Learning (MIL).
Our empirical results reveal that MixMIL outperforms existing MIL models in single-cell datasets.
arXiv Detail & Related papers (2023-11-04T16:42:42Z)
- Multi-modal Self-supervised Pre-training for Regulatory Genome Across Cell Types [75.65676405302105]
We propose a simple yet effective approach for pre-training genome data in a multi-modal and self-supervised manner, which we call GeneBERT.
We pre-train our model on the ATAC-seq dataset with 17 million genome sequences.
arXiv Detail & Related papers (2021-10-11T12:48:44Z)
- Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training [86.91380874390778]
We present Generation-Augmented Pre-training (GAP), which jointly learns representations of natural language utterances and table schemas by leveraging generation models to generate pre-training data.
Based on experimental results, neural semantic parsers that leverage GAP obtain new state-of-the-art results on both the SPIDER and CRITERIA-TO-SQL benchmarks.
arXiv Detail & Related papers (2020-12-18T15:53:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.