Chat2Brain: A Method for Mapping Open-Ended Semantic Queries to Brain
Activation Maps
- URL: http://arxiv.org/abs/2309.05021v1
- Date: Sun, 10 Sep 2023 13:06:45 GMT
- Title: Chat2Brain: A Method for Mapping Open-Ended Semantic Queries to Brain
Activation Maps
- Authors: Yaonai Wei, Tuo Zhang, Han Zhang, Tianyang Zhong, Lin Zhao, Zhengliang
Liu, Chong Ma, Songyao Zhang, Muheng Shang, Lei Du, Xiao Li, Tianming Liu and
Junwei Han
- Abstract summary: We propose a method called Chat2Brain that combines LLMs with the basic text-to-image model Text2Brain to map semantic queries to brain activation maps.
We demonstrate that Chat2Brain can synthesize plausible neural activation patterns for more complex text queries.
- Score: 59.648646222905235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the decades, neuroscience has accumulated a wealth of research results in
the text modality that can be used to explore cognitive processes.
Meta-analysis is a typical method that successfully establishes a link from
text queries to brain activation maps using these research results, but it
still relies on an ideal query environment. In practical applications, text
queries used for meta-analyses may encounter issues such as semantic redundancy
and ambiguity, resulting in an inaccurate mapping to brain images. On the other
hand, large language models (LLMs) like ChatGPT have shown great potential in
tasks such as context understanding and reasoning, displaying a high degree of
consistency with human natural language. Hence, LLMs could improve the
connection between text modality and neuroscience, resolving existing
challenges of meta-analyses. In this study, we propose a method called
Chat2Brain that combines LLMs with the basic text-to-image model Text2Brain
to map open-ended semantic queries to brain activation maps in data-scarce and
complex query environments. By utilizing the understanding and reasoning
capabilities of LLMs, we optimize the mapping model's performance by
converting text queries into semantic queries. We demonstrate that Chat2Brain
can synthesize anatomically plausible neural activation patterns for more
complex text queries.
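The two-stage idea in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: a rule-based filter stands in for the LLM query-rewriting step, and a seeded random volume stands in for the Text2Brain mapping model; all names (`rewrite_query`, `TextToBrainStub`) are invented for the sketch.

```python
import zlib

import numpy as np


def rewrite_query(raw_query: str) -> str:
    """Stand-in for the LLM step: drop redundant filler so the downstream
    mapping model receives a concise semantic query. In Chat2Brain this
    rewriting is performed by an LLM such as ChatGPT."""
    filler = {"please", "show", "me", "the", "a", "an", "of", "for"}
    words = [w for w in raw_query.lower().split() if w not in filler]
    return " ".join(words)


class TextToBrainStub:
    """Stand-in for Text2Brain: maps a query string to a 3-D activation
    volume. Here it returns a volume seeded deterministically on the query
    text, so only the pipeline shape is meaningful, not the values."""

    def __init__(self, shape=(46, 55, 46)):
        self.shape = shape

    def __call__(self, query: str) -> np.ndarray:
        rng = np.random.default_rng(zlib.crc32(query.encode()))
        return rng.random(self.shape).astype(np.float32)


model = TextToBrainStub()
raw = "please show me the activation for working memory tasks"
clean = rewrite_query(raw)  # noisy query reduced to its semantic core
volume = model(clean)       # 3-D activation volume for the clean query
print(clean, volume.shape)
```

The point of the stub pipeline is the division of labor: the LLM absorbs semantic redundancy and ambiguity, so the mapping model only ever sees queries close to its training distribution.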
Related papers
- Emotion-Aware Response Generation Using Affect-Enriched Embeddings with LLMs [0.585143166250719]
This study addresses the challenge of enhancing the emotional and contextual understanding of large language models (LLMs) in psychiatric applications.
We introduce a novel framework that integrates multiple emotion lexicons with state-of-the-art LLMs such as LLAMA 2, Flan-T5, ChatGPT 3.0, and ChatGPT 4.0.
The primary dataset comprises over 2,000 therapy session transcripts from the Counseling and Psychotherapy database, covering discussions on anxiety, depression, trauma, and addiction.
arXiv Detail & Related papers (2024-10-02T08:01:05Z) - H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables [56.73919743039263]
This paper introduces a novel algorithm that integrates both symbolic and semantic (textual) approaches in a two-stage process to address limitations.
Our experiments demonstrate that H-STAR significantly outperforms state-of-the-art methods across three question-answering (QA) and fact-verification datasets.
arXiv Detail & Related papers (2024-06-29T21:24:19Z) - Crafting Interpretable Embeddings by Asking LLMs Questions [89.49960984640363]
Large language models (LLMs) have rapidly improved text embeddings for a growing array of natural-language processing tasks.
We introduce question-answering embeddings (QA-Emb), embeddings where each feature represents an answer to a yes/no question asked to an LLM.
We use QA-Emb to flexibly generate interpretable models for predicting fMRI voxel responses to language stimuli.
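The QA-Emb idea above (one binary feature per yes/no question posed to an LLM) can be illustrated with a toy stand-in. This is a sketch only: `keyword_answer` replaces the LLM with keyword matching, and the questions and cue lists are invented for the example.

```python
def qa_embed(text: str, questions, answer_fn):
    """QA-Emb sketch: the embedding has one binary feature per yes/no
    question; answer_fn stands in for an LLM answering each question."""
    return [1.0 if answer_fn(text, q) else 0.0 for q in questions]


questions = [
    "Does the text mention a number?",
    "Does the text mention an animal?",
    "Is the text about motion?",
]


def keyword_answer(text, question):
    """Toy LLM stand-in: answer each question by keyword lookup."""
    cues = {
        "number": ["one", "two", "three", "42"],
        "animal": ["dog", "cat", "bird"],
        "motion": ["run", "jump", "move"],
    }
    key = next(k for k in cues if k in question.lower())
    return any(c in text.lower() for c in cues[key])


vec = qa_embed("the cat sat still", questions, keyword_answer)
print(vec)  # each dimension maps back to a human-readable question
```

Because every dimension is the answer to a named question, a linear model over such features stays interpretable, which is what makes this style of embedding attractive for fMRI voxel prediction.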
arXiv Detail & Related papers (2024-05-26T22:30:29Z) - Probing Brain Context-Sensitivity with Masked-Attention Generation [87.31930367845125]
We use GPT-2 transformers to generate word embeddings that capture a fixed amount of contextual information.
We then tested whether these embeddings could predict fMRI brain activity in humans listening to naturalistic text.
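The embeddings-to-fMRI test described above is typically a linear encoding model fit per voxel. A minimal sketch with synthetic data (the sizes, the ridge penalty, and the linear ground truth are all assumptions for illustration, not the paper's setup):

```python
import numpy as np

# Synthetic stand-in: 200 stimulus embeddings (d=16) and 50 voxel
# responses generated from a known linear map plus noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))                      # contextual embeddings
W_true = rng.standard_normal((16, 50))
Y = X @ W_true + 0.1 * rng.standard_normal((200, 50))   # simulated voxel responses

# Closed-form ridge regression: W = (X^T X + alpha * I)^-1 X^T Y
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(16), X.T @ Y)

# Score the encoding model by voxelwise correlation between predicted
# and observed responses (here in-sample, for brevity).
pred = X @ W
r = [np.corrcoef(Y[:, v], pred[:, v])[0, 1] for v in range(50)]
print(f"mean voxelwise correlation: {np.mean(r):.3f}")
```

In practice such models are evaluated on held-out stimuli with cross-validated regularization, but the per-voxel linear fit and correlation score shown here are the core of the approach.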
arXiv Detail & Related papers (2023-05-23T09:36:21Z) - ChatABL: Abductive Learning via Natural Language Interaction with
ChatGPT [72.83383437501577]
Large language models (LLMs) have recently demonstrated significant potential in mathematical abilities.
LLMs currently have difficulty in bridging perception, language understanding and reasoning capabilities.
This paper presents a novel method for integrating LLMs into the abductive learning framework.
arXiv Detail & Related papers (2023-04-21T16:23:47Z) - MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text
Generation [102.20036684996248]
We propose MURMUR, a neuro-symbolic modular approach to text generation from semi-structured data with multi-step reasoning.
We conduct experiments on two data-to-text generation tasks, WebNLG and LogicNLG.
arXiv Detail & Related papers (2022-12-16T17:36:23Z) - A Transformer-based Neural Language Model that Synthesizes Brain
Activation Maps from Free-Form Text Queries [37.322245313730654]
Text2Brain is an easy-to-use tool for synthesizing brain activation maps from open-ended text queries.
Text2Brain was built on a transformer-based neural network language model and a coordinate-based meta-analysis of neuroimaging studies.
arXiv Detail & Related papers (2022-07-24T09:15:03Z) - Emotion Recognition in Conversation using Probabilistic Soft Logic [17.62924003652853]
Emotion recognition in conversation (ERC) is a sub-field of emotion recognition that focuses on conversations that contain two or more utterances.
We implement our approach in a framework called Probabilistic Soft Logic (PSL), a declarative templating language.
PSL provides functionality for the incorporation of results from neural models into PSL models.
We compare our method with state-of-the-art purely neural ERC systems, and see almost a 20% improvement.
arXiv Detail & Related papers (2022-07-14T23:59:06Z) - Text2Brain: Synthesis of Brain Activation Maps from Free-form Text Query [28.26166305556377]
Text2Brain is a neural network approach for coordinate-based meta-analysis of neuroimaging studies.
We show that Text2Brain can synthesize anatomically-plausible neural activation patterns from free-form textual descriptions.
arXiv Detail & Related papers (2021-09-28T15:39:22Z) - Brain2Word: Decoding Brain Activity for Language Generation [14.24200473508597]
We present a model that can decode fMRI data from unseen subjects.
Our model achieves 5.22% Top-1 and 13.59% Top-5 accuracy in this challenging task.
arXiv Detail & Related papers (2020-09-10T10:47:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.