Transforming Behavioral Neuroscience Discovery with In-Context Learning and AI-Enhanced Tensor Methods
- URL: http://arxiv.org/abs/2602.17027v1
- Date: Thu, 19 Feb 2026 02:47:46 GMT
- Title: Transforming Behavioral Neuroscience Discovery with In-Context Learning and AI-Enhanced Tensor Methods
- Authors: Paimon Goulart, Jordan Steinhauser, Dawon Ahn, Kylene Shuler, Edward Korzus, Jia Chen, Evangelos E. Papalexakis
- Abstract summary: We showcase an example AI-enhanced pipeline designed to transform and accelerate the way that the domain experts in the team are able to gain insights out of experimental data. The application at hand is in the domain of behavioral neuroscience, studying fear generalization in mice. We identify the emerging paradigm of "In-Context Learning" (ICL) as a suitable interface for domain experts to automate parts of their pipeline without the need for or familiarity with AI model training and fine-tuning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scientific discovery pipelines typically involve complex, rigid, and time-consuming processes, from data preparation to analyzing and interpreting findings. Recent advances in AI have the potential to transform such pipelines in a way that domain experts can focus on interpreting and understanding findings, rather than debugging rigid pipelines or manually annotating data. As part of an active collaboration between data science/AI researchers and behavioral neuroscientists, we showcase an example AI-enhanced pipeline, specifically designed to transform and accelerate the way that the domain experts in the team are able to gain insights out of experimental data. The application at hand is in the domain of behavioral neuroscience, studying fear generalization in mice, an important problem whose progress can advance our understanding of clinically significant and often debilitating conditions such as PTSD (Post-Traumatic Stress Disorder). We identify the emerging paradigm of "In-Context Learning" (ICL) as a suitable interface for domain experts to automate parts of their pipeline without the need for or familiarity with AI model training and fine-tuning, and showcase its remarkable efficacy in data preparation and pattern interpretation. We also introduce novel AI enhancements to the tensor decomposition model, which allow for more seamless pattern discovery from the heterogeneous data in our application. We thoroughly evaluate our proposed pipeline experimentally, showcasing its superior performance compared to what is standard practice in the domain, as well as against reasonable ML baselines that do not fall under the ICL paradigm, to ensure that we are not compromising performance in our quest for a seamless and easy-to-use interface for domain experts. Finally, we demonstrate effective discovery, with results validated by the domain experts in the team.
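The tensor method at the pipeline's core is a decomposition model; the abstract does not specify the variant, but the standard CP (CANDECOMP/PARAFAC) decomposition it presumably builds on can be sketched with a minimal alternating-least-squares fit. This is a NumPy-only illustration of plain CP, not the authors' AI-enhanced method; the dimensions and rank are made up for the example.

```python
import numpy as np

def cp_als(X, rank, n_iter=500, seed=0):
    """Minimal CP (CANDECOMP/PARAFAC) fit of a 3-way tensor X by
    alternating least squares. Returns factor matrices A, B, C with
    X[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))

    # Mode-wise unfoldings consistent with the index order used below.
    X1 = X.reshape(I, J * K)                     # rows indexed by i
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)  # rows indexed by j
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)  # rows indexed by k

    def khatri_rao(U, V):
        # Column-wise Kronecker product: rows ordered to match the unfoldings.
        return np.einsum('ur,vr->uvr', U, V).reshape(-1, U.shape[1])

    for _ in range(n_iter):
        # Each step is a linear least-squares solve with the other two
        # factors held fixed, e.g. X1 ~= A @ khatri_rao(B, C).T.
        A = X1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

In a behavioral-neuroscience setting, the three modes might correspond to animals, time bins, and experimental sessions, with each rank-one component a candidate behavioral pattern; that mapping is an assumption for illustration, not a detail taken from the abstract.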
Related papers
- Can Agentic AI Match the Performance of Human Data Scientists? [27.236034079837044]
Large language models (LLMs) have significantly automated data science. Can these agentic AI systems truly match the performance of human data scientists? We show that agentic AI that relies on generic analytics workflows falls short of methods that use domain-specific insights.
arXiv Detail & Related papers (2025-12-24T05:31:42Z) - Robust Molecular Property Prediction via Densifying Scarce Labeled Data [53.24886143129006]
In drug discovery, compounds most critical for advancing research often lie beyond the training set. We propose a novel bilevel optimization approach that leverages unlabeled data to interpolate between in-distribution (ID) and out-of-distribution (OOD) data.
arXiv Detail & Related papers (2025-06-13T15:27:40Z) - In-Context Learning for Pure Exploration [28.404325855738502]
We study the problem of active sequential hypothesis testing, also known as pure exploration. We introduce In-Context Pure Exploration (ICPE), which meta-trains Transformers to map observation histories to query actions and a predicted hypothesis. ICPE actively gathers evidence on new tasks and infers the true hypothesis without parameter updates.
arXiv Detail & Related papers (2025-06-02T17:04:50Z) - Ensuring Medical AI Safety: Interpretability-Driven Detection and Mitigation of Spurious Model Behavior and Associated Data [14.991686165405959]
We show the applicability of the framework using four medical datasets across two modalities. We successfully identify and unlearn these biases in VGG16, ResNet50, and contemporary Vision Transformer models.
arXiv Detail & Related papers (2025-01-23T16:39:09Z) - SEANN: A Domain-Informed Neural Network for Epidemiological Insights [0.9749638953163389]
We introduce SEANN, a novel domain-informed neural network approach that leverages a prevalent form of domain-specific knowledge: Pooled Effect Sizes (PES). PESs are commonly found in published meta-analysis studies, in different forms, and represent a quantitative form of scientific consensus. We experimentally demonstrate significant improvements in the generalizability of predictive performances and the scientific plausibility of extracted relationships.
arXiv Detail & Related papers (2025-01-17T16:01:05Z) - Vital Insight: Assisting Experts' Context-Driven Sensemaking of Multi-modal Personal Tracking Data Using Visualization and Human-In-The-Loop LLM [35.00287513005424]
Vital Insight is a novel, LLM-assisted, prototype system to enable human-in-the-loop inference (sensemaking) and visualizations of multi-modal passive sensing data from smartphones and wearables. We observe experts' interactions with it and develop an expert sensemaking model that explains how experts move between direct data representations and AI-supported inferences.
arXiv Detail & Related papers (2024-10-18T21:56:35Z) - Enhancing Explainability in Mobility Data Science through a combination
of methods [0.08192907805418582]
This paper introduces a comprehensive framework that harmonizes pivotal XAI techniques.
LIME (Local Interpretable Model-agnostic Explanations), SHAP, saliency maps, attention mechanisms, direct trajectory visualization, and Permutation Feature Importance (PFI).
To validate our framework, we undertook a survey to gauge preferences and reception among various user demographics.
arXiv Detail & Related papers (2023-12-01T07:09:21Z) - Clairvoyance: A Pipeline Toolkit for Medical Time Series [95.22483029602921]
Time-series learning is the bread and butter of data-driven clinical decision support.
Clairvoyance proposes a unified, end-to-end, autoML-friendly pipeline that serves as a software toolkit.
Clairvoyance is the first to demonstrate viability of a comprehensive and automatable pipeline for clinical time-series ML.
arXiv Detail & Related papers (2023-10-28T12:08:03Z) - A Deep Learning Approach to Analyzing Continuous-Time Systems [20.89961728689037]
We show that deep learning can be used to analyze complex processes.
Our approach relaxes standard assumptions that are implausible for many natural systems.
We demonstrate substantial improvements on behavioral and neuroimaging data.
arXiv Detail & Related papers (2022-09-25T03:02:31Z) - Learning domain-specific causal discovery from time series [7.298647409503783]
Causal discovery from time-varying data is important in neuroscience, medicine, and machine learning.
Human expertise is often not entirely accurate and tends to be outperformed in domains with abundant data.
In this study, we examine whether we can enhance domain-specific causal discovery for time series using a data-driven approach.
arXiv Detail & Related papers (2022-09-12T20:32:39Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z) - Uncovering the structure of clinical EEG signals with self-supervised learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This phenomenon is particularly problematic in clinically-relevant data, such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
arXiv Detail & Related papers (2020-07-31T14:34:47Z) - Provably Efficient Causal Reinforcement Learning with Confounded Observational Data [135.64775986546505]
We study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.