ST-DAI: Single-shot 2.5D Spatial Transcriptomics with Intra-Sample Domain Adaptive Imputation for Cost-efficient 3D Reconstruction
- URL: http://arxiv.org/abs/2507.21516v1
- Date: Tue, 29 Jul 2025 05:46:37 GMT
- Title: ST-DAI: Single-shot 2.5D Spatial Transcriptomics with Intra-Sample Domain Adaptive Imputation for Cost-efficient 3D Reconstruction
- Authors: Jiahe Qian, Yaoyu Fang, Xinkun Wang, Lee A. Cooper, Bo Zhou
- Abstract summary: We introduce ST-DAI, a single-shot framework for 3D transcriptomics that couples a cost-efficient 2.5D sampling scheme with an intra-sample domain-adaptive imputation framework. First, in the cost-efficient 2.5D sampling stage, one reference section (central section) is fully sampled while other sections (adjacent sections) are sparsely sampled. Second, we propose a single-shot 3D imputation learning method that allows us to generate fully sampled 3D ST from this cost-efficient 2.5D ST scheme.
- Score: 1.7603474309877931
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For 3D spatial transcriptomics (ST), the high per-section acquisition cost of fully sampling every tissue section remains a significant challenge. Although recent approaches predict gene expression from histology images, these methods require large external datasets, which is costly, and they suffer from substantial domain discrepancies that lead to poor generalization on new samples. In this work, we introduce ST-DAI, a single-shot framework for 3D ST that couples a cost-efficient 2.5D sampling scheme with an intra-sample domain-adaptive imputation framework. First, in the cost-efficient 2.5D sampling stage, one reference section (central section) is fully sampled while the other sections (adjacent sections) are sparsely sampled, thereby capturing volumetric context at significantly reduced experimental cost. Second, we propose a single-shot 3D imputation learning method that generates fully sampled 3D ST from this cost-efficient 2.5D ST scheme, using only sample-specific training. We observe position misalignment and domain discrepancy between sections. To address these issues, we adopt a pipeline that first aligns the central section to the adjacent section, then generates dense pseudo-supervision on the central section, and finally performs Fast Multi-Domain Refinement (FMDR), which adapts the network to the domain of the adjacent section while fine-tuning only a few parameters through Parameter-Efficient Domain-Alignment Layers (PDLs). During this refinement, a Confidence Score Generator (CSG) reweights the pseudo-labels according to their estimated reliability, thereby directing imputation toward trustworthy regions. Our experimental results demonstrate that ST-DAI achieves gene expression prediction performance comparable to fully sampled approaches while substantially reducing the measurement burden.
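The confidence-reweighting idea behind the CSG can be illustrated with a minimal sketch. This is not the authors' implementation: the per-spot squared-error loss, the normalization, and all variable names here are illustrative assumptions.

```python
import numpy as np

def confidence_weighted_loss(pred, pseudo_labels, confidence):
    """Confidence-weighted pseudo-label loss (illustrative sketch).

    Mirrors the idea of reweighting pseudo-supervision by estimated
    reliability: low-confidence spots contribute less to the imputation
    objective. Hypothetical form, not the paper's exact loss.
    """
    # Per-spot squared error between predicted and pseudo gene expression
    per_spot = np.mean((pred - pseudo_labels) ** 2, axis=-1)
    # Normalize confidence weights so the loss scale stays comparable
    w = confidence / (confidence.sum() + 1e-8)
    return float(np.sum(w * per_spot))

# Toy example: 4 spots, 3 genes; one spot flagged as unreliable
pred = np.array([[1.0, 0.0, 2.0]] * 4)
pseudo = np.array([[1.0, 0.0, 2.0],
                   [1.0, 0.0, 2.0],
                   [1.0, 0.0, 2.0],
                   [9.0, 9.0, 9.0]])   # misaligned / unreliable pseudo-label
conf = np.array([1.0, 1.0, 1.0, 0.1])
loss_weighted = confidence_weighted_loss(pred, pseudo, conf)
loss_uniform = confidence_weighted_loss(pred, pseudo, np.ones(4))
print(loss_weighted < loss_uniform)  # → True
```

Downweighting the unreliable spot reduces its pull on the imputation network, which is the intent of directing learning toward trustworthy regions.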
Related papers
- FastRef:Fast Prototype Refinement for Few-Shot Industrial Anomaly Detection [18.487111110151115]
Few-shot industrial anomaly detection (FS-IAD) presents a critical challenge for practical automated inspection systems. We propose FastRef, a novel and efficient prototype refinement framework for FS-IAD. For comprehensive evaluation, we integrate FastRef with four competitive prototype-based FS-IAD methods: PatchCore, FastRecon, WinCLIP, and AnomalyDINO.
arXiv Detail & Related papers (2025-06-26T15:46:28Z) - Progressive Multi-Level Alignments for Semi-Supervised Domain Adaptation SAR Target Recognition Using Simulated Data [3.1951121258423334]
We develop an instance-prototype alignment (AIPA) strategy to push the source domain instances close to the corresponding target prototypes.
arXiv Detail & Related papers (2024-11-07T13:53:13Z) - SITCOM: Step-wise Triple-Consistent Diffusion Sampling for Inverse Problems [14.2814208019426]
Diffusion models (DMs) are a class of generative models that allow sampling from a distribution learned over a training set. We state three conditions for achieving measurement-consistent diffusion trajectories. We propose a new optimization-based sampling method that not only enforces standard data manifold measurement consistency and forward diffusion consistency, but also incorporates our proposed step-wise and network-regularized backward diffusion consistency.
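The core of measurement consistency in such inverse-problem samplers is nudging the current estimate toward agreement with the observations. The sketch below shows a generic gradient step on a linear measurement model; it is a simplified stand-in, not SITCOM's exact triple-consistent update.

```python
import numpy as np

def measurement_consistency_step(x, A, y, step_size):
    """One gradient step on 0.5 * ||A x - y||^2, pulling the current
    estimate toward the measurements. Generic sketch of measurement
    consistency for a linear forward operator A."""
    residual = A @ x - y
    return x - step_size * (A.T @ residual)

# Toy linear inverse problem y = A @ x_true with a known, well-conditioned A
A = np.diag([2.0, 1.0])
x_true = np.array([1.0, -1.0])
y = A @ x_true
x = np.zeros(2)
for _ in range(100):
    x = measurement_consistency_step(x, A, y, step_size=0.1)
print(np.allclose(x, x_true, atol=1e-3))  # → True
```

In a diffusion sampler this step would be interleaved with the learned denoiser at each timestep, so the trajectory stays consistent with both the data manifold and the measurements.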
arXiv Detail & Related papers (2024-10-06T13:39:36Z) - Low Saturation Confidence Distribution-based Test-Time Adaptation for Cross-Domain Remote Sensing Image Classification [4.7514513970228425]
Unsupervised Domain Adaptation (UDA) has emerged as a powerful technique for addressing the distribution shift across various Remote Sensing (RS) applications. Most UDA approaches require access to source data, which may be infeasible due to data privacy or transmission constraints. Low Saturation Confidence Distribution Test-Time Adaptation (D-TTA) marks the first attempt to explore Test-Time Adaptation for cross-domain RS image classification.
arXiv Detail & Related papers (2024-08-29T05:04:25Z) - S^2Former-OR: Single-Stage Bi-Modal Transformer for Scene Graph Generation in OR [50.435592120607815]
Scene graph generation (SGG) of surgical procedures is crucial in enhancing holistic cognitive intelligence in the operating room (OR).
Previous works have primarily relied on multi-stage learning, where the generated semantic scene graphs depend on intermediate processes with pose estimation and object detection.
In this study, we introduce a novel single-stage bi-modal transformer framework for SGG in the OR, termed S2Former-OR.
arXiv Detail & Related papers (2024-02-22T11:40:49Z) - Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z) - Uncertainty-Aware Adaptation for Self-Supervised 3D Human Pose Estimation [70.32536356351706]
We introduce MRP-Net that constitutes a common deep network backbone with two output heads subscribing to two diverse configurations.
We derive suitable measures to quantify prediction uncertainty at both pose and joint level.
We present a comprehensive evaluation of the proposed approach and demonstrate state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-03-29T07:14:58Z) - Using the Order of Tomographic Slices as a Prior for Neural Networks Pre-Training [1.1470070927586016]
We propose a pre-training method, SortingLoss, that performs pre-training on slices instead of volumes, so that a model can be fine-tuned on a sparse set of slices.
We show that the proposed method performs on par with SimCLR, while working 2x faster and requiring 1.5x less memory.
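Using slice order as a supervisory signal can be sketched as a pairwise ranking objective: a scorer should assign monotonically increasing scores along the stack. The loss form, the mean-intensity "scorer", and the synthetic slices below are illustrative assumptions, not the paper's actual SortingLoss.

```python
import numpy as np

def order_loss(score_a, score_b, a_before_b):
    """Logistic loss for predicting which of two slices comes first in
    the stack (a minimal sketch of order-as-supervision pre-training)."""
    logit = score_b - score_a            # positive if b is scored deeper than a
    p = 1.0 / (1.0 + np.exp(-logit))     # P(a comes before b)
    target = 1.0 if a_before_b else 0.0
    return -(target * np.log(p + 1e-12) + (1 - target) * np.log(1 - p + 1e-12))

# Toy "scorer": mean intensity, on synthetic slices that brighten with depth
slices = [np.full((4, 4), float(depth)) for depth in range(5)]
scores = [s.mean() for s in slices]
good = order_loss(scores[0], scores[3], a_before_b=True)   # correct order
bad = order_loss(scores[3], scores[0], a_before_b=True)    # swapped order
print(good < bad)  # → True
```

In practice the scorer would be the network backbone being pre-trained; minimizing this loss over slice pairs teaches it depth-ordered anatomy without any labels.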
arXiv Detail & Related papers (2022-03-17T14:58:15Z) - Dispensed Transformer Network for Unsupervised Domain Adaptation [21.256375606219073]
A novel unsupervised domain adaptation (UDA) method named dispensed Transformer network (DTNet) is introduced in this paper.
Our proposed network achieves the best performance in comparison with several state-of-the-art techniques.
arXiv Detail & Related papers (2021-10-28T08:27:44Z) - SCPM-Net: An Anchor-free 3D Lung Nodule Detection Network using Sphere Representation and Center Points Matching [47.79483848496141]
We propose a 3D sphere representation-based center-points matching detection network (SCPM-Net).
It is anchor-free and automatically predicts the position, radius, and offset of nodules without the manual design of nodule/anchor parameters.
We show that our proposed SCPM-Net framework achieves superior performance compared with existing anchor-based and anchor-free methods for lung nodule detection.
arXiv Detail & Related papers (2021-04-12T05:51:29Z) - Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in the 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3D parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
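The one-parameter-per-step refinement can be sketched with a greedy stand-in. The actual method learns which parameter to adjust via reinforcement learning with a delayed reward; this hypothetical sketch instead always picks the parameter with the largest error, just to show the stepwise mechanics.

```python
def stepwise_refine(box, target, delta=0.1, steps=100):
    """Greedy stand-in for stepwise box refinement: change exactly one
    3D parameter per step, moving toward the target box. Not the paper's
    learned policy, which is rewarded only after several steps."""
    box = list(box)
    for _ in range(steps):
        errors = [t - b for b, t in zip(box, target)]
        # choose the single parameter with the largest remaining error
        i = max(range(len(box)), key=lambda k: abs(errors[k]))
        if abs(errors[i]) < delta / 2:   # all parameters close enough
            break
        box[i] += delta if errors[i] > 0 else -delta
    return box

# Refine a crude initial (x, y, z) estimate toward the ground truth
target = [0.5, -0.3, 0.2]
refined = stepwise_refine([0.0, 0.0, 0.0], target)
print(all(abs(b - t) < 0.05 for b, t in zip(refined, target)))  # → True
```

Replacing the greedy argmax with a learned policy is exactly where the reinforcement learning formulation enters: the reward only arrives after a sequence of single-parameter moves.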
arXiv Detail & Related papers (2020-08-31T17:10:48Z) - SADet: Learning An Efficient and Accurate Pedestrian Detector [68.66857832440897]
This paper proposes a series of systematic optimization strategies for the detection pipeline of a one-stage detector.
It forms a single-shot anchor-based detector (SADet) for efficient and accurate pedestrian detection.
Though structurally simple, it presents state-of-the-art result and real-time speed of $20$ FPS for VGA-resolution images.
arXiv Detail & Related papers (2020-07-26T12:32:38Z) - 3DSSD: Point-based 3D Single Stage Object Detector [61.67928229961813]
We present a point-based 3D single stage object detector, named 3DSSD, achieving a good balance between accuracy and efficiency.
Our method outperforms all state-of-the-art voxel-based single stage methods by a large margin, and has comparable performance to two stage point-based methods as well.
arXiv Detail & Related papers (2020-02-24T12:01:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.