Exploring Accurate and Transparent Domain Adaptation in Predictive Healthcare via Concept-Grounded Orthogonal Inference
- URL: http://arxiv.org/abs/2602.12542v1
- Date: Fri, 13 Feb 2026 02:46:50 GMT
- Title: Exploring Accurate and Transparent Domain Adaptation in Predictive Healthcare via Concept-Grounded Orthogonal Inference
- Authors: Pengfei Hu, Chang Lu, Feifan Liu, Yue Ning
- Abstract summary: ExtraCare decomposes patient representations into invariant and covariant components. It offers human-understandable explanations by mapping sparse latent dimensions to medical concepts. ExtraCare is evaluated on two real-world EHR datasets across multiple domain partition settings.
- Score: 7.191139788777488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models for clinical event prediction on electronic health records (EHR) often suffer performance degradation when deployed under different data distributions. While domain adaptation (DA) methods can mitigate such shifts, their "black-box" nature prevents widespread adoption in clinical practice, where transparency is essential for trust and safety. We propose ExtraCare to decompose patient representations into invariant and covariant components. By supervising these two components and enforcing their orthogonality during training, our model preserves label information while simultaneously exposing domain-specific variation, yielding more accurate predictions than most feature-alignment models. More importantly, it offers human-understandable explanations by mapping sparse latent dimensions to medical concepts and quantifying their contributions via targeted ablations. ExtraCare is evaluated on two real-world EHR datasets across multiple domain partition settings, demonstrating superior performance along with enhanced transparency, as evidenced by its accurate predictions and explanations from extensive case studies.
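The core training signal described in the abstract is an orthogonality constraint between the invariant and covariant components. As a minimal sketch (not the paper's actual implementation — the function and variable names here are illustrative assumptions), such a penalty can be written as the mean squared cosine similarity between paired component vectors, which is zero when the two components are orthogonal:

```python
import numpy as np

def orthogonality_penalty(z_inv: np.ndarray, z_cov: np.ndarray, eps: float = 1e-8) -> float:
    """Mean squared cosine similarity between paired rows of two
    representation matrices (batch, dim). Zero iff each invariant
    vector is orthogonal to its covariant counterpart."""
    z1 = z_inv / (np.linalg.norm(z_inv, axis=-1, keepdims=True) + eps)
    z2 = z_cov / (np.linalg.norm(z_cov, axis=-1, keepdims=True) + eps)
    cos = (z1 * z2).sum(axis=-1)
    return float((cos ** 2).mean())

# Orthogonal pair -> penalty ~ 0; identical pair -> penalty ~ 1
a = np.array([[1.0, 0.0]])
b = np.array([[0.0, 1.0]])
print(orthogonality_penalty(a, b))  # ~0.0
print(orthogonality_penalty(a, a))  # ~1.0
```

In a full model this term would be added, with some weight, to the supervised losses on the two components (label supervision for the invariant part, domain supervision for the covariant part).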
Related papers
- Adversarial Hospital-Invariant Feature Learning for WSI Patch Classification [1.3955246652599635]
We present the first systematic study of domain bias in pathology foundation models (PFMs) arising from hospital source characteristics. We propose a lightweight adversarial framework that removes latent hospital-specific features from frozen representations without modifying the encoder itself.
arXiv Detail & Related papers (2025-08-20T15:25:16Z)
- Towards Clinician-Preferred Segmentation: Leveraging Human-in-the-Loop for Test Time Adaptation in Medical Image Segmentation [10.65123164779962]
Deep learning-based medical image segmentation models often face performance degradation when deployed across various medical centers.
We propose a novel Human-in-the-loop TTA framework that capitalizes on the largely overlooked potential of clinician-corrected predictions.
Our framework conceives a divergence loss, designed specifically to diminish the prediction divergence instigated by domain disparities.
arXiv Detail & Related papers (2024-05-14T02:02:15Z)
- What Matters When Repurposing Diffusion Models for General Dense Perception Tasks? [49.84679952948808]
Recent works show promising results by simply fine-tuning T2I diffusion models for dense perception tasks. We conduct a thorough investigation into critical factors that affect transfer efficiency and performance when using diffusion priors. Our work culminates in the development of GenPercept, an effective deterministic one-step fine-tuning paradigm tailored for dense visual perception tasks.
arXiv Detail & Related papers (2024-03-10T04:23:24Z)
- MPRE: Multi-perspective Patient Representation Extractor for Disease Prediction [3.914545513460964]
We propose the Multi-perspective Patient Representation Extractor (MPRE) for disease prediction.
Specifically, we propose Frequency Transformation Module (FTM) to extract the trend and variation information of dynamic features.
In the 2D Multi-Extraction Network (2D MEN), we form the 2D temporal tensor based on trend and variation.
We also propose the First-Order Difference Attention Mechanism (FODAM) to calculate the contributions of differences in adjacent variations to the disease diagnosis.
arXiv Detail & Related papers (2024-01-01T13:52:05Z)
- DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications [54.93807822347193]
We show how to adapt attribution robustness estimation methods to a given domain, so as to take into account domain-specific plausibility.
Next, we provide two methods, adversarial training and FAR training, to mitigate the brittleness characterized by DARE.
Finally, we empirically validate our methods with extensive experiments on three established biomedical benchmarks.
arXiv Detail & Related papers (2023-07-05T08:11:40Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Prior Knowledge-Guided Attention in Self-Supervised Vision Transformers [79.60022233109397]
We present spatial prior attention (SPAN), a framework that takes advantage of consistent spatial and semantic structure in unlabeled image datasets.
SPAN operates by regularizing attention masks from separate transformer heads to follow various priors over semantic regions.
We find that the resulting attention masks are more interpretable than those derived from domain-agnostic pretraining.
arXiv Detail & Related papers (2022-09-07T02:30:36Z)
- Robust and Efficient Segmentation of Cross-domain Medical Images [37.38861543166964]
We propose a generalizable knowledge distillation method for robust and efficient segmentation of medical images.
We propose two generalizable knowledge distillation schemes: Dual Contrastive Graph Distillation (DCGD) and Domain-Invariant Cross Distillation (DICD).
In DICD, the domain-invariant semantic vectors from the two models (i.e., teacher and student) are leveraged to cross-reconstruct features by the header exchange of MSAN.
arXiv Detail & Related papers (2022-07-26T15:55:36Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- BiteNet: Bidirectional Temporal Encoder Network to Predict Medical Outcomes [53.163089893876645]
We propose a novel self-attention mechanism that captures the contextual dependency and temporal relationships within a patient's healthcare journey.
An end-to-end bidirectional temporal encoder network (BiteNet) then learns representations of the patient's journeys.
We have evaluated the effectiveness of our methods on two supervised prediction and two unsupervised clustering tasks with a real-world EHR dataset.
arXiv Detail & Related papers (2020-09-24T00:42:36Z)
- Deep Transparent Prediction through Latent Representation Analysis [0.0]
The paper presents a novel deep learning approach, which extracts latent information from trained Deep Neural Networks (DNNs) and derives concise representations that are analyzed in an effective, unified way for prediction purposes.
Transparency combined with high prediction accuracy are the targeted goals of the proposed approach.
arXiv Detail & Related papers (2020-09-13T19:21:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.