Multi-View Contrastive Learning for Robust Domain Adaptation in Medical Time Series Analysis
- URL: http://arxiv.org/abs/2506.22393v1
- Date: Fri, 27 Jun 2025 17:06:16 GMT
- Title: Multi-View Contrastive Learning for Robust Domain Adaptation in Medical Time Series Analysis
- Authors: YongKyung Oh, Alex Bui
- Abstract summary: Adapting machine learning models to medical time series remains a challenge due to complex temporal dependencies and dynamic distribution shifts. We propose a novel framework leveraging multi-view contrastive learning to integrate temporal patterns, derivative-based dynamics, and frequency-domain features. Our method employs independent encoders and a hierarchical fusion mechanism to learn feature-invariant representations that are transferable across domains.
- Score: 4.14360329494344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adapting machine learning models to medical time series across different domains remains a challenge due to complex temporal dependencies and dynamic distribution shifts. Current approaches often focus on isolated feature representations, limiting their ability to fully capture the intricate temporal dynamics necessary for robust domain adaptation. In this work, we propose a novel framework leveraging multi-view contrastive learning to integrate temporal patterns, derivative-based dynamics, and frequency-domain features. Our method employs independent encoders and a hierarchical fusion mechanism to learn feature-invariant representations that are transferable across domains while preserving temporal coherence. Extensive experiments on diverse medical datasets, including electroencephalogram (EEG), electrocardiogram (ECG), and electromyography (EMG), demonstrate that our approach significantly outperforms state-of-the-art methods in transfer learning tasks. By advancing the robustness and generalizability of machine learning models, our framework offers a practical pathway for deploying reliable AI systems in diverse healthcare settings.
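The abstract describes the architecture only at a high level. As a rough, hypothetical illustration of the idea (not the authors' implementation), the PyTorch sketch below builds three views of a signal (the raw sequence, its first-order differences, and its frequency magnitude spectrum), encodes each with an independent encoder, fuses the embeddings hierarchically, and ties the views together with an InfoNCE-style contrastive loss. All module names, dimensions, the GRU encoders, and the pairwise loss terms are assumptions.

```python
# Hypothetical sketch of multi-view contrastive learning for medical time series.
# The view construction (raw / derivative / frequency), GRU encoders, two-stage
# fusion, and InfoNCE pairing are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ViewEncoder(nn.Module):
    """Independent encoder for a single view of the signal."""
    def __init__(self, in_dim: int, hidden: int = 64, out_dim: int = 128):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, T, C)
        _, h = self.rnn(x)                                 # h: (1, B, hidden)
        return F.normalize(self.proj(h.squeeze(0)), dim=-1)


def make_views(x: torch.Tensor):
    """Temporal, derivative-based, and frequency-domain views of x (B, T, C)."""
    temporal = x
    derivative = torch.diff(x, dim=1, prepend=x[:, :1])   # first-order dynamics
    frequency = torch.fft.rfft(x, dim=1).abs()            # magnitude spectrum
    return temporal, derivative, frequency


def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Match the same sample across two views against in-batch negatives."""
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


class MultiViewModel(nn.Module):
    def __init__(self, channels: int, emb: int = 128, n_classes: int = 5):
        super().__init__()
        self.enc_t = ViewEncoder(channels, out_dim=emb)    # temporal view
        self.enc_d = ViewEncoder(channels, out_dim=emb)    # derivative view
        self.enc_f = ViewEncoder(channels, out_dim=emb)    # frequency view
        # Hierarchical fusion: merge temporal + derivative first, then add frequency.
        self.fuse_td = nn.Linear(2 * emb, emb)
        self.fuse_all = nn.Linear(2 * emb, emb)
        self.head = nn.Linear(emb, n_classes)

    def forward(self, x: torch.Tensor):
        t, d, f = make_views(x)
        zt, zd, zf = self.enc_t(t), self.enc_d(d), self.enc_f(f)
        fused = torch.relu(self.fuse_td(torch.cat([zt, zd], dim=-1)))
        fused = torch.relu(self.fuse_all(torch.cat([fused, zf], dim=-1)))
        # Cross-view contrastive terms push the three embeddings toward a shared,
        # view-invariant space that is intended to transfer across domains.
        loss_con = info_nce(zt, zd) + info_nce(zt, zf) + info_nce(zd, zf)
        return self.head(fused), loss_con


if __name__ == "__main__":
    model = MultiViewModel(channels=1)
    x = torch.randn(8, 256, 1)                             # 8 signals, length 256, 1 channel
    logits, loss_con = model(x)
    print(logits.shape, loss_con.item())
```

In a domain-adaptation setting, one plausible arrangement is to compute the contrastive terms on both source and target batches so the view-invariant embedding space is shared across domains, while the classification head is trained on labeled source data only; the paper's actual training objective and fusion scheme may differ.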
Related papers
- Multivariate Long-term Time Series Forecasting with Fourier Neural Filter [55.09326865401653]
We introduce FNF as the backbone and DBD as the architecture to provide excellent learning capabilities and optimal learning pathways for spatial-temporal modeling. We show that FNF unifies local time-domain and global frequency-domain information processing within a single backbone that extends naturally to spatial modeling.
arXiv Detail & Related papers (2025-06-10T18:40:20Z) - UniSTD: Towards Unified Spatio-Temporal Learning across Diverse Disciplines [64.84631333071728]
We introduce UniSTD, a unified Transformer-based framework for spatiotemporal modeling. Our work demonstrates that a task-specific vision-text model can build a generalizable model for spatiotemporal learning. We also introduce a temporal module to incorporate temporal dynamics explicitly.
arXiv Detail & Related papers (2025-03-26T17:33:23Z) - MedGNN: Towards Multi-resolution Spatiotemporal Graph Learning for Medical Time Series Classification [9.290150386783838]
We propose a Multi-resolution Spatiotemporal Graph Learning framework, MedGNN, for medical time series classification. We first propose to construct multi-resolution adaptive graph structures to learn dynamic multi-scale embeddings. We then propose Difference Attention Networks to operate self-attention mechanisms on the finite difference for temporal modeling.
arXiv Detail & Related papers (2025-02-06T21:34:54Z) - PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
arXiv Detail & Related papers (2024-09-08T15:02:25Z) - Automated Ensemble Multimodal Machine Learning for Healthcare [52.500923923797835]
We introduce a multimodal framework, AutoPrognosis-M, that enables the integration of structured clinical (tabular) data and medical imaging using automated machine learning.
AutoPrognosis-M incorporates 17 imaging models, including convolutional neural networks and vision transformers, and three distinct multimodal fusion strategies.
arXiv Detail & Related papers (2024-07-25T17:46:38Z) - CViT: Continuous Vision Transformer for Operator Learning [24.1795082775376]
Continuous Vision Transformer (CViT) is a novel neural operator architecture that leverages advances in computer vision to address challenges in learning complex physical systems. CViT combines a vision transformer encoder, a novel grid-based coordinate embedding, and a query-wise cross-attention mechanism to effectively capture multi-scale dependencies. We demonstrate CViT's effectiveness across a diverse range of partial differential equation (PDE) systems, including fluid dynamics, climate modeling, and reaction-diffusion processes.
arXiv Detail & Related papers (2024-05-22T21:13:23Z) - MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild [81.32127423981426]
Multimodal emotion recognition based on audio and video data is important for real-world applications.
Recent methods have focused on exploiting advances of self-supervised learning (SSL) for pre-training of strong multimodal encoders.
We propose a different perspective on the problem and investigate the advancement of multimodal DFER performance by adapting SSL-pre-trained disjoint unimodal encoders.
arXiv Detail & Related papers (2024-04-13T13:39:26Z) - Learning Multiscale Consistency for Self-supervised Electron Microscopy Instance Segmentation [48.267001230607306]
We propose a pretraining framework that enhances multiscale consistency in EM volumes.
Our approach leverages a Siamese network architecture, integrating strong and weak data augmentations.
It effectively captures voxel and feature consistency, showing promise for learning transferable representations for EM analysis.
arXiv Detail & Related papers (2023-08-19T05:49:13Z) - Individualized Dosing Dynamics via Neural Eigen Decomposition [51.62933814971523]
We introduce the Neural Eigen Differential Equation algorithm (NESDE)
NESDE provides individualized modeling, tunable generalization to new treatment policies, and fast, continuous, closed-form prediction.
We demonstrate the robustness of NESDE in both synthetic and real medical problems, and use the learned dynamics to publish simulated medical gym environments.
arXiv Detail & Related papers (2023-06-24T17:01:51Z) - TS-MoCo: Time-Series Momentum Contrast for Self-Supervised Physiological Representation Learning [8.129782272731397]
We propose a novel encoding framework that relies on self-supervised learning with momentum contrast to learn representations from various physiological domains without needing labels.
We show that our self-supervised learning approach can indeed learn discriminative features which can be exploited in downstream classification tasks.
arXiv Detail & Related papers (2023-06-10T21:17:42Z) - MVMTnet: A Multi-variate Multi-modal Transformer for Multi-class Classification of Cardiac Irregularities Using ECG Waveforms and Clinical Notes [4.648677931378919]
Deep learning can be used to optimize diagnosis and patient monitoring for clinical-based applications.
For cardiovascular disease, a condition in which the rising number of patients increasingly outweighs the available medical resources in many parts of the world, a core challenge is the automated classification of various cardiac abnormalities.
The proposed multi-modal Transformer architecture accurately performs this task while demonstrating the cross-domain effectiveness of Transformers.
arXiv Detail & Related papers (2023-02-21T21:38:41Z) - Studying Robustness of Semantic Segmentation under Domain Shift in cardiac MRI [0.8858288982748155]
We study challenges and opportunities of domain transfer across images from multiple clinical centres and scanner vendors.
In this work, we build upon a fixed U-Net architecture configured by the nnU-net framework to investigate various data augmentation techniques and batch normalization layers.
arXiv Detail & Related papers (2020-11-15T17:50:23Z)