Joint Manifold Learning and Optimal Transport for Dynamic Imaging
- URL: http://arxiv.org/abs/2505.11913v1
- Date: Sat, 17 May 2025 08:56:30 GMT
- Title: Joint Manifold Learning and Optimal Transport for Dynamic Imaging
- Authors: Sven Dummer, Puru Vaish, Christoph Brune
- Abstract summary: We investigate the effect of integrating a low-dimensionality assumption of the underlying image manifold with an OT regularizer for time-evolving images. We propose a latent model representation of the underlying image manifold and promote consistency between this representation, the time series data, and the OT prior on the time-evolving images.
- Score: 1.2016264781280588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic imaging is critical for understanding and visualizing dynamic biological processes in medicine and cell biology. These applications often encounter the challenge of a limited amount of time series data and time points, which hinders learning meaningful patterns. Regularization methods provide valuable prior knowledge to address this challenge, enabling the extraction of relevant information despite the scarcity of time-series data and time points. In particular, low-dimensionality assumptions on the image manifold address sample scarcity, while time progression models, such as optimal transport (OT), provide priors on image development to mitigate the lack of time points. Existing approaches using low-dimensionality assumptions disregard a temporal prior but leverage information from multiple time series. OT-prior methods, however, incorporate the temporal prior but regularize only individual time series, ignoring information from other time series of the same image modality. In this work, we investigate the effect of integrating a low-dimensionality assumption of the underlying image manifold with an OT regularizer for time-evolving images. In particular, we propose a latent model representation of the underlying image manifold and promote consistency between this representation, the time series data, and the OT prior on the time-evolving images. We discuss the advantages of enriching OT interpolations with latent models and integrating OT priors into latent models.
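To make the idea concrete, here is a minimal, hedged sketch of the kind of objective the abstract describes: a latent trajectory is decoded into image-like histograms and fitted to the observed frames (consistency with the time series data), while an entropic optimal transport (Sinkhorn) term couples consecutive frames (the OT prior). This is not the authors' implementation; the decoder architecture, image size, Sinkhorn solver, and loss weights below are illustrative assumptions.

```python
# Illustrative sketch only (assumptions, not the paper's code): a shared decoder maps
# a low-dimensional latent trajectory z_1..z_T to image-like histograms x_t, which are
# fitted to observations y_t while consecutive frames are coupled by an entropic OT cost.
import torch

H = W = 16                 # toy image grid (assumption)
T, latent_dim = 8, 4       # number of frames and latent dimension (assumption)

# Ground cost: squared Euclidean distance between (normalized) pixel coordinates.
ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2).float() / (H - 1)
C = torch.cdist(coords, coords) ** 2

def sinkhorn_cost(a, b, C, eps=0.05, n_iters=50):
    """Entropic OT cost <P, C> between histograms a and b (each sums to 1)."""
    K = torch.exp(-C / eps)                        # Gibbs kernel
    u, v = torch.ones_like(a), torch.ones_like(b)
    for _ in range(n_iters):                       # Sinkhorn fixed-point iterations
        u = a / (K @ v + 1e-30)
        v = b / (K.T @ u + 1e-30)
    P = u[:, None] * K * v[None, :]                # approximate transport plan
    return (P * C).sum()

decoder = torch.nn.Sequential(                     # toy latent-to-image decoder
    torch.nn.Linear(latent_dim, 64), torch.nn.ReLU(), torch.nn.Linear(64, H * W)
)
z = torch.nn.Parameter(0.1 * torch.randn(T, latent_dim))   # latent trajectory
y = torch.rand(T, H * W)                           # stand-in for observed frames
y = y / y.sum(dim=1, keepdim=True)                 # treat frames as pixel histograms

opt = torch.optim.Adam(list(decoder.parameters()) + [z], lr=1e-2)
for step in range(100):
    x = torch.softmax(decoder(z), dim=1)           # decoded frames as histograms
    data_fit = ((x - y) ** 2).sum()                # consistency with the time series data
    ot_prior = sum(sinkhorn_cost(x[t], x[t + 1], C) for t in range(T - 1))  # temporal OT prior
    loss = data_fit + 0.1 * ot_prior               # weight 0.1 is an arbitrary choice
    opt.zero_grad(); loss.backward(); opt.step()
```

In this simplified picture, the shared decoder plays the role of the low-dimensional image-manifold assumption (it can be fitted across multiple time series), while the Sinkhorn term stands in for the OT prior on how frames evolve over time.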
Related papers
- Training-Free Time-Series Anomaly Detection: Leveraging Image Foundation Models [0.0]
We propose an image-based, training-free time-series anomaly detection (ITF-TAD) approach.
ITF-TAD converts time-series data into images using wavelet transform and compresses them into a single representation, leveraging image foundation models for anomaly detection (a toy wavelet-to-image sketch appears after this list).
arXiv Detail & Related papers (2024-08-27T03:12:08Z)
- TimeLDM: Latent Diffusion Model for Unconditional Time Series Generation [2.4454605633840143]
Time series generation is a crucial research topic in the area of decision-making systems.
Recent approaches focus on learning in the data space to model time series information.
We propose TimeLDM, a novel latent diffusion model for high-quality time series generation.
arXiv Detail & Related papers (2024-07-05T01:47:20Z)
- PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from the perspective of partial differential equations [49.80959046861793]
We present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers.
Our experimentation across seven diverse real-world LMTF datasets reveals that PDETime adapts effectively to the intrinsic spatiotemporal nature of the data.
arXiv Detail & Related papers (2024-02-25T17:39:44Z)
- Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z)
- Time Series as Images: Vision Transformer for Irregularly Sampled Time Series [32.99466250557855]
This paper introduces a novel perspective by converting irregularly sampled time series into line graph images.
We then utilize powerful pre-trained vision transformers for time series classification in the same way as image classification.
Remarkably, despite its simplicity, our approach outperforms state-of-the-art specialized algorithms on several popular healthcare and human activity datasets.
arXiv Detail & Related papers (2023-03-01T22:42:44Z)
- Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing [53.89917396428747]
Self-supervised learning in vision-language processing exploits semantic alignment between imaging and text modalities.
We explicitly account for prior images and reports when available during both training and fine-tuning.
Our approach, named BioViL-T, uses a CNN-Transformer hybrid multi-image encoder trained jointly with a text model.
arXiv Detail & Related papers (2023-01-11T16:35:33Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Multivariate Time Series Forecasting with Dynamic Graph Neural ODEs [65.18780403244178]
We propose a continuous model to forecast Multivariate Time series with dynamic Graph neural Ordinary Differential Equations (MTGODE).
Specifically, we first abstract multivariate time series into dynamic graphs with time-evolving node features and unknown graph structures.
Then, we design and solve a neural ODE to complement missing graph topologies and unify both spatial and temporal message passing.
arXiv Detail & Related papers (2022-02-17T02:17:31Z)
- On Feature Normalization and Data Augmentation [55.115583969831]
Moment Exchange is an implicit data augmentation method that encourages recognition models to also utilize the moment information of learned features.
We replace the moments of the learned features of one training image by those of another, and also interpolate the target labels.
As our approach is fast, operates entirely in feature space, and mixes different signals than prior methods, it can be effectively combined with existing augmentation approaches (a toy moment-exchange sketch follows this list).
arXiv Detail & Related papers (2020-02-25T18:59:05Z)
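For the "On Feature Normalization and Data Augmentation" entry directly above, the moment-exchange step it summarizes can be sketched as follows. This is a hedged toy example based only on the summary, not the authors' code; the choice of per-position moments across channels, the feature shapes, and the mixing weight `lam` are assumptions.

```python
# Toy sketch (assumptions, not the authors' code): swap per-position feature moments
# between two training samples and interpolate their labels in the loss.
import torch
import torch.nn.functional as F

def moment_exchange(h_a, h_b, eps=1e-5):
    """h_a, h_b: feature maps of shape (C, H, W) from two training images."""
    mu_a, std_a = h_a.mean(dim=0, keepdim=True), h_a.std(dim=0, keepdim=True) + eps
    mu_b, std_b = h_b.mean(dim=0, keepdim=True), h_b.std(dim=0, keepdim=True) + eps
    return (h_a - mu_a) / std_a * std_b + mu_b    # h_a's normalized features, h_b's moments

h_a, h_b = torch.randn(64, 8, 8), torch.randn(64, 8, 8)   # intermediate features
y_a, y_b = torch.tensor(3), torch.tensor(7)               # class labels of the two images
mixed = moment_exchange(h_a, h_b)                          # would be fed to later layers

logits = torch.randn(10)     # placeholder for the network output on `mixed`
lam = 0.9                    # mixing weight for the interpolated targets (assumption)
loss = lam * F.cross_entropy(logits[None], y_a[None]) \
     + (1 - lam) * F.cross_entropy(logits[None], y_b[None])
```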
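Similarly, for the ITF-TAD entry at the top of this list, the wavelet-based time-series-to-image conversion can be sketched as below. This is a toy example under my own assumptions (Morlet wavelet, 64 scales, min-max normalization); the paper's actual transform, compression step, and foundation model are not reproduced here.

```python
# Toy sketch (assumptions, not the authors' code): a continuous wavelet transform turns
# a 1-D signal into a 2-D scalogram that an image (foundation) model could consume.
import numpy as np
import pywt

t = np.linspace(0, 10, 1000)
signal = np.sin(2 * np.pi * t)        # synthetic periodic signal
signal[600:620] += 2.0                # injected anomaly, for illustration only

scales = np.arange(1, 65)             # 64 scales -> 64-row image
coefs, freqs = pywt.cwt(signal, scales, "morl")
scalogram = np.abs(coefs)             # shape (64, 1000): a 2-D "image" of the series
scalogram = (scalogram - scalogram.min()) / (scalogram.max() - scalogram.min() + 1e-12)

# `scalogram` could now be resized to the input resolution of a pretrained image model
# and scored for anomalies, e.g. via distances in the model's feature space.
```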
This list is automatically generated from the titles and abstracts of the papers on this site.