Signal Recovery Using a Spiked Mixture Model
- URL: http://arxiv.org/abs/2501.01840v1
- Date: Fri, 03 Jan 2025 14:43:57 GMT
- Title: Signal Recovery Using a Spiked Mixture Model
- Authors: Paul-Louis Delacour, Sander Wahls, Jeffrey M. Spraggins, Lukasz Migas, Raf Van de Plas
- Abstract summary: We introduce the spiked mixture model (SMM) to address the problem of estimating a set of signals from many randomly scaled and noisy observations.
We design a novel expectation-maximization (EM) algorithm to recover all parameters of the SMM.
Numerical experiments show that in low signal-to-noise ratio regimes, and for data types where the SMM is relevant, SMM surpasses the more traditional Gaussian mixture model (GMM) in terms of signal recovery performance.
- Abstract: We introduce the spiked mixture model (SMM) to address the problem of estimating a set of signals from many randomly scaled and noisy observations. Subsequently, we design a novel expectation-maximization (EM) algorithm to recover all parameters of the SMM. Numerical experiments show that in low signal-to-noise ratio regimes, and for data types where the SMM is relevant, SMM surpasses the more traditional Gaussian mixture model (GMM) in terms of signal recovery performance. The broad relevance of the SMM and its corresponding EM recovery algorithm is demonstrated by applying the technique to different data types. The first case study is a biomedical research application, utilizing an imaging mass spectrometry dataset to explore the molecular content of a rat brain tissue section at micrometer scale. The second case study demonstrates SMM performance in a computer vision application, segmenting a hyperspectral imaging dataset into underlying patterns. While the measurement modalities differ substantially, in both case studies SMM is shown to recover signals that were missed by traditional methods such as k-means clustering and GMM.
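The abstract does not spell out the SMM's generative model or the EM updates, so the following is only a rough sketch under one plausible reading: each observation is a randomly scaled signal plus Gaussian noise, y_i = a_i * s_{z_i} + eps_i, with the per-observation scale a_i profiled out by least squares inside each iteration. The function name `fit_smm` and all modeling details (shared isotropic noise, profiled scales, number of components K) are illustrative assumptions, not the authors' algorithm.

```python
# Hedged sketch of an EM-style fit for a simplified spiked mixture model (SMM).
# Assumed generative model (NOT necessarily the paper's exact formulation):
#   y_i = a_i * s_{z_i} + eps_i,  z_i ~ Categorical(pi),  eps_i ~ N(0, sigma^2 I),
# with the nuisance scales a_i profiled out by least squares in the E-step.
import numpy as np

def fit_smm(Y, K, n_iter=50, seed=0):
    """Y: (N, D) array of observations; K: assumed number of signal components."""
    rng = np.random.default_rng(seed)
    N, D = Y.shape
    S = Y[rng.choice(N, K, replace=False)].copy()   # (K, D) initial signal estimates
    pi = np.full(K, 1.0 / K)                        # mixture weights
    sigma2 = Y.var()                                # shared noise variance
    for _ in range(n_iter):
        # E-step: profiled scale a_ik = <y_i, s_k> / ||s_k||^2 and the resulting
        # squared residual ||y_i - a_ik s_k||^2 = ||y_i||^2 - a_ik^2 ||s_k||^2.
        norms = np.sum(S ** 2, axis=1) + 1e-12      # (K,)
        A = Y @ S.T / norms                         # (N, K) profiled scales
        resid = np.sum(Y ** 2, axis=1)[:, None] - A ** 2 * norms
        logp = np.log(pi + 1e-12) - 0.5 * resid / sigma2   # unnormalized log-resp.
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)           # responsibilities (N, K)
        # M-step: weighted least-squares update of each signal, then weights, noise.
        for k in range(K):
            w = R[:, k] * A[:, k]
            S[k] = (w @ Y) / (np.sum(R[:, k] * A[:, k] ** 2) + 1e-12)
        pi = R.mean(axis=0)
        sigma2 = np.sum(R * resid) / (N * D)
    return S, pi, sigma2
```

As a usage sketch, `S, pi, sigma2 = fit_smm(Y, K=5)` would return five unit-scale-free signal estimates from an (N, D) data matrix; the paper's actual EM algorithm may parameterize the scales and noise differently.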
Related papers
- ContextMRI: Enhancing Compressed Sensing MRI through Metadata Conditioning [51.26601171361753]
We propose ContextMRI, a text-conditioned diffusion model for MRI that integrates granular metadata into the reconstruction process.
We show that increasing the fidelity of metadata, ranging from slice location and contrast to patient age, sex, and pathology, systematically boosts reconstruction performance.
arXiv Detail & Related papers (2025-01-08T05:15:43Z)
- Domain-Agnostic Stroke Lesion Segmentation Using Physics-Constrained Synthetic Data [0.15749416770494706]
We propose two novel approaches using synthetic quantitative MRI (qMRI) images to enhance the robustness and generalisability of segmentation models.
We trained a qMRI estimation model to predict qMRI maps from MPRAGE images, which were used to simulate diverse MRI sequences for segmentation training.
A second approach built upon prior work in synthetic data for stroke lesion segmentation, generating qMRI maps from a dataset of tissue labels.
arXiv Detail & Related papers (2024-12-04T13:52:05Z)
- Mixture of Coupled HMMs for Robust Modeling of Multivariate Healthcare Time Series [7.5986411724707095]
We propose a novel class of models, a mixture of coupled hidden Markov models (M-CHMM).
To make the model learning feasible, we derive two algorithms to sample the sequences of the latent variables in the CHMM.
Compared to existing inference methods, our algorithms are computationally tractable, improve mixing, and allow for likelihood estimation.
arXiv Detail & Related papers (2023-11-14T02:55:37Z)
- Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion Generative Models [75.52575380824051]
We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI.
We use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process.
Our method requires as few as five training images to learn effective sampling patterns.
arXiv Detail & Related papers (2023-06-05T22:09:06Z)
- Bridging the Usability Gap: Theoretical and Methodological Advances for Spectral Learning of Hidden Markov Models [0.8287206589886879]
The Baum-Welch (B-W) algorithm is the most widely accepted method for inferring hidden Markov models (HMMs).
It is prone to getting stuck in local optima, and can be too slow for many real-time applications.
We propose a novel algorithm called projected SHMM (PSHMM) that mitigates the problem of error propagation.
arXiv Detail & Related papers (2023-02-15T02:58:09Z)
- A robust estimator of mutual information for deep learning interpretability [2.574652392763709]
We present GMM-MI, an algorithm that can be applied to both discrete and continuous settings.
We extensively validate GMM-MI on toy data for which the ground truth MI is known.
We then demonstrate the use of our MI estimator in the context of representation learning.
arXiv Detail & Related papers (2022-10-31T18:00:02Z)
- Learning Hidden Markov Models When the Locations of Missing Observations are Unknown [54.40592050737724]
We consider the general problem of learning an HMM from data with unknown missing observation locations.
We provide reconstruction algorithms that do not require any assumptions about the structure of the underlying chain.
We show that under proper specifications one can reconstruct the process dynamics as well as if the missing observations positions were known.
arXiv Detail & Related papers (2022-03-12T22:40:43Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Efficient Learning and Decoding of the Continuous-Time Hidden Markov Model for Disease Progression Modeling [119.50438407358862]
We present the first complete characterization of efficient EM-based learning methods for CT-HMM models.
We show that EM-based learning consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics.
We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer's disease dataset.
arXiv Detail & Related papers (2021-10-26T20:06:05Z)
- Robust Classification using Hidden Markov Models and Mixtures of Normalizing Flows [25.543231171094384]
We use a generative model that combines the state transitions of a hidden Markov model (HMM) and the neural network based probability distributions for the hidden states of the HMM.
We verify the improved robustness of NMM-HMM classifiers in an application to speech recognition.
arXiv Detail & Related papers (2021-02-15T00:40:30Z)
- Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning [66.18202188565922]
We propose a communication-efficient decentralized machine learning (ML) algorithm, coined quantized group ADMM (Q-GADMM).
We develop a novel quantization method to adaptively adjust model quantization levels and their probabilities, while proving the convergence of Q-GADMM for convex functions.
arXiv Detail & Related papers (2019-10-23T10:47:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.