Sparse Hyperparametric Itakura-Saito NMF via Bi-Level Optimization
- URL: http://arxiv.org/abs/2502.17123v2
- Date: Tue, 25 Feb 2025 12:24:35 GMT
- Title: Sparse Hyperparametric Itakura-Saito NMF via Bi-Level Optimization
- Authors: Laura Selicato, Flavia Esposito, Andersen Ang, Nicoletta Del Buono, Rafal Zdunek
- Abstract summary: We propose a new algorithm called SHINBO, which introduces a bi-level optimization framework to automatically and adaptively tune the row-dependent penalty hyperparameters. Experimental results show that SHINBO ensures precise spectral decomposition and delivers superior performance in both synthetic and real-world applications.
- Score: 1.5379084885764847
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The selection of penalty hyperparameters is a critical aspect in Nonnegative Matrix Factorization (NMF), since these values control the trade-off between reconstruction accuracy and adherence to the desired constraints. In this work, we focus on an NMF problem involving the Itakura-Saito (IS) divergence, effective for extracting low spectral density components from spectrograms of mixed signals, enhanced with sparsity constraints. We propose a new algorithm called SHINBO, which introduces a bi-level optimization framework to automatically and adaptively tune the row-dependent penalty hyperparameters, enhancing the ability of IS-NMF to isolate sparse, periodic signals against noise. Experimental results show that SHINBO ensures precise spectral decomposition and delivers superior performance in both synthetic and real-world applications. For the latter, SHINBO is particularly useful in noninvasive vibration-based fault detection for rolling bearings, where the desired signal components often reside in high-frequency subbands but are obscured by stronger, spectrally broader noise. By addressing the critical issue of hyperparameter selection, SHINBO advances the state of the art in signal recovery for complex, noise-dominated environments.
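To make the setting concrete, below is a minimal sketch of the inner subproblem only: standard multiplicative updates for IS-NMF with a fixed row-wise L1 penalty on the activation matrix H. The function name, initialization, and the penalty vector `lam` are illustrative assumptions; SHINBO's actual contribution, the bi-level scheme that tunes the row-dependent penalties automatically, is not reproduced here.

```python
import numpy as np

def sparse_is_nmf(V, K, lam, n_iter=200, eps=1e-12, seed=0):
    """Heuristic multiplicative updates for Itakura-Saito NMF with a row-wise
    L1 penalty on H:  min_{W,H >= 0}  D_IS(V | W @ H) + sum_k lam[k] * ||H[k, :]||_1.

    V   : (F, N) nonnegative power spectrogram
    K   : number of components
    lam : length-K array of fixed row-dependent penalty weights
          (SHINBO tunes these adaptively; here they are hand-chosen constants).
    """
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, K)) + eps
    H = rng.random((K, N)) + eps
    lam = np.asarray(lam, dtype=float).reshape(K, 1)

    for _ in range(n_iter):
        WH = W @ H + eps
        # H update: IS-divergence gradient split, with lam_k added to the
        # denominator of row k (standard heuristic for penalized updates).
        H *= (W.T @ (V / WH**2)) / (W.T @ (1.0 / WH) + lam)
        WH = W @ H + eps
        # W update: plain (unpenalized) IS-NMF multiplicative rule.
        W *= ((V / WH**2) @ H.T) / ((1.0 / WH) @ H.T + eps)

    return W, H
```

A toy call such as `W, H = sparse_is_nmf(np.random.rand(128, 400), K=4, lam=[0.0, 0.1, 0.1, 1.0])` applies a different sparsity pressure to each activation row; in SHINBO, the vector `lam` would instead be updated by the outer level of the bi-level problem rather than fixed in advance.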
Related papers
- Triply Laplacian Scale Mixture Modeling for Seismic Data Noise Suppression [51.87076090814921]
Sparsity-based tensor recovery methods have shown great potential in suppressing seismic data noise.
We propose a novel triply Laplacian scale mixture (TLSM) approach for seismic data noise suppression.
arXiv Detail & Related papers (2025-02-20T08:28:01Z) - Enhanced Confocal Laser Scanning Microscopy with Adaptive Physics Informed Deep Autoencoders [0.0]
We present a physics-informed deep learning framework to address limitations in Confocal Laser Scanning Microscopy. The model reconstructs high-fidelity images from heavily noisy inputs using convolutional and transposed convolutional layers.
arXiv Detail & Related papers (2025-01-24T18:32:34Z) - FM2S: Towards Spatially-Correlated Noise Modeling in Zero-Shot Fluorescence Microscopy Image Denoising [33.383511185170214]
Fluorescence Micrograph to Self (FM2S) is a zero-shot denoiser that achieves efficient fluorescence micrograph denoising through three key innovations.
Experiments across FMI datasets demonstrate FM2S's superiority: it outperforms CVF-SID by 1.4 dB PSNR on average while requiring only 0.1% of AP-BSN's parameters.
arXiv Detail & Related papers (2024-12-13T10:45:25Z) - DiffFNO: Diffusion Fourier Neural Operator [8.895165270489167]
We introduce DiffFNO, a novel diffusion framework for arbitrary-scale super-resolution strengthened by a Weighted Fourier Neural Operator (WFNO).
We show that DiffFNO achieves state-of-the-art (SOTA) results, outperforming existing methods across various scaling factors by a margin of 2 to 4 dB in PSNR.
Our approach sets a new standard in super-resolution, delivering both superior accuracy and computational efficiency.
arXiv Detail & Related papers (2024-11-15T03:14:11Z) - Gradient Normalization Provably Benefits Nonconvex SGD under Heavy-Tailed Noise [60.92029979853314]
We investigate the roles of gradient normalization and clipping in ensuring the convergence of stochastic gradient descent (SGD) under heavy-tailed noise (a minimal sketch of these two operations appears after the related-papers list).
Our work provides the first theoretical evidence demonstrating the benefits of gradient normalization in SGD under heavy-tailed noise.
We introduce an accelerated SGD variant incorporating gradient normalization and clipping, further enhancing convergence rates under heavy-tailed noise.
arXiv Detail & Related papers (2024-10-21T22:40:42Z) - Stable Neighbor Denoising for Source-free Domain Adaptive Segmentation [91.83820250747935]
Pseudo-label noise is mainly contained in unstable samples in which predictions of most pixels undergo significant variations during self-training.
We introduce the Stable Neighbor Denoising (SND) approach, which effectively discovers highly correlated stable and unstable samples.
SND consistently outperforms state-of-the-art methods in various SFUDA semantic segmentation settings.
arXiv Detail & Related papers (2024-06-10T21:44:52Z) - ROPO: Robust Preference Optimization for Large Language Models [59.10763211091664]
We propose an iterative alignment approach that integrates noise-tolerance and filtering of noisy samples without the aid of external models.
Experiments on three widely-used datasets with Mistral-7B and Llama-2-7B demonstrate that ROPO significantly outperforms existing preference alignment methods.
arXiv Detail & Related papers (2024-04-05T13:58:51Z) - Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning [86.08285033925597]
This paper presents a new DP perturbation mechanism with a time-varying noise amplitude to protect the privacy of federated learning.
We derive an online refinement of the noise-amplitude series to prevent FL from converging prematurely due to excessive perturbation noise.
The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated against the state-of-the-art Gaussian noise mechanism with a persistent noise amplitude (an illustrative sketch of a time-varying perturbation appears after the related-papers list).
arXiv Detail & Related papers (2023-03-07T22:52:40Z) - Residual Degradation Learning Unfolding Framework with Mixing Priors across Spectral and Spatial for Compressive Spectral Imaging [29.135848304404533]
Coded aperture snapshot spectral imaging (CASSI) captures a 3D spectral cube in a single 2D snapshot measurement.
The core problem of the CASSI system is to recover a reliable and fine underlying 3D spectral cube from the 2D measurement.
We propose a Residual Degradation Learning Unfolding Framework (RDLUF) which bridges the gap between the sensing matrix and the degradation process.
arXiv Detail & Related papers (2022-11-13T12:31:49Z) - SiNeRF: Sinusoidal Neural Radiance Fields for Joint Pose Estimation and Scene Reconstruction [147.9379707578091]
NeRFmm is a Neural Radiance Fields (NeRF) variant that jointly optimizes camera poses and the scene representation.
Although NeRFmm produces precise scene synthesis and pose estimates, it still struggles to outperform the fully annotated baseline on challenging scenes.
We propose Sinusoidal Neural Radiance Fields (SiNeRF), which leverage sinusoidal activations for radiance mapping and a novel Mixed Region Sampling (MRS) strategy for selecting ray batches efficiently.
arXiv Detail & Related papers (2022-10-10T10:47:51Z) - Hyperspectral Image Denoising Using Non-convex Local Low-rank and Sparse Separation with Spatial-Spectral Total Variation Regularization [49.55649406434796]
We propose a novel non-convex approach to robust principal component analysis for HSI denoising.
We develop accurate approximations to both the low-rank and sparse components.
Experiments on both simulated and real HSIs demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-01-08T11:48:46Z)
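For the gradient-normalization entry above, the following is a minimal illustrative sketch of the two stabilizing transforms, clipping and normalization, applied to a single SGD step. The function names, constants, and interface are assumptions for illustration, not that paper's exact algorithm or step sizes.

```python
import numpy as np

def clip_gradient(g, c=1.0, eps=1e-12):
    """Clipping: rescale g so that its Euclidean norm is at most c."""
    return g * min(1.0, c / (np.linalg.norm(g) + eps))

def normalize_gradient(g, eps=1e-12):
    """Normalization: keep only the direction of g (unit norm)."""
    return g / (np.linalg.norm(g) + eps)

def sgd_step(params, grad, lr=1e-2, mode="normalize"):
    """One illustrative SGD step under heavy-tailed gradient noise.
    `mode` picks which stabilizing transform to apply; both bound the
    influence of rare, very large gradient samples on the update."""
    g = np.asarray(grad, dtype=float)
    g = normalize_gradient(g) if mode == "normalize" else clip_gradient(g)
    return np.asarray(params, dtype=float) - lr * g
```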
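For the amplitude-varying perturbation entry above, the sketch below adds Gaussian noise whose amplitude decays geometrically over federated-learning rounds. The geometric schedule and all constants are assumptions; that paper derives and refines its own amplitude series online, which is not reproduced here.

```python
import numpy as np

def perturb_update(update, round_idx, sigma0=1.0, decay=0.95, seed=None):
    """Illustrative amplitude-varying perturbation of a client update.
    The Gaussian noise amplitude shrinks geometrically with the FL round,
    so later rounds are perturbed less than earlier ones."""
    rng = np.random.default_rng(seed)
    sigma = sigma0 * decay**round_idx        # time-varying noise amplitude
    noise = rng.normal(0.0, sigma, size=np.shape(update))
    return np.asarray(update, dtype=float) + noise
```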
This list is automatically generated from the titles and abstracts of the papers on this site.