Glocal Information Bottleneck for Time Series Imputation
- URL: http://arxiv.org/abs/2510.04910v1
- Date: Mon, 06 Oct 2025 15:24:44 GMT
- Title: Glocal Information Bottleneck for Time Series Imputation
- Authors: Jie Yang, Kexin Zhang, Guibin Zhang, Philip S. Yu, Kaize Ding,
- Abstract summary: Time Series Imputation aims to recover missing values in temporal data. Existing models typically optimize the point-wise reconstruction loss, focusing on recovering numerical values (local information). We propose a new training paradigm, Glocal Information Bottleneck (Glocal-IB).
- Score: 70.41814118117311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time Series Imputation (TSI), which aims to recover missing values in temporal data, remains a fundamental challenge due to the complex and often high-rate missingness in real-world scenarios. Existing models typically optimize the point-wise reconstruction loss, focusing on recovering numerical values (local information). However, we observe that under high missing rates, these models still perform well in the training phase yet produce poor imputations and distorted latent representation distributions (global information) in the inference phase. This reveals a critical optimization dilemma: current objectives lack global guidance, leading models to overfit local noise and fail to capture global information of the data. To address this issue, we propose a new training paradigm, Glocal Information Bottleneck (Glocal-IB). Glocal-IB is model-agnostic and extends the standard IB framework by introducing a Global Alignment loss, derived from a tractable mutual information approximation. This loss aligns the latent representations of masked inputs with those of their originally observed counterparts. It helps the model retain global structure and local details while suppressing noise caused by missing values, giving rise to better generalization under high missingness. Extensive experiments on nine datasets confirm that Glocal-IB leads to consistently improved performance and aligned latent representations under missingness. Our code implementation is available at https://github.com/Muyiiiii/NeurIPS-25-Glocal-IB.
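The abstract describes a point-wise reconstruction loss augmented with a Global Alignment term that pulls the latent code of the masked input toward that of the fully observed input. A minimal sketch of that idea follows; the paper derives its term from a mutual-information approximation, so the cosine-alignment form, the function name `glocal_ib_loss`, and the weight `lam` below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def glocal_ib_loss(z_masked, z_full, x_hat, x, mask, lam=0.1):
    """Point-wise reconstruction on observed entries (local term) plus a
    hypothetical cosine alignment of the two latent codes (global term)."""
    # Local term: squared error only where values are observed (mask == 1).
    recon = np.sum(mask * (x_hat - x) ** 2) / np.maximum(mask.sum(), 1)
    # Global term: 1 - cosine similarity between latents of the masked
    # input and of the fully observed input; 0 when they coincide.
    cos = np.dot(z_masked, z_full) / (
        np.linalg.norm(z_masked) * np.linalg.norm(z_full) + 1e-8)
    align = 1.0 - cos
    return recon + lam * align
```

With identical latents and a perfect reconstruction both terms vanish, so the loss is zero; any divergence between the two latent codes adds a penalty scaled by `lam`.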
Related papers
- FedNSAM: Consistency of Local and Global Flatness for Federated Learning [26.41380732455181]
We propose a novel FedNSAM algorithm that accelerates the SAM algorithm by introducing global Nesterov momentum into the local update. FedNSAM uses the global Nesterov momentum as the direction for locally estimating and extrapolating client-side global perturbations. Empirically, we conduct comprehensive experiments on CNN and Transformer models to verify the superior performance and efficiency of FedNSAM.
arXiv Detail & Related papers (2026-02-27T09:07:47Z) - Sharpness-aware Federated Graph Learning [16.148982247077157]
One of many impediments to applying graph neural networks (GNNs) to large-scale real-world graph data is the challenge of centralized training. Federated graph learning (FGL) addresses this by enabling collaborative GNN model training without sharing private data.
arXiv Detail & Related papers (2025-12-18T06:57:13Z) - Revisiting Multivariate Time Series Forecasting with Missing Values [74.56971641937771]
Missing values are common in real-world time series. Current approaches have developed an imputation-then-prediction framework that uses imputation modules to fill in missing values, followed by forecasting on the imputed data. This framework overlooks a critical issue: there is no ground truth for the missing values, making the imputation process susceptible to errors that can degrade prediction accuracy. We introduce Consistency-Regularized Information Bottleneck (CRIB), a novel framework built on the Information Bottleneck principle.
arXiv Detail & Related papers (2025-09-27T20:57:48Z) - UnLoc: Leveraging Depth Uncertainties for Floorplan Localization [80.55849461031879]
UnLoc is an efficient data-driven solution for sequential camera localization within floorplans. We introduce a novel probabilistic model that incorporates uncertainty estimation, modeling depth predictions as explicit probability distributions. We evaluate UnLoc on large-scale synthetic and real-world datasets, demonstrating significant improvements in terms of accuracy and robustness.
arXiv Detail & Related papers (2025-09-14T14:45:43Z) - RefiDiff: Refinement-Aware Diffusion for Efficient Missing Data Imputation [13.401822039640297]
Missing values in high-dimensional, mixed-type datasets pose significant challenges for data imputation. We propose an innovative framework, RefiDiff, combining local machine learning predictions with a novel Mamba-based denoising network. RefiDiff outperforms state-of-the-art (SOTA) methods across missing-value settings with a 4x faster training time than DDPM-based approaches.
arXiv Detail & Related papers (2025-05-20T14:51:07Z) - BEVDiffLoc: End-to-End LiDAR Global Localization in BEV View based on Diffusion Model [8.720833232645155]
Bird's-Eye-View (BEV) image is one of the most widely adopted data representations in autonomous driving. We propose BEVDiffLoc, a novel framework that formulates LiDAR localization as a conditional generation of poses.
arXiv Detail & Related papers (2025-03-14T13:17:43Z) - IB-AdCSCNet:Adaptive Convolutional Sparse Coding Network Driven by Information Bottleneck [4.523653503622693]
We introduce IB-AdCSCNet, a deep learning model grounded in information bottleneck theory.
IB-AdCSCNet seamlessly integrates the information bottleneck trade-off strategy into deep networks.
Experimental results on CIFAR-10 and CIFAR-100 datasets demonstrate that IB-AdCSCNet not only matches the performance of deep residual convolutional networks but also outperforms them when handling corrupted data.
arXiv Detail & Related papers (2024-05-23T05:35:57Z) - Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z) - Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
The data samples usually follow a long-tailed distribution in the real world, and FL on the decentralized and long-tailed data yields a poorly-behaved global model.
In this work, we integrate the local real data with the global gradient prototypes to form the local balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z) - Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning [112.69497636932955]
Federated learning aims to train models across different clients without the sharing of data for privacy considerations.
We study how data heterogeneity affects the representations of the globally aggregated models.
We propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning.
arXiv Detail & Related papers (2022-10-01T09:04:17Z) - Revisiting Communication-Efficient Federated Learning with Balanced Global and Local Updates [14.851898446967672]
We investigate and analyze the optimal trade-off between the number of local trainings and that of global aggregations.
Our proposed scheme achieves better prediction accuracy and converges much faster than the baseline schemes.
arXiv Detail & Related papers (2022-05-03T13:05:26Z) - Global Update Guided Federated Learning [11.731231528534035]
Federated learning protects data privacy and security by exchanging models instead of data.
We propose global-update-guided federated learning (FedGG), which introduces a model-cosine loss into local objective functions.
Numerical simulations show that FedGG significantly improves model convergence accuracy and speed.
arXiv Detail & Related papers (2022-04-08T08:36:26Z)
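The FedGG entry above mentions adding a "model-cosine loss" to each client's local objective. A rough sketch of one plausible form of such a penalty follows; the function name `model_cosine_loss` and the flattened-parameter cosine formulation are assumptions for illustration, not FedGG's actual definition.

```python
import numpy as np

def model_cosine_loss(w_local, w_global):
    """Hypothetical model-cosine penalty: one minus the cosine similarity
    between flattened local and global model weights. Added to a client's
    local objective, it discourages local updates from drifting away from
    the direction of the global model."""
    wl, wg = w_local.ravel(), w_global.ravel()
    cos = np.dot(wl, wg) / (np.linalg.norm(wl) * np.linalg.norm(wg) + 1e-8)
    return 1.0 - cos
```

The penalty is zero when the local and global weights point in the same direction and grows toward 2 as they oppose each other, so minimizing it keeps local training loosely guided by the global update.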
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.