LIFE: Learning Individual Features for Multivariate Time Series
Prediction with Missing Values
- URL: http://arxiv.org/abs/2109.14844v1
- Date: Thu, 30 Sep 2021 04:53:24 GMT
- Title: LIFE: Learning Individual Features for Multivariate Time Series
Prediction with Missing Values
- Authors: Zhao-Yu Zhang, Shao-Qun Zhang, Yuan Jiang, and Zhi-Hua Zhou
- Abstract summary: We propose a Learning Individual Features (LIFE) framework, which provides a new paradigm for MTS prediction with missing values.
LIFE generates reliable features for prediction by using the correlated dimensions as auxiliary information and suppressing the interference from uncorrelated dimensions with missing values.
Experiments on three real-world data sets verify the superiority of LIFE to existing state-of-the-art models.
- Score: 71.52335136040664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multivariate time series (MTS) prediction is ubiquitous in real-world fields,
but MTS data often contains missing values. In recent years, there has been an
increasing interest in using end-to-end models to handle MTS with missing
values. To generate features for prediction, existing methods either merge all
input dimensions of MTS or tackle each input dimension independently. However,
both approaches struggle to perform well because the former usually produces
many unreliable features and the latter lacks correlated information. In this
paper, we propose a Learning Individual Features (LIFE) framework, which
provides a new paradigm for MTS prediction with missing values. LIFE generates
reliable features for prediction by using the correlated dimensions as
auxiliary information and suppressing the interference from uncorrelated
dimensions with missing values. Experiments on three real-world data sets
verify the superiority of LIFE to existing state-of-the-art models.
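As a rough illustration of the idea described in the abstract (correlated dimensions serve as auxiliary information for each target dimension, while uncorrelated dimensions with missing values are suppressed), the Python sketch below is an illustrative paraphrase rather than the authors' LIFE implementation; the correlation threshold and the masked-average feature are assumptions made for the example.

```python
import numpy as np

def life_style_features(X, mask, correlation_threshold=0.5):
    """Illustrative sketch (not the authors' code): build one feature per
    dimension by averaging the target dimension with only its correlated
    dimensions, ignoring missing entries.

    X    : (T, D) array of observations, NaN where missing
    mask : (T, D) boolean array, True where a value is observed
    """
    # Crude correlation estimate on zero-filled data; a real implementation
    # would estimate correlations under missingness more carefully.
    X_filled = np.where(mask, X, 0.0)
    corr = np.corrcoef(X_filled, rowvar=False)

    T, D = X.shape
    features = np.zeros((T, D))
    for d in range(D):
        # Correlated dimensions act as auxiliary information for dimension d;
        # uncorrelated dimensions are excluded to suppress their interference.
        aux = np.abs(corr[d]) >= correlation_threshold
        aux[d] = True
        # Down-weight missing entries within the correlated set.
        weights = mask[:, aux].astype(float)
        values = np.where(mask[:, aux], X[:, aux], 0.0)
        denom = np.maximum(weights.sum(axis=1), 1.0)
        features[:, d] = values.sum(axis=1) / denom
    return features
```

The sketch only covers per-dimension feature construction; in LIFE such features are then fed to the downstream prediction model.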
Related papers
- Scalable Numerical Embeddings for Multivariate Time Series: Enhancing Healthcare Data Representation Learning [6.635084843592727]
We propose SCAlable Numerical Embedding (SCANE), a novel framework that treats each feature value as an independent token.
SCANE regularizes the traits of distinct feature embeddings and enhances representational learning through a scalable embedding mechanism.
We develop the Scalable nUMerical eMbeddIng Transformer (SUMMIT), which is engineered to deliver precise predictive outputs for MTS characterized by prevalent missing entries.
arXiv Detail & Related papers (2024-05-26T13:06:45Z) - Debiasing Multimodal Models via Causal Information Minimization [65.23982806840182]
We study bias arising from confounders in a causal graph for multimodal data.
Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data.
We use these features as confounder representations and use them via methods motivated by causal theory to remove bias from models.
arXiv Detail & Related papers (2023-11-28T16:46:14Z) - Beyond Sharing: Conflict-Aware Multivariate Time Series Anomaly
Detection [18.796225184893874]
We introduce CAD, a Conflict-aware Anomaly Detection algorithm.
We find that the poor performance of vanilla MMoE mainly comes from the input-output misalignment settings of the MTS formulation.
We show that CAD obtains an average F1-score of 0.943 across three public datasets, notably outperforming state-of-the-art methods.
arXiv Detail & Related papers (2023-08-17T11:00:01Z) - Sharing pattern submodels for prediction with missing values [12.981974894538668]
Missing values are unavoidable in many applications of machine learning and present challenges both during training and at test time.
We propose an alternative approach, called sharing pattern submodels, which i) makes predictions robust to missing values at test time, ii) maintains or improves the predictive power of pattern submodels, and iii) has a short description, enabling improved interpretability.
arXiv Detail & Related papers (2022-06-22T15:09:40Z) - The DONUT Approach to Ensemble Combination Forecasting [0.0]
This paper presents an ensemble forecasting method that shows strong results on the M4 Competition dataset.
Our assumption reductions, consisting mainly of auto-generated features and a more diverse model pool, significantly outperform the statistical-feature-based ensemble method FFORMA.
We also present a formal ex-post-facto analysis of optimal combination and selection for ensembles, quantifying differences through linear optimization on the M4 dataset.
arXiv Detail & Related papers (2022-01-02T22:19:26Z) - Networked Time Series Prediction with Incomplete Data [59.45358694862176]
We propose NETS-ImpGAN, a novel deep learning framework that can be trained on incomplete data with missing values in both history and future.
We conduct extensive experiments on three real-world datasets under different missing patterns and missing rates.
arXiv Detail & Related papers (2021-10-05T18:20:42Z) - Uncertainty Prediction for Machine Learning Models of Material
Properties [0.0]
Uncertainty in AI-based predictions of material properties is of immense importance for the success and reliability of AI applications in material science.
We compare 3 different approaches to obtain such individual uncertainty, testing them on 12 ML-physical properties.
arXiv Detail & Related papers (2021-07-16T16:33:55Z) - M2Net: Multi-modal Multi-channel Network for Overall Survival Time
Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z) - Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z) - Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples (a rough sketch of this update appears after the list below).
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
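For the Meta-Learned Confidence entry above, the following sketch illustrates a confidence-weighted prototype update of the kind described there; the function name, array shapes, and blending factor are assumptions for the example, not the paper's implementation.

```python
import numpy as np

def refine_prototypes(prototypes, queries, confidence):
    """Hypothetical sketch: refine class prototypes with unlabeled queries,
    weighting each query by a (meta-learned) confidence score per class.

    prototypes : (C, F) initial class prototypes from the support set
    queries    : (Q, F) unlabeled query embeddings
    confidence : (Q, C) per-query, per-class confidence weights in [0, 1]
    """
    # Confidence-weighted mean of the queries for each class.
    weighted_sum = confidence.T @ queries                    # (C, F)
    weight_total = confidence.sum(axis=0, keepdims=True).T   # (C, 1)
    query_mean = weighted_sum / np.maximum(weight_total, 1e-8)
    # Blend support-based prototypes with the query-based estimate;
    # the 0.5/0.5 blend is an illustrative choice.
    return 0.5 * prototypes + 0.5 * query_mean
```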
This list is automatically generated from the titles and abstracts of the papers on this site.