OR-Net: Pointwise Relational Inference for Data Completion under Partial
Observation
- URL: http://arxiv.org/abs/2105.00397v2
- Date: Wed, 5 May 2021 09:28:34 GMT
- Title: OR-Net: Pointwise Relational Inference for Data Completion under Partial
Observation
- Authors: Qianyu Feng, Linchao Zhu, Bang Zhang, Pan Pan, Yi Yang
- Abstract summary: This work uses relational inference to fill in the incomplete data.
We propose Omni-Relational Network (OR-Net) to model the pointwise relativity in two aspects.
- Score: 51.083573770706636
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contemporary data-driven methods are typically trained with full
supervision on large-scale datasets, which limits their applicability. In real
systems, however, limitations such as measurement error and data-acquisition
problems mean that the available data are usually incomplete. Although data
completion has attracted wide attention, the underlying data patterns and
relativity remain under-explored. The family of latent variable models allows
learning deep latent variables over observed variables by fitting the marginal
distribution, but, as far as we know, current methods fail to perceive the data
relativity under partial observation. Aiming at modeling incomplete data, this
work uses relational inference to fill in the incomplete data. Specifically, we
approximate the real joint distribution over the partial observation and latent
variables, and thereby infer the unseen targets. To this end, we propose the
Omni-Relational Network (OR-Net) to model the pointwise relativity in two
aspects: (i) on one hand, the inner relationship is built among the context
points in the partial observation; (ii) on the other hand, the unseen targets
are inferred by learning the cross-relationship with the observed data points.
We further find that the proposed method generalizes to different scenarios
regardless of whether the physical structure can be observed. We demonstrate
that OR-Net generalizes well to data completion tasks of various modalities,
including function regression, image completion on the MNIST and CelebA
datasets, and sequential motion generation conditioned on the observed poses.
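The two relations described in the abstract can be sketched as a toy, attention-style completion routine: context points first attend to each other (inner relationship), then each unseen target attends to the refined context (cross-relationship). This is a generic illustration, not the authors' implementation; the distance kernel, `scale` parameter, and function names are assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def relational_complete(x_ctx, y_ctx, x_tgt, scale=1.0):
    """Infer unseen targets from partial observations via two relations:
    (i) self-attention among the observed context points,
    (ii) cross-attention from each target to the refined context."""
    # (i) inner relationship: refine each context value by attending
    # to the other context points (closer points get higher weight).
    d_ctx = -((x_ctx[:, None] - x_ctx[None, :]) ** 2) / scale
    y_ctx_refined = softmax(d_ctx, axis=-1) @ y_ctx
    # (ii) cross-relationship: each unseen target attends to the
    # refined context to infer its own value.
    d_cross = -((x_tgt[:, None] - x_ctx[None, :]) ** 2) / scale
    return softmax(d_cross, axis=-1) @ y_ctx_refined

# Usage: complete a 1-D function from five observed points.
x_ctx = np.linspace(0.0, 1.0, 5)
y_ctx = np.sin(2 * np.pi * x_ctx)
x_tgt = np.array([0.1, 0.6])
y_tgt = relational_complete(x_ctx, y_ctx, x_tgt, scale=0.05)
```

Because both steps are convex combinations of the observed values, the inferred targets always stay inside the range of the context observations; a learned network would of course not be restricted this way.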
Related papers
- Learning Divergence Fields for Shift-Robust Graph Representations [73.11818515795761]
In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging problem of learning with interdependent data.
We derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive to distribution shifts across domains.
arXiv Detail & Related papers (2024-06-07T14:29:21Z)
- General Identifiability and Achievability for Causal Representation Learning [33.80247458590611]
The paper establishes identifiability and achievability results using two hard uncoupled interventions per node in the latent causal graph.
For identifiability, the paper establishes that perfect recovery of the latent causal model and variables is guaranteed under uncoupled interventions.
The analysis additionally recovers the identifiability result for two hard coupled interventions, that is, when metadata about which pair of environments intervene on the same node is known.
arXiv Detail & Related papers (2023-10-24T01:47:44Z)
- Approximating Counterfactual Bounds while Fusing Observational, Biased and Randomised Data Sources [64.96984404868411]
We address the problem of integrating data from multiple, possibly biased, observational and interventional studies.
We show that the likelihood of the available data has no local maxima.
We then show how the same approach can address the general case of multiple datasets.
arXiv Detail & Related papers (2023-07-31T11:28:24Z)
- Probabilistic Learning of Multivariate Time Series with Temporal Irregularity [25.91078012394032]
We address temporal irregularities, including nonuniform time intervals and component misalignment.
We develop a conditional flow representation to non-parametrically represent the data distribution, which is typically non-Gaussian.
The broad applicability and superiority of the proposed solution are confirmed by comparing it with existing approaches through ablation studies and testing on real-world datasets.
arXiv Detail & Related papers (2023-06-15T14:08:48Z)
- On the Joint Interaction of Models, Data, and Features [82.60073661644435]
We introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data and model through features.
Based on these observations, we propose a conceptual framework for feature learning.
Under this framework, the expected accuracy for a single hypothesis and agreement for a pair of hypotheses can both be derived in closed-form.
arXiv Detail & Related papers (2023-06-07T21:35:26Z)
- ProtoVAE: Prototypical Networks for Unsupervised Disentanglement [1.6114012813668934]
We introduce a novel deep generative VAE-based model, ProtoVAE, that leverages a deep metric learning Prototypical network trained using self-supervision.
Our model is completely unsupervised and requires no a priori knowledge of the dataset, including the number of factors.
We evaluate our proposed model on the benchmark dSprites, 3DShapes, and MPI3D disentanglement datasets.
arXiv Detail & Related papers (2023-05-16T01:29:26Z)
- Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning [112.69497636932955]
Federated learning aims to train models across different clients without the sharing of data for privacy considerations.
We study how data heterogeneity affects the representations of the globally aggregated models.
We propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning.
arXiv Detail & Related papers (2022-10-01T09:04:17Z)
- On Disentangled Representations Learned From Correlated Data [59.41587388303554]
We bridge the gap to real-world scenarios by analyzing the behavior of the most prominent disentanglement approaches on correlated data.
We show that systematically induced correlations in the dataset are being learned and reflected in the latent representations.
We also demonstrate how to resolve these latent correlations, either using weak supervision during training or by post-hoc correcting a pre-trained model with a small number of labels.
arXiv Detail & Related papers (2020-06-14T12:47:34Z)
- Linear predictor on linearly-generated data with missing values: non-consistency and solutions [0.0]
We study the seemingly-simple case where the target to predict is a linear function of the fully-observed data.
We show that, in the presence of missing values, the optimal predictor may not be linear.
arXiv Detail & Related papers (2020-02-03T11:49:35Z)
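The claim in the last entry, that the optimal predictor may not be linear once values go missing, can be checked numerically. The sketch below (an illustration of the phenomenon, not the paper's method; the data-generating process and variable names are assumptions) mean-imputes a missing covariate and shows that a single linear model is beaten by fitting one linear model per missingness pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 10_000, 0.5

# Target is a linear function of the fully-observed covariates.
x1 = rng.standard_normal(n)
x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
y = x1 + x2

# x2 goes missing completely at random; impute with its (zero) mean.
miss = rng.random(n) < 0.5
x2_imp = np.where(miss, 0.0, x2)

def fit_mse(X, y):
    """Least-squares fit; return in-sample mean squared error."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(np.mean(resid**2))

# One linear predictor on the imputed design matrix.
X = np.column_stack([np.ones(n), x1, x2_imp])
mse_single = fit_mse(X, y)

# One linear predictor per missingness pattern (piecewise-linear overall).
obs = ~miss
mse_pattern = (
    obs.mean() * fit_mse(np.column_stack([np.ones(obs.sum()), x1[obs], x2[obs]]), y[obs])
    + miss.mean() * fit_mse(np.column_stack([np.ones(miss.sum()), x1[miss]]), y[miss])
)
```

The pattern-wise predictor attains a strictly lower error because the optimal prediction depends on the missingness mask: it uses x1 + x2 when x2 is observed but roughly (1 + rho) * x1 when it is not, which no single set of linear coefficients can match.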
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.