Partially Observed Exchangeable Modeling
- URL: http://arxiv.org/abs/2102.06083v1
- Date: Thu, 11 Feb 2021 15:54:18 GMT
- Title: Partially Observed Exchangeable Modeling
- Authors: Yang Li and Junier B. Oliva
- Abstract summary: We propose a novel framework, partially observed exchangeable modeling (POEx), which takes in a set of related partially observed instances and infers the conditional distribution for the unobserved dimensions over multiple elements.
Our approach jointly models the intra-instance (among features in a point) and inter-instance (among multiple points in a set) dependencies in data.
- Score: 14.466964173883948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling dependencies among features is fundamental for many machine learning
tasks. Although there are often multiple related instances that may be
leveraged to inform conditional dependencies, typical approaches only model
conditional dependencies over individual instances. In this work, we propose a
novel framework, partially observed exchangeable modeling (POEx) that takes in
a set of related partially observed instances and infers the conditional
distribution for the unobserved dimensions over multiple elements. Our approach
jointly models the intra-instance (among features in a point) and
inter-instance (among multiple points in a set) dependencies in data. POEx is a
general framework that encompasses many existing tasks such as point cloud
expansion and few-shot generation, as well as new tasks like few-shot
imputation. Despite its generality, extensive empirical evaluations show that
our model achieves state-of-the-art performance across a range of applications.
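As an illustration of the few-shot imputation task the abstract describes, the sketch below fills missing entries of a set of related instances by blending intra-instance evidence (the instance's own observed features) with inter-instance evidence (the same feature observed in other set members). This is a hand-rolled toy baseline, not the POEx model itself; the function name, the mean-blending rule, and the `alpha` weight are all illustrative assumptions.

```python
import numpy as np

def toy_set_imputation(X, mask, alpha=0.5):
    """Toy few-shot imputation over a set of related instances.

    X    : (n, d) array of feature values (arbitrary where mask == 0)
    mask : (n, d) binary array, 1 = observed
    alpha: weight mixing intra-instance vs inter-instance evidence

    Missing entries are filled with a blend of the instance's own
    observed mean (intra-instance dependency) and the feature's mean
    across set members that observe it (inter-instance dependency).
    """
    X = X.astype(float)
    obs = mask.astype(bool)
    # intra-instance: mean of each instance's observed features
    row_mean = np.array([X[i, obs[i]].mean() if obs[i].any() else 0.0
                         for i in range(X.shape[0])])
    # inter-instance: mean of each feature over members observing it
    col_sum = (X * obs).sum(axis=0)
    col_cnt = obs.sum(axis=0)
    col_mean = np.divide(col_sum, np.maximum(col_cnt, 1))
    # blend the two sources of evidence for every missing entry
    fill = alpha * row_mean[:, None] + (1 - alpha) * col_mean[None, :]
    X_hat = X.copy()
    X_hat[~obs] = fill[~obs]
    return X_hat
```

A deep model like POEx learns these dependencies rather than averaging, but the information flow (within a point and across the set) is the same.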
Related papers
- Partial-Multivariate Model for Forecasting [28.120094495344453]
We propose a Transformer-based partial-multivariate model, PMformer, for forecasting problems.
We demonstrate that PMformer outperforms various univariate and complete-multivariate models.
We also highlight other advantages of PMformer: efficiency and robustness under missing features.
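A minimal sketch of the partial-multivariate idea: rather than attending over all features jointly, training draws random feature subsets and models dependencies only within each one, which is cheaper and degrades gracefully when features are missing. The helper below is illustrative only; PMformer's actual sampling scheme and hyperparameters may differ.

```python
import random

def sample_feature_subsets(n_features, subset_size, n_subsets, seed=0):
    """Sample random feature subsets for partial-multivariate training.

    Each subset groups `subset_size` feature indices; a forecaster then
    models dependencies only within a subset instead of across all
    `n_features` at once. (Illustrative sketch, not PMformer's scheme.)
    """
    rng = random.Random(seed)  # seeded for reproducible sampling
    return [sorted(rng.sample(range(n_features), subset_size))
            for _ in range(n_subsets)]
```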
arXiv Detail & Related papers (2024-08-19T05:18:50Z)
- AutoTask: Task Aware Multi-Faceted Single Model for Multi-Task Ads Relevance [2.380819994407948]
We introduce a novel multi-faceted attention model that performs task aware feature combination and cross task interaction modeling.
Our technique formulates the feature combination problem as "language" modeling with auto-regressive attentions across both feature and task dimensions.
arXiv Detail & Related papers (2024-07-09T05:13:45Z)
- UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting [98.12558945781693]
We propose a transformer-based model UniTST containing a unified attention mechanism on the flattened patch tokens.
Although our proposed model employs a simple architecture, it offers compelling performance as shown in our experiments on several datasets for time series forecasting.
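The unified-attention idea can be sketched as flattening the series and patch axes into a single token axis and running one attention pass over it, so a single attention matrix mixes both inter-series and intra-series information. This is a bare single-head sketch with caller-supplied weight matrices, not the UniTST architecture.

```python
import numpy as np

def unified_patch_attention(X, W_q, W_k, W_v):
    """One attention pass over flattened patch tokens.

    X: (n_series, n_patches, d) patch embeddings of a multivariate
    series. Flattening both axes into one token axis lets a single
    attention matrix capture inter-series and intra-series
    dependencies at once. (Sketch only; the real model adds multiple
    heads, layers, and projections.)
    """
    n_series, n_patches, d = X.shape
    tokens = X.reshape(n_series * n_patches, d)    # flatten both axes
    Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    out = weights @ V
    return out.reshape(n_series, n_patches, -1)    # restore the axes
```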
arXiv Detail & Related papers (2024-06-07T14:39:28Z)
- UniFS: Universal Few-shot Instance Perception with Point Representations [36.943019984075065]
We propose UniFS, a universal few-shot instance perception model that unifies a wide range of instance perception tasks.
Our approach makes minimal assumptions about the tasks, yet it achieves competitive results compared to highly specialized and well-optimized specialist models.
arXiv Detail & Related papers (2024-04-30T09:47:44Z)
- Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- Modeling Multi-Label Action Dependencies for Temporal Action Localization [53.53490517832068]
Real-world videos contain many complex actions with inherent relationships between action classes.
We propose an attention-based architecture that models these action relationships for the task of temporal action localization in untrimmed videos.
We show improved performance over state-of-the-art methods on multi-label action localization benchmarks.
arXiv Detail & Related papers (2021-03-04T13:37:28Z)
- Model-Invariant State Abstractions for Model-Based Reinforcement Learning [54.616645151708994]
We introduce a new type of state abstraction called model-invariance.
This allows for generalization to novel combinations of unseen values of state variables.
We prove that an optimal policy can be learned over this model-invariance state abstraction.
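A toy rendering of the grouping such an abstraction induces: states are merged when a deterministic, illustrative model predicts the same successor for every action. The paper's definition is over model distributions; the function below and its two-action space are assumptions for illustration only.

```python
def model_invariant_abstraction(states, step):
    """Group states a (toy, deterministic) model cannot distinguish.

    Two states map to the same abstract state when `step(s, a)`
    predicts the same next state for every action in a small action
    set -- a simplified stand-in for model-invariance.
    """
    actions = [0, 1]  # assumed two-action space for the sketch
    groups = {}
    for s in states:
        key = tuple(step(s, a) for a in actions)  # model's predictions
        groups.setdefault(key, []).append(s)
    return list(groups.values())
```

States in the same group can share one abstract state, which is what enables generalization to unseen combinations of state-variable values.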
arXiv Detail & Related papers (2021-02-19T10:37:54Z)
- A Few-Shot Sequential Approach for Object Counting [63.82757025821265]
We introduce a class attention mechanism that sequentially attends to objects in the image and extracts their relevant features.
The proposed technique is trained on point-level annotations and uses a novel loss function that disentangles class-dependent and class-agnostic aspects of the model.
We present our results on a variety of object-counting/detection datasets, including FSOD and MS COCO.
arXiv Detail & Related papers (2020-07-03T18:23:39Z)
- Relating by Contrasting: A Data-efficient Framework for Multimodal Generative Models [86.9292779620645]
We develop a contrastive framework for generative model learning, allowing us to train the model not just by the commonality between modalities, but by the distinction between "related" and "unrelated" multimodal data.
Under our proposed framework, the generative model can accurately identify related samples from unrelated ones, making it possible to make use of the plentiful unlabeled, unpaired multimodal data.
arXiv Detail & Related papers (2020-07-02T15:08:11Z)
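The related-vs-unrelated contrast in the last entry can be sketched with a standard InfoNCE-style objective, in which the related ("positive") multimodal pair must score higher than the unrelated ("negative") pairs; the paper's exact loss may differ from this generic form.

```python
import math

def related_vs_unrelated_loss(pos_score, neg_scores):
    """InfoNCE-style contrast of one related pair against unrelated ones.

    Minimizing this loss pushes the model to score the related pair
    above the unrelated pairs, i.e. to identify related samples.
    (Generic contrastive objective, not the paper's specific loss.)
    """
    denom = math.exp(pos_score) + sum(math.exp(s) for s in neg_scores)
    return -math.log(math.exp(pos_score) / denom)
```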
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.