Learning Representations without Compositional Assumptions
- URL: http://arxiv.org/abs/2305.19726v1
- Date: Wed, 31 May 2023 10:36:10 GMT
- Title: Learning Representations without Compositional Assumptions
- Authors: Tennison Liu, Jeroen Berrevoets, Zhaozhi Qian, Mihaela van der Schaar
- Abstract summary: We propose a data-driven approach that learns feature set dependencies by representing feature sets as graph nodes and their relationships as learnable edges.
We also introduce LEGATO, a novel hierarchical graph autoencoder that learns a smaller, latent graph to aggregate information from multiple views dynamically.
- Score: 79.12273403390311
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper addresses unsupervised representation learning on tabular data
containing multiple views generated by distinct sources of measurement.
Traditional methods, which tackle this problem using the multi-view framework,
are constrained by the predefined assumptions that feature sets share the same
information and that representations should learn globally shared factors.
However, these assumptions do not always hold for real-world tabular datasets
with complex dependencies between feature sets, whose information is localized
and therefore harder to learn. To overcome this limitation, we propose a
data-driven approach that learns feature set dependencies by representing
feature sets as graph nodes and their relationships as learnable edges.
Furthermore, we introduce LEGATO, a novel hierarchical graph autoencoder that
learns a smaller, latent graph to aggregate information from multiple views
dynamically. This approach results in latent graph components that specialize
in capturing localized information from different regions of the input, leading
to superior downstream performance.
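The abstract does not fix an architecture, but the core idea translates naturally into code. Below is a minimal PyTorch sketch, not the authors' implementation: each feature set is encoded as a graph node, inter-set dependencies are learnable edge weights, and a soft-assignment pooling step produces a smaller latent graph from which each view is decoded. All layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LatentGraphAutoencoder(nn.Module):
    """Illustrative sketch: feature sets as graph nodes, learnable edges,
    and a smaller latent graph aggregating information across views."""

    def __init__(self, view_dims, hidden_dim=64, n_latent_nodes=3):
        super().__init__()
        n_views = len(view_dims)
        # One encoder per feature set (view) -> node embedding.
        self.encoders = nn.ModuleList(
            [nn.Linear(d, hidden_dim) for d in view_dims])
        # Learnable edge logits between view nodes: dependencies are
        # learned from data, not assumed up front.
        self.edge_logits = nn.Parameter(torch.zeros(n_views, n_views))
        # Soft assignment of view nodes onto a smaller latent graph.
        self.assign_logits = nn.Parameter(torch.zeros(n_views, n_latent_nodes))
        # One decoder per view, reading from the pooled latent nodes.
        self.decoders = nn.ModuleList(
            [nn.Linear(n_latent_nodes * hidden_dim, d) for d in view_dims])

    def forward(self, views):
        # views: list of (batch, d_v) tensors, one per feature set.
        nodes = torch.stack([enc(v) for enc, v in zip(self.encoders, views)],
                            dim=1)                        # (batch, V, H)
        adj = torch.softmax(self.edge_logits, dim=-1)     # learned edges
        nodes = torch.relu(adj @ nodes)                   # one message-passing step
        assign = torch.softmax(self.assign_logits, dim=0) # (V, K) pooling weights
        latent = assign.transpose(0, 1) @ nodes           # (batch, K, H) latent graph
        flat = latent.flatten(start_dim=1)
        recons = [dec(flat) for dec in self.decoders]
        return recons, latent

# Usage: two views with 10 and 5 features.
model = LatentGraphAutoencoder([10, 5])
x = [torch.randn(32, 10), torch.randn(32, 5)]
recons, latent = model(x)
loss = sum(nn.functional.mse_loss(r, v) for r, v in zip(recons, x))
```

Training the reconstruction loss end to end lets the edge and assignment parameters discover which feature sets share information, which is the data-driven alternative to a fixed compositional assumption.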
Related papers
- Discriminative Anchor Learning for Efficient Multi-view Clustering [59.11406089896875]
We propose discriminative anchor learning for multi-view clustering (DALMC).
We learn discriminative view-specific feature representations from the original dataset.
We build anchors from the different views based on these representations, which improves the quality of the shared anchor graph.
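As a rough illustration of the anchor-graph idea (not DALMC's actual objective), the sketch below projects each view into a shared space, keeps a small set of learnable anchors, and averages per-view sample-to-anchor similarities into a shared anchor graph; the similarity and normalization choices are assumptions.

```python
import torch
import torch.nn as nn

class AnchorGraph(nn.Module):
    """Sketch of anchor-based multi-view linkage: each view is projected
    into a shared space and connected to m learnable anchors."""

    def __init__(self, view_dims, proj_dim=32, n_anchors=8):
        super().__init__()
        self.projections = nn.ModuleList(
            [nn.Linear(d, proj_dim) for d in view_dims])  # view-specific features
        self.anchors = nn.Parameter(torch.randn(n_anchors, proj_dim))

    def forward(self, views):
        # Similarity of every sample to every anchor, per view, averaged
        # into a shared (n_samples, n_anchors) anchor graph.
        graphs = []
        for proj, v in zip(self.projections, views):
            z = nn.functional.normalize(proj(v), dim=-1)
            a = nn.functional.normalize(self.anchors, dim=-1)
            graphs.append(torch.softmax(z @ a.T, dim=-1))
        return torch.stack(graphs).mean(dim=0)

graph = AnchorGraph([10, 5])([torch.randn(32, 10), torch.randn(32, 5)])
```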
arXiv Detail & Related papers (2024-09-25T13:11:17Z)
- Bridging Local Details and Global Context in Text-Attributed Graphs [62.522550655068336]
GraphBridge is a framework that bridges local and global perspectives by leveraging contextual textual information.
Our method achieves state-of-the-art performance, while our graph-aware token reduction module significantly enhances efficiency and solves scalability issues.
arXiv Detail & Related papers (2024-06-18T13:35:25Z)
- Hierarchical Aggregations for High-Dimensional Multiplex Graph Embedding [7.271256448682229]
HMGE is a novel embedding method based on hierarchical aggregation for high-dimensional multiplex graphs.
We leverage mutual information between local patches and global summaries to train the model without supervision.
Detailed experiments on synthetic and real-world data illustrate the suitability of our approach to downstream supervised tasks.
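The local-global mutual-information objective is reminiscent of Deep Graph Infomax; under that assumption, a compact sketch with a bilinear discriminator that scores (patch, summary) pairs against shuffled negatives:

```python
import torch
import torch.nn as nn

def local_global_mi_loss(patches, summary, scorer):
    """patches: (n, h) local node embeddings; summary: (h,) global readout.
    Positive pairs use the true patches, negatives use a row shuffle."""
    neg = patches[torch.randperm(patches.size(0))]           # corrupted patches
    pos_score = scorer(patches, summary.expand_as(patches))  # (n, 1)
    neg_score = scorer(neg, summary.expand_as(neg))
    labels = torch.cat([torch.ones_like(pos_score),
                        torch.zeros_like(neg_score)])
    return nn.functional.binary_cross_entropy_with_logits(
        torch.cat([pos_score, neg_score]), labels)

scorer = nn.Bilinear(64, 64, 1)    # bilinear discriminator
patches = torch.randn(100, 64)
summary = patches.mean(dim=0)      # simple mean readout
loss = local_global_mi_loss(patches, summary, scorer)
```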
arXiv Detail & Related papers (2023-12-28T05:39:33Z)
- End-to-End Learning on Multimodal Knowledge Graphs [0.0]
We propose a multimodal message passing network which learns end-to-end from the structure of graphs.
Our model uses dedicated (neural) encoders to naturally learn embeddings for node features belonging to five different types of modalities.
Our results indicate that end-to-end multimodal learning from any arbitrary knowledge graph is indeed possible.
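Schematically, per-modality encoders produce node features that a shared graph layer then propagates. The sketch below uses two placeholder modalities (numeric and text) rather than the paper's five, and the encoder choices are illustrative:

```python
import torch
import torch.nn as nn

class MultimodalNodeEncoder(nn.Module):
    """Sketch: dedicated encoders per modality produce node features that a
    shared graph layer then propagates. Encoders are placeholders."""

    def __init__(self, hidden=32):
        super().__init__()
        self.encoders = nn.ModuleDict({
            "numeric": nn.Linear(4, hidden),        # numeric literals
            "text": nn.EmbeddingBag(1000, hidden),  # bag of token ids
        })
        self.propagate = nn.Linear(hidden, hidden)

    def forward(self, numeric_feats, text_tokens, adj):
        # Encode each modality, sum per node, then one message-passing step.
        h = self.encoders["numeric"](numeric_feats) \
            + self.encoders["text"](text_tokens)
        return torch.relu(adj @ self.propagate(h))

n = 6
model = MultimodalNodeEncoder()
numeric = torch.randn(n, 4)
tokens = torch.randint(0, 1000, (n, 5))  # 5 token ids per node
adj = torch.eye(n)                       # placeholder adjacency
out = model(numeric, tokens, adj)
```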
arXiv Detail & Related papers (2023-09-03T13:16:18Z)
- Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
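The paper's graph-based formulation is not reproduced here, but the contrastive component can be grounded with a generic cross-view InfoNCE loss, where aligned rows of the two view embeddings are positives and all other rows are negatives:

```python
import torch
import torch.nn.functional as F

def cross_view_infonce(z1, z2, temperature=0.5):
    """Generic InfoNCE across two views: row i of z1 and row i of z2
    form a positive pair; every other row is a negative."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature      # (n, n) similarity matrix
    targets = torch.arange(z1.size(0))    # i-th row matches i-th column
    return F.cross_entropy(logits, targets)

loss = cross_view_infonce(torch.randn(16, 64), torch.randn(16, 64))
```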
arXiv Detail & Related papers (2022-11-08T09:19:32Z)
- ACTIVE: Augmentation-Free Graph Contrastive Learning for Partial Multi-View Clustering [52.491074276133325]
We propose an augmentation-free graph contrastive learning framework to solve the problem of partial multi-view clustering.
The proposed approach elevates instance-level contrastive learning and missing data inference to the cluster level, effectively mitigating the impact of individual missing data on clustering.
arXiv Detail & Related papers (2022-03-01T02:32:25Z)
- SubTab: Subsetting Features of Tabular Data for Self-Supervised Representation Learning [5.5616364225463055]
We introduce a new framework, Subsetting features of Tabular data (SubTab).
We argue that reconstructing the data from the subset of its features rather than its corrupted version in an autoencoder setting can better capture its underlying representation.
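The subset-reconstruction idea is concrete enough to sketch: encode only a subset of the columns and train the decoder to reconstruct the full feature vector. Sizes and the subset choice below are illustrative:

```python
import torch
import torch.nn as nn

class SubsetAutoencoder(nn.Module):
    """Sketch of the SubTab idea: encode a subset of the columns and
    reconstruct the full feature vector from it."""

    def __init__(self, n_features, subset_idx, hidden=32):
        super().__init__()
        self.subset_idx = subset_idx                  # columns in this subset
        self.encoder = nn.Linear(len(subset_idx), hidden)
        self.decoder = nn.Linear(hidden, n_features)  # reconstruct ALL features

    def forward(self, x):
        z = torch.relu(self.encoder(x[:, self.subset_idx]))
        return self.decoder(z), z

x = torch.randn(64, 10)
model = SubsetAutoencoder(10, subset_idx=torch.arange(0, 5))  # first half only
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)  # full row from the subset alone
```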
arXiv Detail & Related papers (2021-10-08T20:11:09Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
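A compact sketch of the shared-body / local-head split described above; the alternating schedule (several cheap head updates, then one representation update) is a simplification of the actual algorithm, and all sizes are illustrative:

```python
import torch
import torch.nn as nn

# Shared low-dimensional representation (synchronized across clients)
# and a small personal head per client.
shared_body = nn.Linear(20, 5)
client_heads = [nn.Linear(5, 1) for _ in range(3)]

def client_round(body, head, x, y, head_steps=5):
    """One client round: many cheap local head updates, then a single
    gradient pass through the shared representation."""
    head_opt = torch.optim.SGD(head.parameters(), lr=0.1)
    for _ in range(head_steps):                    # local-only updates
        head_opt.zero_grad()
        nn.functional.mse_loss(head(body(x)), y).backward()
        head_opt.step()
    body_opt = torch.optim.SGD(body.parameters(), lr=0.01)
    body_opt.zero_grad()
    nn.functional.mse_loss(head(body(x)), y).backward()
    body_opt.step()                                # representation update

for head in client_heads:                          # one communication round
    client_round(shared_body, head, torch.randn(8, 20), torch.randn(8, 1))
```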
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- End-to-End Entity Classification on Multimodal Knowledge Graphs [0.0]
We propose a multimodal message passing network which learns end-to-end from the structure of graphs.
Our model uses dedicated (neural) encoders to naturally learn embeddings for node features belonging to five different types of modalities.
Our result supports our hypothesis that including information from multiple modalities can help our models obtain a better overall performance.
arXiv Detail & Related papers (2020-03-25T14:57:52Z)
- Learning Robust Representations via Multi-View Information Bottleneck [41.65544605954621]
The original formulation requires labeled data to identify superfluous information.
We extend this ability to the multi-view unsupervised setting, where two views of the same underlying entity are provided but the label is unknown.
A theoretical analysis leads to the definition of a new multi-view model that produces state-of-the-art results on the Sketchy dataset and label-limited versions of the MIR-Flickr dataset.
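Following the multi-view information-bottleneck recipe, a simplified loss sketch: the two Gaussian posteriors produced from the two views are pushed to agree via a symmetrized KL penalty, discarding view-specific (superfluous) information; the mutual-information term that completes the objective is noted but omitted here:

```python
import torch
import torch.distributions as dist

def mib_loss(mu1, std1, mu2, std2, beta=1.0):
    """Simplified multi-view IB objective: encourage the Gaussian
    posteriors q(z|v1), q(z|v2) to agree, discarding view-specific
    (superfluous) information, via a symmetrized KL penalty."""
    q1 = dist.Normal(mu1, std1)
    q2 = dist.Normal(mu2, std2)
    skl = 0.5 * (dist.kl_divergence(q1, q2) + dist.kl_divergence(q2, q1))
    # The full method pairs this with a mutual-information term I(z1; z2),
    # estimated e.g. with a critic; omitted for brevity.
    return beta * skl.sum(dim=-1).mean()

mu1, mu2 = torch.randn(16, 8), torch.randn(16, 8)
std1, std2 = torch.ones(16, 8), torch.ones(16, 8)
loss = mib_loss(mu1, std1, mu2, std2)
```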
arXiv Detail & Related papers (2020-02-17T16:01:52Z)