Transitivity Recovering Decompositions: Interpretable and Robust
Fine-Grained Relationships
- URL: http://arxiv.org/abs/2310.15999v1
- Date: Tue, 24 Oct 2023 16:48:56 GMT
- Title: Transitivity Recovering Decompositions: Interpretable and Robust
Fine-Grained Relationships
- Authors: Abhra Chaudhuri, Massimiliano Mancini, Zeynep Akata, Anjan Dutta
- Abstract summary: Transitivity Recovering Decompositions (TRD) is a graph-space search algorithm that identifies interpretable equivalents of abstract emergent relationships.
We show that TRD is provably robust to noisy views, with empirical evidence also supporting this finding.
- Score: 69.04014445666142
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in fine-grained representation learning leverage
local-to-global (emergent) relationships for achieving state-of-the-art
results. The relational representations relied upon by such methods, however,
are abstract. We aim to deconstruct this abstraction by expressing them as
interpretable graphs over image views. We begin by theoretically showing that
abstract relational representations are nothing but a way of recovering
transitive relationships among local views. Based on this, we design
Transitivity Recovering Decompositions (TRD), a graph-space search algorithm
that identifies interpretable equivalents of abstract emergent relationships at
both instance and class levels, and with no post-hoc computations. We
additionally show that TRD is provably robust to noisy views, with empirical
evidence also supporting this finding. This robustness allows TRD to perform on par
with, or even better than, the state-of-the-art, while being fully interpretable.
Implementation is available at https://github.com/abhrac/trd.
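The core idea — that emergent relational representations amount to recovering transitive relationships among local views — can be illustrated with a minimal, hypothetical sketch. The view graph, affinity edges, and closure routine below are illustrative assumptions for exposition, not the authors' implementation (see the linked repository for that):

```python
# Hypothetical sketch: recovering transitive relationships among local views.
# Nodes stand for local image views; edges for directly related view pairs.

def transitive_closure(n, edges):
    """Floyd-Warshall style boolean closure over n view nodes."""
    reach = [[False] * n for _ in range(n)]
    for i in range(n):
        reach[i][i] = True
    for i, j in edges:
        reach[i][j] = reach[j][i] = True  # undirected view graph
    # Relate i and j whenever some intermediate view k links them.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = True
    return reach

# Four local views; direct edges 0-1 and 1-2 imply an emergent relation 0-2.
closure = transitive_closure(4, [(0, 1), (1, 2)])
print(closure[0][2])  # True: views 0 and 2 are transitively related
print(closure[0][3])  # False: view 3 shares no relation with view 0
```

The recovered closure is itself a graph over views, which is what makes the relationships inspectable rather than abstract.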
Related papers
- Entity or Relation Embeddings? An Analysis of Encoding Strategies for Relation Extraction [19.019881161010474] (2023-12-18)
  Relation extraction is essentially a text classification problem, which can be tackled by fine-tuning a pre-trained language model (LM).
  Existing approaches therefore solve the problem in an indirect way: they fine-tune an LM to learn embeddings of the head and tail entities, and then predict the relationship from these entity embeddings.
  Our hypothesis in this paper is that relation extraction models can be improved by capturing relationships in a more direct way.
- Explainable Representations for Relation Prediction in Knowledge Graphs [0.0] (2023-06-22)
  We propose SEEK, a novel approach for explainable representations to support relation prediction in knowledge graphs.
  It is based on identifying relevant shared semantic aspects between entities and learning representations for each subgraph.
  We evaluate SEEK on two real-world relation prediction tasks: protein-protein interaction prediction and gene-disease association prediction.
- Unsupervised Learning of Structured Representations via Closed-Loop Transcription [21.78655495464155] (2022-10-30)
  This paper proposes an unsupervised method for learning a unified representation that serves both discriminative and generative purposes.
  We show that a unified representation can enjoy the mutual benefits of having both.
  These structured representations enable classification close to state-of-the-art unsupervised discriminative representations.
- Unifying Graph Contrastive Learning with Flexible Contextual Scopes [57.86762576319638] (2022-10-17)
  We present a self-supervised learning method termed Unifying Graph Contrastive Learning with Flexible Contextual Scopes (UGCL for short).
  Our algorithm builds flexible contextual representations with contextual scopes by controlling the power of an adjacency matrix.
  Based on representations from both local and contextual scopes, UGCL optimises a very simple contrastive loss function for graph representation learning.
- Sparse Relational Reasoning with Object-Centric Representations [78.83747601814669] (2022-07-15)
  We investigate the composability of soft-rules learned by relational neural architectures when operating over object-centric representations.
  We find that increasing sparsity, especially on features, improves the performance of some models and leads to simpler relations.
- On Neural Architecture Inductive Biases for Relational Tasks [76.18938462270503] (2022-06-09)
  We introduce a simple architecture based on similarity-distribution scores, which we name Compositional Relational Network (CoRelNet).
  We find that simple architectural choices can outperform existing models in out-of-distribution generalization.
- Robust Contrastive Learning against Noisy Views [79.71880076439297] (2022-01-12)
  We propose a new contrastive loss function that is robust against noisy views.
  We show that our approach provides consistent improvements over state-of-the-art image, video, and graph contrastive learning benchmarks.
- Self-Supervised Learning Disentangled Group Representation as Feature [82.07737719232972] (2021-10-28)
  We show that existing Self-Supervised Learning (SSL) only disentangles simple augmentation features such as rotation and colorization.
  We propose an iterative SSL algorithm: Iterative Partition-based Invariant Risk Minimization (IP-IRM).
  We prove that IP-IRM converges to a fully disentangled representation and show its effectiveness on various benchmarks.
- Logic-guided Semantic Representation Learning for Zero-Shot Relation Classification [31.887770824130957] (2020-10-30)
  We propose a novel logic-guided semantic representation learning model for zero-shot relation classification.
  Our approach builds connections between seen and unseen relations via implicit and explicit semantic representations, using knowledge graph embeddings and logic rules.
- Explanation-based Weakly-supervised Learning of Visual Relations with Graph Networks [7.199745314783952] (2020-06-16)
  This paper introduces a novel weakly-supervised method for visual relationship detection that relies on minimal image-level predicate labels.
  A graph neural network is trained to classify predicates in images from a graph representation of detected objects, implicitly encoding an inductive bias for pairwise relations.
  We present results comparable to recent fully- and weakly-supervised methods on three diverse and challenging datasets.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.