On the Generalization of Learned Structured Representations
- URL: http://arxiv.org/abs/2304.13001v1
- Date: Tue, 25 Apr 2023 17:14:36 GMT
- Title: On the Generalization of Learned Structured Representations
- Authors: Andrea Dittadi
- Abstract summary: We study methods that learn, with little or no supervision, representations of unstructured data that capture its hidden structure.
The second part of this thesis focuses on object-centric representations, which capture the compositional structure of the input in terms of symbol-like entities.
- Score: 5.1398743023989555
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite tremendous progress over the past decade, deep learning methods
generally fall short of human-level systematic generalization. It has been
argued that explicitly capturing the underlying structure of data should allow
connectionist systems to generalize in a more predictable and systematic
manner. Indeed, evidence in humans suggests that interpreting the world in
terms of symbol-like compositional entities may be crucial for intelligent
behavior and high-level reasoning. Another common limitation of deep learning
systems is that they require large amounts of training data, which can be
expensive to obtain. In representation learning, large datasets are leveraged
to learn generic data representations that may be useful for efficient learning
of arbitrary downstream tasks.
This thesis is about structured representation learning. We study methods
that learn, with little or no supervision, representations of unstructured data
that capture its hidden structure. In the first part of the thesis, we focus on
representations that disentangle the explanatory factors of variation of the
data. We scale up disentangled representation learning to a novel robotic
dataset, and perform a systematic large-scale study on the role of pretrained
representations for out-of-distribution generalization in downstream robotic
tasks. The second part of this thesis focuses on object-centric
representations, which capture the compositional structure of the input in
terms of symbol-like entities, such as objects in visual scenes. Object-centric
learning methods learn to form meaningful entities from unstructured input,
enabling symbolic information processing on a connectionist substrate. In this
study, we train a selection of methods on several common datasets, and
investigate their usefulness for downstream tasks and their ability to
generalize out of distribution.
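As a concrete illustration of the object-centric learning idea described above, the sketch below implements a simplified, slot-attention-style grouping step in NumPy: a fixed set of slot vectors competes over input features (softmax across slots), and each slot is updated to a weighted mean of the inputs it wins. This is a minimal sketch for intuition only; the function name and the stripped-down update rule are assumptions of this illustration, not the exact methods studied in the thesis (which use learned, gated updates and image encoders).

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention_sketch(inputs, n_slots=3, n_iters=3, seed=0):
    """Simplified slot-attention-style grouping.

    inputs: (n_inputs, d) array of feature vectors.
    Returns (n_slots, d) slot vectors, each a soft "object" summary.
    """
    rng = np.random.default_rng(seed)
    n, d = inputs.shape
    slots = rng.normal(size=(n_slots, d))
    for _ in range(n_iters):
        # Slots compete for inputs: softmax is taken over the SLOT axis,
        # so each input distributes its attention across slots.
        logits = slots @ inputs.T / np.sqrt(d)          # (n_slots, n)
        attn = softmax(logits, axis=0)
        # Renormalize per slot, then update each slot to a weighted
        # mean of the inputs assigned to it.
        attn = attn / attn.sum(axis=1, keepdims=True)
        slots = attn @ inputs                           # (n_slots, d)
    return slots
```

Because each slot ends up as a convex combination of the inputs, the slots stay inside the data's coordinate range; with well-separated feature clusters the competition drives different slots toward different groups, which is the symbolic-entity behavior the abstract refers to.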
Related papers
- Exploiting Contextual Uncertainty of Visual Data for Efficient Training of Deep Models [0.65268245109828]
We introduce the notion of contextual diversity for active learning (CDAL).
We propose a data repair algorithm to curate contextually fair data to reduce model bias.
We are developing an image retrieval system for wildlife camera trap images and a reliable warning system for poor-quality rural roads.
arXiv Detail & Related papers (2024-11-04T09:43:33Z)
- Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
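The system-identification task summarized here (learning a predictive model of a dynamic system from input-output data) has a classical linear baseline that fits in a few lines: least-squares estimation of an ARX model. The sketch below is an illustrative assumption of this listing, not a method from the surveyed paper; the function name and the chosen model orders are hypothetical.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX model
    y[t] = a1*y[t-1] + ... + a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb].

    u, y: 1-D input and output sequences of equal length.
    Returns the coefficient vector [a1..a_na, b1..b_nb].
    """
    n = max(na, nb)
    rows, targets = [], []
    for t in range(n, len(y)):
        # Regressor: past outputs followed by past inputs.
        row = [y[t - i] for i in range(1, na + 1)] + \
              [u[t - i] for i in range(1, nb + 1)]
        rows.append(row)
        targets.append(y[t])
    phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(phi, np.array(targets), rcond=None)
    return theta
```

On noise-free data generated by a true ARX system, the least-squares solution recovers the generating coefficients exactly, which is the "predict new data from previous observations" objective in its simplest form.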
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Relate to Predict: Towards Task-Independent Knowledge Representations for Reinforcement Learning [11.245432408899092]
Reinforcement Learning can enable agents to learn complex tasks.
It is difficult to interpret the knowledge and reuse it across tasks.
In this paper, we introduce an inductive bias for explicit object-centered knowledge separation.
We show that the degree of explicitness in knowledge separation correlates with faster learning, better accuracy, better generalization, and better interpretability.
arXiv Detail & Related papers (2022-12-10T13:33:56Z)
- Self-Supervised Visual Representation Learning with Semantic Grouping [50.14703605659837]
We tackle the problem of learning visual representations from unlabeled scene-centric data.
We propose contrastive learning from data-driven semantic slots, namely SlotCon, for joint semantic grouping and representation learning.
arXiv Detail & Related papers (2022-05-30T17:50:59Z)
- Object Pursuit: Building a Space of Objects via Discriminative Weight Generation [23.85039747700698]
We propose a framework to continuously learn object-centric representations for visual learning and understanding.
We leverage interactions to sample diverse variations of an object and the corresponding training signals while learning the object-centric representations.
We perform an extensive study of the key features of the proposed framework and analyze the characteristics of the learned representations.
arXiv Detail & Related papers (2021-12-15T08:25:30Z)
- Generalization and Robustness Implications in Object-Centric Learning [23.021791024676986]
In this paper, we train state-of-the-art unsupervised models on five common multi-object datasets.
From our experimental study, we find object-centric representations to be generally useful for downstream tasks.
arXiv Detail & Related papers (2021-07-01T17:51:11Z)
- Prototypical Representation Learning for Relation Extraction [56.501332067073065]
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models.
arXiv Detail & Related papers (2021-03-22T08:11:43Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By embedding samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
- Laplacian Denoising Autoencoder [114.21219514831343]
We propose to learn data representations with a novel type of denoising autoencoder.
The noisy input data is generated by corrupting latent clean data in the gradient domain.
Experiments on several visual benchmarks demonstrate that better representations can be learned with the proposed approach.
arXiv Detail & Related papers (2020-03-30T16:52:39Z)
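The corruption scheme summarized in the entry above (perturbing clean data in the gradient domain rather than adding noise to it directly) can be illustrated in one dimension: differentiate the signal, add noise to the differences, and integrate back from the original boundary value. This is a minimal 1-D sketch under my own simplifying assumptions; the paper itself operates on images, and the function name is hypothetical.

```python
import numpy as np

def corrupt_in_gradient_domain(signal, noise_std=0.1, seed=0):
    """Illustrative gradient-domain corruption of a 1-D signal.

    Rather than adding i.i.d. noise to the samples themselves, perturb
    the finite differences and integrate back, so the corruption acts
    on the signal's local structure (its 'gradients').
    """
    rng = np.random.default_rng(seed)
    grad = np.diff(signal)                                   # forward differences
    noisy_grad = grad + rng.normal(0.0, noise_std, size=grad.shape)
    # Integrate back, keeping the first sample as the boundary condition.
    return np.concatenate([[signal[0]], signal[0] + np.cumsum(noisy_grad)])
```

With `noise_std=0` the round trip (difference, then cumulative sum from the boundary value) reproduces the input exactly; a denoising autoencoder would then be trained to map the corrupted signal back to the clean one.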
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.