Contextualising Implicit Representations for Semantic Tasks
- URL: http://arxiv.org/abs/2305.13312v1
- Date: Mon, 22 May 2023 17:59:58 GMT
- Title: Contextualising Implicit Representations for Semantic Tasks
- Authors: Theo W. Costain, Kejie Li, Victor A. Prisacariu
- Abstract summary: Prior works have demonstrated that implicit representations trained only for reconstruction tasks typically generate encodings not useful for semantic tasks.
We propose a method that contextualises the encodings of implicit representations, enabling their use in downstream tasks.
- Score: 5.453372578880444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prior works have demonstrated that implicit representations trained only for reconstruction tasks typically generate encodings that are not useful for semantic tasks. In this work, we propose a method that contextualises the encodings of implicit representations, enabling their use in downstream tasks (e.g. semantic segmentation), without requiring access to the original training data or encoding network. Using an implicit representation trained for a reconstruction task alone, our contextualising module takes an encoding trained for reconstruction only and reveals meaningful semantic information that is hidden in the encodings, without compromising the reconstruction performance. With our proposed module, it becomes possible to pre-train implicit representations on larger datasets, improving their reconstruction performance compared to training on only a smaller labelled dataset, whilst maintaining their segmentation performance on the labelled dataset. Importantly, our method allows for future foundation implicit representation models to be fine-tuned on unseen tasks, regardless of encoder or dataset availability.
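The abstract describes a small contextualising module that sits on top of a frozen, reconstruction-only encoder and exposes semantic information to a downstream head. Below is a minimal PyTorch sketch of that setup, assuming per-point latent encodings; the module architecture, the dimensions, and the names `ContextualisingModule` and `seg_head` are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class ContextualisingModule(nn.Module):
    """Hypothetical module: maps reconstruction-only encodings to
    semantically useful features without touching the frozen encoder."""
    def __init__(self, enc_dim=256, hidden_dim=256, num_heads=4):
        super().__init__()
        # Self-attention lets each encoding attend to the others, i.e. each
        # point's code is "contextualised" by the rest of the shape/scene.
        self.attn = nn.MultiheadAttention(enc_dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(enc_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, enc_dim),
        )

    def forward(self, z):                      # z: (B, N, enc_dim)
        ctx, _ = self.attn(z, z, z)
        return z + self.ffn(z + ctx)           # residual keeps reconstruction info

# Frozen, pre-trained reconstruction encoder (stand-in; real weights not shown).
encoder = nn.Linear(3, 256).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

contextualiser = ContextualisingModule()
seg_head = nn.Linear(256, 21)                  # 21 = assumed number of classes

points = torch.rand(2, 1024, 3)                # (B, N, xyz)
with torch.no_grad():
    z = encoder(points)                        # reconstruction-only encodings
logits = seg_head(contextualiser(z))           # (B, N, num_classes)
```

In this arrangement only the contextualising module and the segmentation head would be trained on the labelled data, so the frozen encoder's reconstruction behaviour is left untouched.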
Related papers
- Conjunct Resolution in the Face of Verbal Omissions [51.220650412095665]
We propose a conjunct resolution task that operates directly on the text and makes use of a split-and-rephrase paradigm in order to recover the missing elements in the coordination structure.
We curate a large dataset, containing over 10K examples of naturally-occurring verbal omissions with crowd-sourced annotations.
We train various neural baselines for this task, and show that while our best method obtains decent performance, it leaves ample room for improvement.
arXiv Detail & Related papers (2023-05-26T08:44:02Z)
- Class Enhancement Losses with Pseudo Labels for Zero-shot Semantic Segmentation [40.09476732999614]
Mask proposal models have significantly improved the performance of zero-shot semantic segmentation.
The use of a 'background' embedding during training in these methods is problematic, as the resulting model tends to over-learn and assign all unseen classes to the background class instead of their correct labels.
This paper proposes novel class enhancement losses to bypass the use of the background embedding during training, and simultaneously exploit the semantic relationship between text embeddings and mask proposals by ranking their similarity scores.
arXiv Detail & Related papers (2023-01-18T06:55:02Z)
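As a toy illustration of the similarity-ranking idea in the entry above: class scores for each mask proposal come purely from cosine similarity to class text embeddings (no learned background embedding), and unannotated proposals receive pseudo labels by ranking their similarities over the unseen classes. The tensors, dimensions, and loss weighting below are assumptions for illustration only; the paper's actual class enhancement losses differ in detail.

```python
import torch
import torch.nn.functional as F

def similarity_logits(mask_feats, text_embeds, temperature=0.07):
    """Cosine similarities between mask-proposal features and class text
    embeddings; note there is no background embedding."""
    m = F.normalize(mask_feats, dim=-1)           # (P, D)
    t = F.normalize(text_embeds, dim=-1)          # (C, D)
    return m @ t.t() / temperature                # (P, C)

# Toy tensors; all shapes and counts are illustrative.
mask_feats  = torch.randn(8, 512)                 # 8 mask proposals
seen_text   = torch.randn(15, 512)                # 15 seen-class text embeddings
unseen_text = torch.randn(5, 512)                 # 5 unseen-class text embeddings
seen_labels = torch.randint(0, 15, (4,))          # labels for the annotated proposals

# Annotated proposals: standard cross-entropy against seen classes only.
loss_seen = F.cross_entropy(similarity_logits(mask_feats[:4], seen_text), seen_labels)

# Unannotated proposals: rank similarities over unseen classes and take the
# top-ranked class as a pseudo label.
logits_unseen = similarity_logits(mask_feats[4:], unseen_text)
pseudo_labels = logits_unseen.argmax(dim=-1)
loss_pseudo = F.cross_entropy(logits_unseen, pseudo_labels.detach())

loss = loss_seen + 0.5 * loss_pseudo              # assumed weighting
```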
- Self-Supervised Learning via Maximum Entropy Coding [57.56570417545023]
We propose Maximum Entropy Coding (MEC) as a principled objective that explicitly optimizes the structure of the representation.
MEC learns a more generalizable representation than previous methods based on specific pretext tasks.
It achieves state-of-the-art performance consistently on various downstream tasks, including not only ImageNet linear probe, but also semi-supervised classification, object detection, instance segmentation, and object tracking.
arXiv Detail & Related papers (2022-10-20T17:58:30Z)
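A sketch in the spirit of the Maximum Entropy Coding objective named above: the coding length of the representation is estimated through a truncated Taylor expansion of the log-determinant of the cross-view correlation, and maximised. The scale `lam`, the expansion order, and the toy feature dimensions are placeholders, not the paper's tuned values.

```python
import torch
import torch.nn.functional as F

def mec_style_loss(z1, z2, lam=1.0, order=4):
    """Sketch of an MEC-style objective: maximise a truncated Taylor
    approximation of Tr(log(I + lam * Z1 Z2^T)), a coding-length estimate
    of the representation (returned negated so it can be minimised).
    lam stands in for the distortion-dependent scale used in the paper."""
    z1 = F.normalize(z1, dim=-1)                 # (m, d) view-1 features
    z2 = F.normalize(z2, dim=-1)                 # (m, d) view-2 features
    c = lam * (z1 @ z2.t())                      # (m, m) cross-view similarity
    power = c
    coding_len = torch.trace(power)
    for k in range(2, order + 1):                # Tr(log(I+C)) ~ sum (-1)^{k+1} Tr(C^k)/k
        power = power @ c
        coding_len = coding_len + ((-1) ** (k + 1)) / k * torch.trace(power)
    return -coding_len

# Toy usage with random features from two augmented views of the same batch.
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
loss = mec_style_loss(z1, z2)
```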
- Sentence Representation Learning with Generative Objective rather than Contrastive Objective [86.01683892956144]
We propose a novel generative self-supervised learning objective based on phrase reconstruction.
Our generative objective delivers strong performance improvements and outperforms the current state-of-the-art contrastive methods.
arXiv Detail & Related papers (2022-10-16T07:47:46Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
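A minimal sketch of the fixed-point training trick described in the entry above: run the iterative refinement to an approximate fixed point without tracking gradients, then take a single differentiable step there. This one-step scheme is a common first-order stand-in for full implicit differentiation; the paper's exact estimator may differ, and `RefinementCell` is an assumed stand-in update network.

```python
import torch
import torch.nn as nn

class RefinementCell(nn.Module):
    """Stand-in iterative update z <- f(z, x), here a single GRU-style step."""
    def __init__(self, dim=64):
        super().__init__()
        self.update = nn.GRUCell(dim, dim)

    def forward(self, z, x):
        return self.update(x, z)

def fixed_point_forward(cell, x, n_iters=30):
    z = torch.zeros(x.size(0), 64, device=x.device)
    # Iterate to an approximate fixed point without building a deep graph.
    with torch.no_grad():
        for _ in range(n_iters):
            z = cell(z, x)
    # One differentiable step at the fixed point: gradients flow through this
    # step only (first-order approximation to implicit differentiation).
    return cell(z, x)

cell = RefinementCell()
x = torch.randn(8, 64)
z_star = fixed_point_forward(cell, x)
loss = z_star.pow(2).mean()
loss.backward()                      # memory cost independent of n_iters
```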
- Self-Supervised Visual Representation Learning with Semantic Grouping [50.14703605659837]
We tackle the problem of learning visual representations from unlabeled scene-centric data.
We propose contrastive learning from data-driven semantic slots, namely SlotCon, for joint semantic grouping and representation learning.
arXiv Detail & Related papers (2022-05-30T17:50:59Z)
- DirectProbe: Studying Representations without Classifiers [21.23284793831221]
DirectProbe studies the geometry of a representation by building upon the notion of a version space for a task.
Experiments with several linguistic tasks and contextualized embeddings show that, even without training classifiers, DirectProbe can shed light on how an embedding space represents labels.
arXiv Detail & Related papers (2021-04-13T02:40:26Z)
- Towards Generalising Neural Implicit Representations [15.728196666021665]
We argue that training neural representations for reconstruction tasks alongside conventional tasks can produce more general encodings.
Our approach learns feature rich encodings that produce high quality results for each task.
We also reformulate the segmentation task, creating a more representative challenge for implicit representation contexts.
arXiv Detail & Related papers (2021-01-29T17:20:22Z)
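A minimal sketch of the joint training idea in the entry above, assuming an occupancy-style implicit decoder: a single latent code conditions a shared point-wise trunk with two heads, one for reconstruction and one for per-point semantic logits, optimised with a weighted sum of losses. The layer sizes, occupancy formulation, and loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskImplicitDecoder(nn.Module):
    """Shared implicit decoder with reconstruction and segmentation heads."""
    def __init__(self, latent_dim=128, hidden=256, num_classes=8):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.occ_head = nn.Linear(hidden, 1)            # occupancy logit
        self.sem_head = nn.Linear(hidden, num_classes)  # per-point class logits

    def forward(self, latent, xyz):                     # latent: (B, L), xyz: (B, N, 3)
        h = self.trunk(torch.cat(
            [latent.unsqueeze(1).expand(-1, xyz.size(1), -1), xyz], dim=-1))
        return self.occ_head(h).squeeze(-1), self.sem_head(h)

decoder = MultiTaskImplicitDecoder()
latent = torch.randn(4, 128)                            # per-shape encodings
xyz = torch.rand(4, 512, 3)                             # query points
occ_gt = torch.randint(0, 2, (4, 512)).float()          # occupancy labels
sem_gt = torch.randint(0, 8, (4, 512))                  # semantic labels

occ_logits, sem_logits = decoder(latent, xyz)
loss = F.binary_cross_entropy_with_logits(occ_logits, occ_gt) \
     + 0.5 * F.cross_entropy(sem_logits.reshape(-1, 8), sem_gt.reshape(-1))
```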
- Predicting What You Already Know Helps: Provable Self-Supervised Learning [60.27658820909876]
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks) without requiring labeled data.
We show a mechanism exploiting the statistical connections between certain reconstruction-based pretext tasks that guarantee learning a good representation.
We prove that the linear layer yields a small approximation error even for complex ground-truth function classes.
arXiv Detail & Related papers (2020-08-03T17:56:13Z)
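A toy version of the recipe in the final entry: train a representation on a reconstruction-based pretext task (here, predicting one half of each input vector from the other half), freeze it, and fit only a linear layer for the downstream labels. The data split, architecture, and training schedule are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

# Pretext: predict the second half of each input vector from the first half.
encoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
pretext_head = nn.Linear(32, 16)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(pretext_head.parameters()), lr=1e-3)

x = torch.randn(512, 32)                      # unlabeled data (toy)
x1, x2 = x[:, :16], x[:, 16:]
for _ in range(100):                          # pretext training
    opt.zero_grad()
    loss = (pretext_head(encoder(x1)) - x2).pow(2).mean()
    loss.backward()
    opt.step()

# Downstream: a single linear layer on the frozen representation.
y = torch.randint(0, 2, (512,))               # downstream labels (toy)
probe = nn.Linear(32, 2)
probe_opt = torch.optim.SGD(probe.parameters(), lr=0.1)
with torch.no_grad():
    feats = encoder(x1)                       # frozen features
for _ in range(100):
    probe_opt.zero_grad()
    nn.functional.cross_entropy(probe(feats), y).backward()
    probe_opt.step()
```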