Towards Generalising Neural Implicit Representations
- URL: http://arxiv.org/abs/2101.12690v1
- Date: Fri, 29 Jan 2021 17:20:22 GMT
- Title: Towards Generalising Neural Implicit Representations
- Authors: Theo W. Costain, Victor Adrian Prisacariu
- Abstract summary: We argue that training neural representations for reconstruction tasks alongside conventional tasks can produce more general encodings.
Our approach learns feature-rich encodings that produce high-quality results for each task.
We also reformulate the segmentation task, creating a more representative challenge for implicit representation contexts.
- Score: 15.728196666021665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural implicit representations have shown substantial improvements in
efficiently storing 3D data, when compared to conventional formats. However,
the focus of existing work has mainly been on storage and subsequent
reconstruction. In this work, we argue that training neural representations for
reconstruction tasks alongside conventional tasks can produce more general
encodings that admit reconstructions of equal quality to single-task training,
whilst providing improved results on conventional tasks compared to single-task
encodings. Through multi-task experiments on reconstruction, classification, and
segmentation, our approach learns feature-rich encodings that produce
high-quality results for each task. We also reformulate the
segmentation task, creating a more representative challenge for implicit
representation contexts.
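A minimal sketch of the general recipe the abstract describes, a shared latent code decoded by reconstruction, classification, and segmentation heads under a weighted joint loss, is given below. All module names, dimensions, and loss weights here are illustrative assumptions rather than the authors' exact architecture.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedEncoder(nn.Module):
    """Maps an input point cloud (B, N, 3) to a single latent code z."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))

    def forward(self, points):
        return self.mlp(points).max(dim=1).values  # (B, latent_dim)


class OccupancyHead(nn.Module):
    """Reconstruction head: occupancy logits for 3D query points given z."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(latent_dim + 3, 256), nn.ReLU(),
                                 nn.Linear(256, 1))

    def forward(self, z, queries):  # queries: (B, Q, 3)
        z = z.unsqueeze(1).expand(-1, queries.shape[1], -1)
        return self.mlp(torch.cat([z, queries], dim=-1)).squeeze(-1)


class SegmentationHead(nn.Module):
    """Per-query part logits, so segmentation is also posed implicitly."""
    def __init__(self, latent_dim=256, num_parts=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(latent_dim + 3, 256), nn.ReLU(),
                                 nn.Linear(256, num_parts))

    def forward(self, z, queries):
        z = z.unsqueeze(1).expand(-1, queries.shape[1], -1)
        return self.mlp(torch.cat([z, queries], dim=-1))  # (B, Q, num_parts)


def multitask_loss(encoder, occ_head, cls_head, seg_head, batch,
                   weights=(1.0, 0.1, 1.0)):
    """Weighted joint objective over reconstruction, classification,
    and segmentation, all driven by the same shared encoding z."""
    z = encoder(batch["points"])
    loss_rec = F.binary_cross_entropy_with_logits(
        occ_head(z, batch["queries"]), batch["occupancy"])
    loss_cls = F.cross_entropy(cls_head(z), batch["labels"])
    loss_seg = F.cross_entropy(
        seg_head(z, batch["queries"]).flatten(0, 1), batch["parts"].flatten())
    return weights[0] * loss_rec + weights[1] * loss_cls + weights[2] * loss_seg


# The classification head can be as simple as a linear layer on z,
# e.g. cls_head = nn.Linear(256, num_classes).
```
Training simply optimises this summed objective over batches; the claim in the abstract is that the jointly trained code z retains reconstruction quality while becoming more useful to the semantic heads than a reconstruction-only encoding.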
Related papers
- Towards Task-Compatible Compressible Representations [0.7980273012483663]
We investigate an issue in multi-task learnable compression, in which a representation learned for one task does not positively contribute to the rate-distortion performance of a different task.
In learnable scalable coding, previous work increased the utilization of side-information for input reconstruction by also rewarding input reconstruction when learning this shared representation.
We perform experiments using representations trained for object detection on COCO 2017 and depth estimation on the Cityscapes dataset, and use them to assist in image reconstruction and semantic segmentation tasks.
arXiv Detail & Related papers (2024-05-16T16:47:46Z) - Contextualising Implicit Representations for Semantic Tasks [5.453372578880444]
Prior works have demonstrated that implicit representations trained only for reconstruction tasks typically generate encodings not useful for semantic tasks.
We propose a method that contextualises the encodings of implicit representations, enabling their use in downstream tasks.
arXiv Detail & Related papers (2023-05-22T17:59:58Z) - Real-World Compositional Generalization with Disentangled Sequence-to-Sequence Learning [81.24269148865555]
A recently proposed Disentangled sequence-to-sequence model (Dangle) shows promising generalization capability.
We introduce two key modifications to this model which encourage more disentangled representations and improve its compute and memory efficiency.
Specifically, instead of adaptively re-encoding source keys and values at each time step, we disentangle their representations and only re-encode keys periodically.
arXiv Detail & Related papers (2022-12-12T15:40:30Z) - Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization [101.72755769194677]
We formulate few-shot task generalization as a reinforcement learning problem in which a task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z) - Reconstruction Task Finds Universal Winning Tickets [24.52604301906691]
Pruning well-trained neural networks is effective for achieving a promising accuracy-efficiency trade-off in computer vision regimes.
Most existing pruning algorithms focus only on the classification task defined on the source domain.
In this paper, we show that the image-level pretraining task is not capable of pruning models for diverse downstream tasks.
arXiv Detail & Related papers (2022-02-23T13:04:32Z) - Learning to Generalize Compositionally by Transferring Across Semantic Parsing Tasks [37.66114618645146]
We investigate learning representations that facilitate transfer learning from one compositional task to another.
We apply this method to semantic parsing, using three very different datasets.
Our method significantly improves compositional generalization over baselines on the test set of the target task.
arXiv Detail & Related papers (2021-11-09T09:10:21Z) - Learning to Relate Depth and Semantics for Unsupervised Domain Adaptation [87.1188556802942]
We present an approach for encoding visual task relationships to improve model performance in an Unsupervised Domain Adaptation (UDA) setting.
We propose a novel Cross-Task Relation Layer (CTRL), which encodes task dependencies between the semantic and depth predictions.
Furthermore, we propose an Iterative Self-Learning (ISL) training scheme, which exploits semantic pseudo-labels to provide extra supervision on the target domain.
arXiv Detail & Related papers (2021-05-17T13:42:09Z) - Reparameterizing Convolutions for Incremental Multi-Task Learning without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting the previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which have been shown to significantly degrade the single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z) - Data-driven Regularization via Racecar Training for Generalizing Neural Networks [28.08782668165276]
We propose a novel training approach for improving the generalization in neural networks.
We show how our formulation is easy to realize in practical network architectures via a reverse pass.
Networks trained with our approach show more balanced mutual information between input and output throughout all layers, yield improved explainability, and exhibit improved performance for a variety of tasks and task transfers.
arXiv Detail & Related papers (2020-06-30T18:00:41Z) - Probing Linguistic Features of Sentence-Level Representations in Neural
Relation Extraction [80.38130122127882]
We introduce 14 probing tasks targeting linguistic properties relevant to neural relation extraction (RE).
We use them to study representations learned by more than 40 different encoder architecture and linguistic feature combinations trained on two datasets.
We find that the bias induced by the architecture and the inclusion of linguistic features are clearly expressed in the probing task performance.
arXiv Detail & Related papers (2020-04-17T09:17:40Z) - Generalized Hindsight for Reinforcement Learning [154.0545226284078]
We argue that low-reward data collected while trying to solve one task provides little to no signal for solving that particular task.
We present Generalized Hindsight: an approximate inverse reinforcement learning technique for relabeling behaviors with the right tasks.
arXiv Detail & Related papers (2020-02-26T18:57:05Z)
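Since the last entry above is terse, here is a minimal sketch of the hindsight relabeling idea it describes: a trajectory that earned little reward on the task it was collected for is re-assigned to whichever candidate task it serves best, then reused as off-policy data. The task set, reward function, and brute-force maximisation below are simplifying assumptions; the paper itself describes an approximate inverse-RL relabeling technique rather than this exhaustive search.
```python
from typing import Callable, List, Sequence, Tuple

State = Tuple[float, ...]
Action = int
Trajectory = List[Tuple[State, Action]]


def relabel_trajectory(trajectory: Trajectory,
                       candidate_tasks: Sequence[object],
                       reward_fn: Callable[[State, Action, object], float]):
    """Return the candidate task under which this trajectory collects the
    highest total reward; the trajectory is then stored and learned from
    as if it had been collected while solving that task."""
    def total_reward(task):
        return sum(reward_fn(state, action, task)
                   for state, action in trajectory)
    return max(candidate_tasks, key=total_reward)
```
An off-policy learner would then add the (trajectory, relabeled task) pair to its replay buffer, turning otherwise low-signal experience into useful supervision for some task.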