Incidental Supervision: Moving beyond Supervised Learning
- URL: http://arxiv.org/abs/2005.12339v1
- Date: Mon, 25 May 2020 18:44:53 GMT
- Title: Incidental Supervision: Moving beyond Supervised Learning
- Authors: Dan Roth
- Abstract summary: This paper describes several learning paradigms that are designed to alleviate the supervision bottleneck.
It will illustrate their benefit in the context of multiple problems, all pertaining to inducing various levels of semantic representations from text.
- Score: 72.4859717204905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning and Inference methods have become ubiquitous in our attempt
to induce more abstract representations of natural language text, visual
scenes, and other messy, naturally occurring data, and support decisions that
depend on it. However, learning models for these tasks is difficult partly
because generating the necessary supervision signals for it is costly and does
not scale. This paper describes several learning paradigms that are designed to
alleviate the supervision bottleneck. It will illustrate their benefit in the
context of multiple problems, all pertaining to inducing various levels of
semantic representations from text.
Related papers
- Pixel Sentence Representation Learning [67.4775296225521]
In this work, we conceptualize the learning of sentence-level textual semantics as a visual representation learning process.
We employ visually-grounded text perturbation methods like typos and word order shuffling, resonating with human cognitive patterns, and enabling perturbation to be perceived as continuous.
Our approach is further bolstered by large-scale unsupervised topical alignment training and natural language inference supervision.
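The typo and word-order-shuffling perturbations mentioned above can be illustrated with a minimal string-level sketch. Note this is only an illustration of the perturbation idea, not the paper's actual method, which applies perturbations in a visually-grounded (pixel-level) setting; the function names and seeding scheme here are assumptions for the example.

```python
import random


def shuffle_words(sentence, seed=0):
    """Word-order shuffling: randomly permute the tokens of a sentence."""
    rng = random.Random(seed)
    tokens = sentence.split()
    rng.shuffle(tokens)
    return " ".join(tokens)


def introduce_typo(sentence, seed=0):
    """Typo perturbation: swap two adjacent characters in a random word."""
    rng = random.Random(seed)
    tokens = sentence.split()
    i = rng.randrange(len(tokens))
    w = tokens[i]
    if len(w) > 1:
        j = rng.randrange(len(w) - 1)
        w = w[:j] + w[j + 1] + w[j] + w[j + 2:]  # swap chars j and j+1
        tokens[i] = w
    return " ".join(tokens)
```

Both perturbations preserve the characters of the input, so the semantic content remains largely recoverable by a human reader, which is the cognitive intuition the paper appeals to.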
arXiv Detail & Related papers (2024-02-13T02:46:45Z)
- Visual Grounding Helps Learn Word Meanings in Low-Data Regimes [47.7950860342515]
Modern neural language models (LMs) are powerful tools for modeling human sentence production and comprehension.
But to achieve these results, LMs must be trained in distinctly un-human-like ways.
Do models trained more naturalistically -- with grounded supervision -- exhibit more humanlike language learning?
We investigate this question in the context of word learning, a key sub-task in language acquisition.
arXiv Detail & Related papers (2023-10-20T03:33:36Z)
- Does Deep Learning Learn to Abstract? A Systematic Probing Framework [69.2366890742283]
Abstraction is a desirable capability for deep learning models, which means to induce abstract concepts from concrete instances and flexibly apply them beyond the learning context.
We introduce a systematic probing framework to explore the abstraction capability of deep learning models from a transferability perspective.
arXiv Detail & Related papers (2023-02-23T12:50:02Z)
- Brief Introduction to Contrastive Learning Pretext Tasks for Visual Representation [0.0]
We introduce contrastive learning, a subset of unsupervised learning methods.
The purpose of contrastive learning is to embed augmented views of the same sample close to each other while pushing apart those that come from different samples.
We offer some strategies from contrastive learning that have recently been published and are focused on pretext tasks for visual representation.
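The pull-together/push-apart objective described above is commonly formalized as the InfoNCE loss. Below is a minimal NumPy sketch, assuming a batch where `z1[i]` and `z2[i]` are embeddings of two augmented views of the same sample; this is the standard formulation, not any one paper's exact variant.

```python
import numpy as np


def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE contrastive loss: row i of z1 and row i of z2 are a
    positive pair; all other cross-batch pairs act as negatives."""
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) similarity matrix
    # Cross-entropy with the positive pair on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls matching (augmented) pairs together while pushing every other sample in the batch away, which is exactly the behavior the summary describes.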
arXiv Detail & Related papers (2022-10-06T18:54:10Z)
- Semantic Exploration from Language Abstractions and Pretrained Representations [23.02024937564099]
Effective exploration is a challenge in reinforcement learning (RL).
We define novelty using semantically meaningful state abstractions.
We evaluate vision-language representations, pretrained on natural image captioning datasets.
arXiv Detail & Related papers (2022-04-08T17:08:00Z)
- Visual Adversarial Imitation Learning using Variational Models [60.69745540036375]
Reward function specification remains a major impediment for learning behaviors through deep reinforcement learning.
Visual demonstrations of desired behaviors often present an easier and more natural way to teach agents.
We develop a variational model-based adversarial imitation learning algorithm.
arXiv Detail & Related papers (2021-07-16T00:15:18Z)
- Improving Disentangled Text Representation Learning with Information-Theoretic Guidance [99.68851329919858]
The discrete nature of natural language makes disentangling textual representations more challenging.
Inspired by information theory, we propose a novel method that effectively manifests disentangled representations of text.
Experiments on both conditional text generation and text-style transfer demonstrate the high quality of our disentangled representation.
arXiv Detail & Related papers (2020-06-01T03:36:01Z)
- Adaptive Transformers for Learning Multimodal Representations [6.09170287691728]
We extend adaptive approaches to learn more about model interpretability and computational efficiency.
We study attention spans, sparse, and structured dropout methods to help understand how their attention mechanism extends for vision and language tasks.
arXiv Detail & Related papers (2020-05-15T12:12:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.