Pragmatically Informative Color Generation by Grounding Contextual
Modifiers
- URL: http://arxiv.org/abs/2010.04372v1
- Date: Fri, 9 Oct 2020 04:54:54 GMT
- Title: Pragmatically Informative Color Generation by Grounding Contextual
Modifiers
- Authors: Zhengxuan Wu, Desmond C. Ong
- Abstract summary: Given a reference color "green", and a modifier "bluey," how does one generate a color that could represent "bluey green"?
We propose a computational pragmatics model that formulates this color generation task as a recursive game between speakers and listeners.
In this paper, we show that incorporating pragmatic information provides significant improvements in performance compared with other state-of-the-art deep learning models.
- Score: 14.394987796101349
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Grounding language in contextual information is crucial for fine-grained
natural language understanding. One important task that involves grounding
contextual modifiers is color generation. Given a reference color "green", and
a modifier "bluey", how does one generate a color that could represent "bluey
green"? We propose a computational pragmatics model that formulates this color
generation task as a recursive game between speakers and listeners. In our
model, a pragmatic speaker reasons about the inferences that a listener would
make, and thus generates a modified color that is maximally informative to help
the listener recover the original referents. In this paper, we show that
incorporating pragmatic information provides significant improvements in
performance compared with other state-of-the-art deep learning models where
pragmatic inference and flexibility in representing colors from a large
continuous space are lacking. Our model has an absolute 98% increase in
performance for the test cases where the reference colors are unseen during
training, and an absolute 40% increase in performance for the test cases where
both the reference colors and the modifiers are unseen during training.
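The recursive speaker-listener game the abstract describes follows the Rational Speech Acts (RSA) pattern: a literal listener interprets utterances by their semantics, and a pragmatic speaker chooses the utterance that best helps that listener recover the intended referent. A minimal numeric sketch of this recursion, using a toy hand-picked 3x3 semantics matrix rather than the paper's learned color representations (all values here are illustrative assumptions):

```python
import numpy as np

# Rows: candidate colors; columns: utterances.
# Toy compatibility scores (hypothetical), not learned values.
COLORS = ["green", "greenish blue", "blue"]
UTTERANCES = ["green", "bluey green", "blue"]
semantics = np.array([
    [1.0, 0.2, 0.0],   # pure green
    [0.5, 1.0, 0.5],   # greenish blue
    [0.0, 0.2, 1.0],   # pure blue
])

def normalize(m, axis):
    """Renormalize a nonnegative matrix along the given axis."""
    return m / m.sum(axis=axis, keepdims=True)

# Literal listener: P(color | utterance), proportional to semantics.
literal_listener = normalize(semantics, axis=0)

# Pragmatic speaker: P(utterance | color), preferring utterances under
# which the literal listener recovers the intended color; alpha is a
# rationality parameter sharpening the choice.
alpha = 4.0
pragmatic_speaker = normalize(literal_listener ** alpha, axis=1)

# For the in-between color, the modified description wins out.
best = UTTERANCES[pragmatic_speaker[1].argmax()]
print(best)  # -> "bluey green"
```

The same recursion can be deepened (a pragmatic listener reasoning about the pragmatic speaker, and so on); the paper's contribution is grounding this game in a continuous color space rather than a small discrete set like the one above.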
Related papers
- Collapsed Language Models Promote Fairness [88.48232731113306]
We find that debiased language models exhibit collapsed alignment between token representations and word embeddings.
We design a principled fine-tuning method that can effectively improve fairness in a wide range of debiasing methods.
arXiv Detail & Related papers (2024-10-06T13:09:48Z)
- Learning High-Quality and General-Purpose Phrase Representations [9.246374019271938]
Phrase representations play an important role in data science and natural language processing.
Current state-of-the-art method involves fine-tuning pre-trained language models for phrasal embeddings.
We propose an improved framework to learn phrase representations in a context-free fashion.
arXiv Detail & Related papers (2024-01-18T22:32:31Z)
- Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with unrelated attributes in downstream tasks.
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z)
- How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases [28.58785395946639]
We show that pre-training can teach language models to rely on hierarchical syntactic features when performing tasks after fine-tuning.
We focus on architectural features (depth, width, and number of parameters), as well as the genre and size of the pre-training corpus.
arXiv Detail & Related papers (2023-05-31T14:38:14Z)
- L-CAD: Language-based Colorization with Any-level Descriptions using Diffusion Priors [62.80068955192816]
We propose a unified model to perform language-based colorization with any-level descriptions.
We leverage the pretrained cross-modality generative model for its robust language understanding and rich color priors.
With the proposed novel sampling strategy, our model achieves instance-aware colorization in diverse and complex scenarios.
arXiv Detail & Related papers (2023-05-24T14:57:42Z)
- Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color [18.573415435334105]
We employ a dataset of monolexemic color terms and color chips represented in CIELAB, a color space with a perceptually meaningful distance metric.
Using two methods of evaluating the structural alignment of colors in this space with text-derived color term representations, we find significant correspondence.
We find that warmer colors are, on average, better aligned to the perceptual color space than cooler ones.
arXiv Detail & Related papers (2021-09-13T17:09:40Z)
- Sentiment analysis in tweets: an assessment study from classical to modern text representation models [59.107260266206445]
Short texts published on Twitter have earned significant attention as a rich source of information.
Their inherent characteristics, such as their informal and noisy linguistic style, remain challenging for many natural language processing (NLP) tasks.
This study presents an assessment of existing language models in distinguishing the sentiment expressed in tweets, using a rich collection of 22 datasets.
arXiv Detail & Related papers (2021-05-29T21:05:28Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- Grounded Compositional Outputs for Adaptive Language Modeling [59.02706635250856]
A language model's vocabulary, typically selected before training and permanently fixed afterward, affects its size.
We propose a fully compositional output embedding layer for language models.
To our knowledge, the result is the first word-level language model with a size that does not depend on the training vocabulary.
arXiv Detail & Related papers (2020-09-24T07:21:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.