Exploration of the Usage of Color Terms by Color-blind Participants in
Online Discussion Platforms
- URL: http://arxiv.org/abs/2210.11905v1
- Date: Fri, 21 Oct 2022 12:11:10 GMT
- Title: Exploration of the Usage of Color Terms by Color-blind Participants in
Online Discussion Platforms
- Authors: Ella Rabinovich and Boaz Carmeli
- Abstract summary: We show that red-green color-blind speakers use the "red" and "green" color terms in less predictable contexts.
These findings shed new light on the role of sensory experience in our linguistic system.
- Score: 4.445130093341008
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Prominent questions about the role of sensory vs. linguistic input in the way
we acquire and use language have been extensively studied in the
psycholinguistic literature. However, the relative effect of various factors in
a person's overall experience on their linguistic system remains unclear. We
study this question by taking a step toward a better understanding of
the conceptual perception of colors by color-blind individuals, as reflected in
their spontaneous linguistic productions. Using a novel and carefully curated
dataset, we show that red-green color-blind speakers use the "red" and "green"
color terms in less predictable contexts, and in linguistic environments that
evoke mental imagery to a lesser extent, compared to their normal-sighted
counterparts. These findings shed new light on the role of sensory experience
in shaping our linguistic system.
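The notion of a color term appearing in "less predictable contexts" can be operationalized as surprisal: the negative log-probability of the term given its preceding words. The paper's actual measure may differ; the sketch below is a minimal n-gram illustration, and the function name and toy corpora are hypothetical.

```python
import math
from collections import Counter, defaultdict

def color_term_surprisal(corpus, color_term, context_size=1):
    """Average surprisal (-log2 p) of `color_term` given its preceding
    n-gram context, as a rough proxy for contextual predictability:
    higher surprisal means the term occurs in less predictable contexts."""
    # Count, for each context, which words follow it and how often.
    context_counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i in range(context_size, len(tokens)):
            context = tuple(tokens[i - context_size:i])
            context_counts[context][tokens[i]] += 1

    # Surprisal of the color term in each context where it appears.
    surprisals = []
    for context, counts in context_counts.items():
        if color_term in counts:
            p = counts[color_term] / sum(counts.values())
            surprisals.append(-math.log2(p))
    return sum(surprisals) / len(surprisals) if surprisals else None

# Toy corpora: in the first, "red" always follows "the"; in the second,
# "the" is also followed by competing color terms.
predictable = ["the red rose", "the red apple", "the red car", "the red hat"]
varied = ["the red rose", "the blue rose", "the red car", "the green hat"]

print(color_term_surprisal(predictable, "red"))  # → 0.0 (fully predictable)
print(color_term_surprisal(varied, "red"))       # → 1.0 (p = 0.5 given "the")
```

A study like this one would compare such per-speaker surprisal estimates (in practice, from a language model rather than raw n-gram counts) between color-blind and normal-sighted writers.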
Related papers
- Perceptual Structure in the Absence of Grounding for LLMs: The Impact of
Abstractedness and Subjectivity in Color Language [2.6094835036012864]
We show that there is considerable alignment between a defined color space and the feature space defined by a language model.
Our results show that while color space alignment holds for monolexemic, highly pragmatic color descriptions, this alignment drops considerably in the presence of examples that exhibit elements of real linguistic usage.
arXiv Detail & Related papers (2023-11-22T02:12:36Z)
- Natural Language Decompositions of Implicit Content Enable Better Text Representations [56.85319224208865]
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account.
We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed.
Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z)
- On Human Visual Contrast Sensitivity and Machine Vision Robustness: A Comparative Study [68.41864523774164]
How color differences affect machine vision has not been well explored.
Our work tries to bridge this gap between the human color vision aspect of visual recognition and that of the machine.
We devise a new framework in two dimensions to perform extensive analyses on the effect of color contrast and corrupted images.
arXiv Detail & Related papers (2022-12-16T18:51:41Z)
- Visual Superordinate Abstraction for Robust Concept Learning [80.15940996821541]
Concept learning constructs visual representations that are connected to linguistic semantics.
We ascribe the bottleneck to a failure to explore the intrinsic semantic hierarchy of visual concepts.
We propose a visual superordinate abstraction framework for explicitly modeling semantic-aware visual subspaces.
arXiv Detail & Related papers (2022-05-28T14:27:38Z)
- Color Overmodification Emerges from Data-Driven Learning and Pragmatic Reasoning [53.088796874029974]
We show that speakers' referential expressions depart from communicative ideals in ways that help illuminate the nature of pragmatic language use.
By adopting neural networks as learning agents, we show that overmodification is more likely with environmental features that are infrequent or salient.
arXiv Detail & Related papers (2022-05-18T18:42:43Z)
- Exploring the Sensory Spaces of English Perceptual Verbs in Natural Language Data [0.40611352512781856]
We focus on the most frequent perception verbs of English, analyzed in terms of an Agentive vs. Experiential distinction.
In this study we report on a data-driven approach based on distributional-semantic word embeddings and clustering models.
arXiv Detail & Related papers (2021-10-19T03:58:44Z)
- Perception Point: Identifying Critical Learning Periods in Speech for Bilingual Networks [58.24134321728942]
We identify and compare cognitive aspects of deep neural network-based visual lip-reading models.
We observe a strong correspondence between theories from cognitive psychology and our modeling.
arXiv Detail & Related papers (2021-10-13T05:30:50Z)
- Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color [18.573415435334105]
We employ a dataset of monolexemic color terms and color chips represented in CIELAB, a color space with a perceptually meaningful distance metric.
Using two methods of evaluating the structural alignment of colors in this space with text-derived color term representations, we find significant correspondence.
We find that warmer colors are, on average, better aligned to the perceptual color space than cooler ones.
arXiv Detail & Related papers (2021-09-13T17:09:40Z)
- Semantic-driven Colorization [78.88814849391352]
Recent colorization works implicitly predict semantic information while learning to colorize black-and-white images.
In this study, we mimic that human behavior: our network first learns to understand the photo, and then colorizes it.
arXiv Detail & Related papers (2020-06-13T08:13:30Z)
- Emosaic: Visualizing Affective Content of Text at Varying Granularity [0.0]
Emosaic is a tool for visualizing the emotional tone of text documents.
We capitalize on an established three-dimensional model of human emotion.
arXiv Detail & Related papers (2020-02-24T07:25:01Z)
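The structural-alignment idea behind "Can Language Models Encode Perceptual Structure Without Grounding?" (listed above) can be sketched as a simple representational similarity analysis: correlate pairwise distances between colors in CIELAB with pairwise distances between the corresponding text-derived representations. This is an illustrative sketch, not the paper's exact method; the CIELAB coordinates and "embedding" vectors below are made-up toy values.

```python
import math
from itertools import combinations

def pairwise_dists(points):
    """Euclidean distance for every unordered pair of points."""
    return [math.dist(a, b) for a, b in combinations(points, 2)]

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (L*, a*, b*) coordinates for a few color terms, and toy 3-d
# "embedding" vectors for the same terms, in matching order.
cielab = [(53, 80, 67), (88, -86, 83), (32, 79, -108), (97, -22, 94)]
embeddings = [(0.9, 0.1, 0.2), (0.1, 0.8, 0.3), (0.2, 0.2, 0.9), (0.3, 0.7, 0.4)]

# Alignment score: how well embedding distances track perceptual distances.
alignment = pearson(pairwise_dists(cielab), pairwise_dists(embeddings))
print(round(alignment, 3))
```

In practice one would use real monolexemic color-chip coordinates and language-model representations, and typically a rank correlation (e.g. Spearman) rather than Pearson, but the structure of the comparison is the same.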
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.