SLAyiNG: Towards Queer Language Processing
- URL: http://arxiv.org/abs/2509.17449v1
- Date: Mon, 22 Sep 2025 07:41:45 GMT
- Title: SLAyiNG: Towards Queer Language Processing
- Authors: Leonor Veloso, Lea Hirlimann, Philipp Wicke, Hinrich Schütze
- Abstract summary: SLAyiNG is the first dataset containing annotated queer slang derived from subtitles, social media posts, and podcasts. We describe our data curation process, including the collection of slang terms and definitions and the scraping of sources for examples that reflect how these terms are used. As preliminary results, we calculate inter-annotator agreement for human annotators and OpenAI's model o3-mini.
- Score: 44.4984082814346
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge of slang is a desirable feature of LLMs in the context of user interaction, as slang often reflects an individual's social identity. Several works on informal language processing have defined and curated benchmarks for tasks such as detection and identification of slang. In this paper, we focus on queer slang. Queer slang can be mistakenly flagged as hate speech or can evoke negative responses from LLMs during user interaction. Research efforts so far have not focused explicitly on queer slang. In particular, detection and processing of queer slang have not been thoroughly evaluated due to the lack of a high-quality annotated benchmark. To address this gap, we curate SLAyiNG, the first dataset containing annotated queer slang derived from subtitles, social media posts, and podcasts, reflecting real-world usage. We describe our data curation process, including the collection of slang terms and definitions, scraping sources for examples that reflect usage of these terms, and our ongoing annotation process. As preliminary results, we calculate inter-annotator agreement for human annotators and OpenAI's model o3-mini, evaluating performance on the task of sense disambiguation. Reaching an average Krippendorff's alpha of 0.746, we argue that state-of-the-art reasoning models can serve as tools for pre-filtering, but the complex and often sensitive nature of queer language data requires expert and community-driven annotation efforts.
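The abstract reports agreement between human annotators and o3-mini on sense disambiguation via Krippendorff's alpha. Below is a minimal sketch, not the authors' code, of how such agreement could be computed; it assumes the third-party `krippendorff` Python package (pip install krippendorff), and the annotator rows and integer sense labels are invented for illustration.

```python
# Minimal sketch: nominal Krippendorff's alpha over sense-disambiguation labels.
# Rows = annotators (e.g., two humans + o3-mini), columns = examples.
# Integer cells are sense ids for the slang term in that example; np.nan marks
# examples an annotator did not label. All values here are hypothetical.
import numpy as np
import krippendorff

ratings = np.array([
    [0, 1, 1, 0, 2, np.nan, 1],   # human annotator 1
    [0, 1, 1, 0, 2, 2,      1],   # human annotator 2
    [0, 1, 0, 0, 2, 2,      1],   # o3-mini (hypothetical labels)
])

# Sense labels are unordered categories, so the nominal level of measurement applies.
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.3f}")
```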
Related papers
- AIWizards at MULTIPRIDE: A Hierarchical Approach to Slur Reclamation Detection [0.42970700836450487]
We propose a hierarchical approach to modeling the slur reclamation process. Our core assumption is that members of the LGBTQ+ community are more likely to employ certain slurs in a reclamatory manner. Experimental results on Italian and Spanish show that our approach is statistically comparable to a strong BERT-based baseline.
arXiv Detail & Related papers (2026-02-13T11:01:19Z) - How do Language Models Generate Slang: A Systematic Comparison between Human and Machine-Generated Slang Usages [2.887631096209473]
Slang is a commonly used type of informal language that poses a daunting challenge to NLP systems. Recent advances in large language models (LLMs) have made the problem more approachable. We compare human-attested slang usages from the Online Slang Dictionary (OSD) and slang generated by GPT-4o and Llama-3.
arXiv Detail & Related papers (2025-09-19T01:49:27Z) - SlangDIT: Benchmarking LLMs in Interpretative Slang Translation [89.48208612476068]
This paper introduces the interpretative slang translation task (named SlangDIT). It consists of three sub-tasks: slang detection, cross-lingual slang explanation, and slang translation within the current context. Based on the benchmark, we propose a deep-thinking model named SlangOWL: it first identifies whether the sentence contains a slang term, then judges whether the slang is polysemous and analyzes its possible meanings.
arXiv Detail & Related papers (2025-05-20T10:37:34Z) - ImpScore: A Learnable Metric For Quantifying The Implicitness Level of Sentence [40.4052848203136]
Implicit language is essential for natural language processing systems to achieve precise text understanding and facilitate natural interactions with users. This paper develops a scalar metric that quantifies the implicitness level of language without relying on external references. We validate ImpScore through a user study that compares its assessments with human evaluations on out-of-distribution data.
arXiv Detail & Related papers (2024-11-07T20:23:29Z) - Using Natural Language Explanations to Rescale Human Judgments [81.66697572357477]
We propose a method to rescale ordinal annotations and explanations using large language models (LLMs). We feed annotators' Likert ratings and corresponding explanations into an LLM and prompt it to produce a numeric score anchored in a scoring rubric. Our method rescales the raw judgments without impacting agreement and brings the scores closer to human judgments grounded in the same scoring rubric. (A minimal illustrative sketch of this rescaling idea appears after the related-papers list below.)
arXiv Detail & Related papers (2023-05-24T06:19:14Z) - We're Afraid Language Models Aren't Modeling Ambiguity [136.8068419824318]
Managing ambiguity is a key part of human language understanding.
We characterize ambiguity in a sentence by its effect on entailment relations with another sentence.
We show that a multilabel NLI model can flag political claims in the wild that are misleading due to ambiguity.
arXiv Detail & Related papers (2023-04-27T17:57:58Z) - Semantic Parsing for Conversational Question Answering over Knowledge Graphs [63.939700311269156]
We develop a dataset where user questions are annotated with SPARQL parses and system answers correspond to execution results thereof.
We present two different semantic parsing approaches and highlight the challenges of the task.
Our dataset and models are released at https://github.com/Edinburgh/SPICE.
arXiv Detail & Related papers (2023-01-28T14:45:11Z) - A Study of Slang Representation Methods [3.511369967593153]
We study different combinations of representation learning models and knowledge resources for a variety of downstream tasks that rely on slang understanding.
Our error analysis identifies core challenges for slang representation learning, including out-of-vocabulary words, polysemy, variance, and annotation disagreements.
arXiv Detail & Related papers (2022-12-11T21:56:44Z) - Words aren't enough, their order matters: On the Robustness of Grounding Visual Referring Expressions [87.33156149634392]
We critically examine RefCOCOg, a standard benchmark for visual referring expression recognition.
We show that 83.7% of test instances do not require reasoning on linguistic structure.
We propose two methods, one based on contrastive learning and the other based on multi-task learning, to increase the robustness of ViLBERT.
arXiv Detail & Related papers (2020-05-04T17:09:15Z)
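As referenced above, the following is a minimal, hypothetical sketch of the rubric-anchored rescaling idea summarized in the "Using Natural Language Explanations to Rescale Human Judgments" entry; it is not that paper's prompt or code. The rubric text, the 0-100 scale, and the `call_llm` placeholder are invented for illustration, and the example uses a slang-related rubric only to match this page's topic.

```python
# Hypothetical sketch: turn a raw Likert rating plus its free-text explanation
# into a numeric score anchored in a fixed rubric, by prompting an LLM.
# `call_llm` is a placeholder for whatever chat-completion client is available.
from typing import Callable

RUBRIC = """Score the annotated example on a 0-100 scale:
0-25   = clearly not slang usage
26-50  = ambiguous, leaning non-slang
51-75  = ambiguous, leaning slang
76-100 = clearly slang usage"""

def rescale_judgment(likert_rating: int,
                     explanation: str,
                     call_llm: Callable[[str], str]) -> float:
    """Rescale one annotator judgment against the rubric via an LLM."""
    prompt = (
        f"{RUBRIC}\n\n"
        f"An annotator gave a rating of {likert_rating} on a 1-5 Likert scale "
        f"and explained their decision as follows:\n\"{explanation}\"\n\n"
        "Return only a single number between 0 and 100 reflecting the rubric."
    )
    return float(call_llm(prompt).strip())
```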