A Survey in Automatic Irony Processing: Linguistic, Cognitive, and
Multi-X Perspectives
- URL: http://arxiv.org/abs/2209.04712v1
- Date: Sat, 10 Sep 2022 17:03:34 GMT
- Authors: Qingcheng Zeng, An-Ran Li
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Irony is a ubiquitous form of figurative language in daily
communication. Previously, many researchers have approached irony from
linguistic, cognitive-science, and computational perspectives. Recently,
progress has been made in automatic irony processing thanks to the rapid
development of deep neural models in natural language processing (NLP). In this
paper, we provide a comprehensive overview of computational irony, insights
from linguistic theory and cognitive science, and its interactions with
downstream NLP tasks and newly proposed multi-X irony processing perspectives.
Related papers
- Irony Detection, Reasoning and Understanding in Zero-shot Learning
Irony is a powerful form of figurative language (FL) on social media that can potentially mislead various NLP tasks.
Large language models, such as ChatGPT, are increasingly able to capture implicit and contextual information.
We propose IDADP, a prompt-engineering design framework, to achieve higher irony detection accuracy, improved understanding of irony, and more effective explanations.
arXiv Detail & Related papers (2025-01-28T12:13:07Z)
- A Survey on Lexical Ambiguity Detection and Word Sense Disambiguation
This paper explores techniques for understanding and resolving ambiguity in language within the field of natural language processing (NLP).
It outlines diverse approaches, ranging from deep learning techniques to leveraging lexical resources and knowledge graphs such as WordNet.
The research identifies persistent challenges in the field, such as the scarcity of sense-annotated corpora and the complexity of informal clinical texts.
arXiv Detail & Related papers (2024-03-24T12:58:48Z)
- Natural Language Processing for Dialects of a Language: A Survey
State-of-the-art natural language processing (NLP) models are trained on massive corpora and report superlative performance on evaluation datasets.
This survey delves into an important attribute of these datasets: the dialect of a language.
Motivated by the performance degradation of NLP models on dialectal datasets and its implications for the equity of language technologies, we survey past research in NLP for dialects in terms of datasets and approaches.
arXiv Detail & Related papers (2024-01-11T03:04:38Z)
- Visually Grounded Language Learning: a review of language games, datasets, tasks, and models
Many Vision+Language (V+L) tasks have been defined with the aim of creating models that can ground symbols in the visual modality.
In this work, we provide a systematic literature review of several tasks and models proposed in the V+L field.
arXiv Detail & Related papers (2023-12-05T02:17:29Z)
- Analysis of the Evolution of Advanced Transformer-Based Language Models: Experiments on Opinion Mining
This paper studies the behaviour of cutting-edge Transformer-based language models on opinion mining.
Our comparative study identifies the leading models and paves the way for production engineers in deciding which approach to focus on.
arXiv Detail & Related papers (2023-08-07T01:10:50Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aimed at benchmarking machines' capabilities in pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- Self-Supervised Speech Representation Learning: A Review
Self-supervised representation learning methods promise a single universal model that would benefit a wide variety of tasks and domains.
Speech representation learning is experiencing similar progress in three main categories: generative, contrastive, and predictive methods.
This review presents approaches for self-supervised speech representation learning and their connection to other research areas.
arXiv Detail & Related papers (2022-05-21T16:52:57Z)
- Visualizing and Explaining Language Models
Natural Language Processing has become, after Computer Vision, the second most prominent field of Artificial Intelligence.
This paper showcases the techniques used in some of the most popular Deep Learning for NLP visualizations, with a special focus on interpretability and explainability.
arXiv Detail & Related papers (2022-04-30T17:23:33Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks
We propose to combine symbolist and connectionist principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrate that machines can generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Crossing the Conversational Chasm: A Primer on Multilingual Task-Oriented Dialogue Systems
Current state-of-the-art task-oriented dialogue (ToD) models based on large pretrained neural language models are data hungry.
Data acquisition for ToD use cases is expensive and tedious.
arXiv Detail & Related papers (2021-04-17T15:19:56Z)
- Human Sentence Processing: Recurrence or Attention?
The recently introduced Transformer architecture outperforms RNNs on many natural language processing tasks.
We compare Transformer- and RNN-based language models' ability to account for measures of human reading effort.
arXiv Detail & Related papers (2020-05-19T14:17:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.