A Preliminary Study for Literary Rhyme Generation based on Neuronal Representation, Semantics and Shallow Parsing
- URL: http://arxiv.org/abs/2112.13241v1
- Date: Sat, 25 Dec 2021 14:40:09 GMT
- Title: A Preliminary Study for Literary Rhyme Generation based on Neuronal Representation, Semantics and Shallow Parsing
- Authors: Luis-Gil Moreno-Jiménez, Juan-Manuel Torres-Moreno, Roseli S. Wedemann
- Abstract summary: We introduce a model for the generation of literary rhymes in Spanish, combining structures of language and neural network models.
Results obtained with a manual evaluation of the texts generated by our algorithm are encouraging.
- Score: 1.7188280334580195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, researchers in the area of Computational Creativity have
studied the human creative process proposing different approaches to reproduce
it with a formal procedure. In this paper, we introduce a model for the
generation of literary rhymes in Spanish, combining structures of language and
neural network models (Word2vec) into a structure for semantic
assimilation. The results obtained with a manual evaluation of the texts
generated by our algorithm are encouraging.
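As a rough sketch of the kind of pipeline the abstract describes (not the authors' implementation), the example below pairs a Word2vec model with a crude suffix-based rhyme check to propose rhyming, semantically related candidate words. The toy corpus, the two-character suffix heuristic, and all names are illustrative assumptions.

```python
# Illustrative sketch only: combine Word2vec similarity with a crude
# suffix-based rhyme check. Corpus, heuristic, and names are assumptions,
# not the paper's actual pipeline.
from gensim.models import Word2Vec

# Tiny toy corpus; the paper trains on a literary Spanish corpus instead.
corpus = [
    ["la", "luna", "brilla", "sobre", "el", "mar"],
    ["quiero", "cantar", "y", "soñar", "junto", "al", "mar"],
    ["el", "viento", "viene", "para", "amar"],
]
model = Word2Vec(corpus, vector_size=32, min_count=1, seed=0)

def rhymes(a: str, b: str, suffix_len: int = 2) -> bool:
    """Crude assonance check: do two distinct words share a final suffix?"""
    return a != b and a[-suffix_len:] == b[-suffix_len:]

def rhyming_candidates(seed_word: str, topn: int = 10):
    """Semantically close words (via Word2vec) that also rhyme with the seed."""
    return [(w, s) for w, s in model.wv.most_similar(seed_word, topn=topn)
            if rhymes(seed_word, w)]

print(rhyming_candidates("mar"))
```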
Related papers
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z)
- Language Model Decoding as Direct Metrics Optimization [87.68281625776282]
Current decoding methods struggle to generate texts that align with human texts across different aspects.
In this work, we frame decoding from a language model as an optimization problem with the goal of strictly matching the expected performance with human texts.
We prove that this induced distribution is guaranteed to improve the perplexity on human texts, which suggests a better approximation to the underlying distribution of human texts.
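Very loosely, the idea can be pictured as preferring generations whose statistics match a reference value measured on human text. The metric (mean word length) and candidates below are placeholder assumptions; the paper works with the decoding distribution itself, not post-hoc reranking.

```python
# Loose illustration only: rerank sampled candidates so a chosen statistic
# matches a value measured on human text. The metric is a placeholder.
def mean_word_length(text: str) -> float:
    words = text.split()
    return sum(len(w) for w in words) / max(len(words), 1)

def pick(candidates: list[str], human_value: float) -> str:
    """Choose the candidate whose statistic is closest to the human reference."""
    return min(candidates, key=lambda c: abs(mean_word_length(c) - human_value))

human_ref = mean_word_length("the quick brown fox jumps over the lazy dog")
samples = [
    "a b c d",
    "plain words in a row",
    "extraordinarily sesquipedalian verbiage throughout",
]
print(pick(samples, human_ref))
```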
arXiv Detail & Related papers (2023-10-02T09:35:27Z)
- SciMON: Scientific Inspiration Machines Optimized for Novelty [68.46036589035539]
We explore and enhance the ability of neural language models to generate novel scientific directions grounded in literature.
We take a dramatic departure with a novel setting in which models take background contexts as input.
We present SciMON, a modeling framework that uses retrieval of "inspirations" from past scientific papers.
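As a hedged sketch of retrieval-based "inspiration" (not the actual SciMON framework), the snippet below retrieves the past abstract most similar to a background context; TF-IDF stands in for whatever retriever the paper uses, and all texts are made up.

```python
# Toy retrieval of an "inspiration" for a background context. TF-IDF is an
# assumption standing in for SciMON's actual retriever; texts are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_papers = [
    "neural rhyme generation with word embeddings",
    "graph neural networks for molecular property prediction",
    "contextual embeddings for lexical semantic change detection",
]
background = "generating poetry with distributional semantics"

matrix = TfidfVectorizer().fit_transform(past_papers + [background])
scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
best = max(range(len(past_papers)), key=lambda i: scores[i])
print("inspiration:", past_papers[best])
```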
arXiv Detail & Related papers (2023-05-23T17:12:08Z)
- A Survey of Text Representation Methods and Their Genealogy [0.0]
In recent years, with the advent of highly scalable artificial-neural-network-based text representation methods, the field of natural language processing has seen unprecedented growth and sophistication.
We provide a survey of current approaches by arranging them in a genealogy and by conceptualizing a taxonomy of text representation methods to examine and explain the state of the art.
arXiv Detail & Related papers (2022-11-26T15:22:01Z)
- Neural Unsupervised Reconstruction of Protolanguage Word Forms [34.66200889614538]
We present a state-of-the-art neural approach to the unsupervised reconstruction of ancient word forms.
We extend this work with neural models that can capture more complicated phonological and morphological changes.
arXiv Detail & Related papers (2022-11-16T05:38:51Z)
- A Latent-Variable Model for Intrinsic Probing [93.62808331764072]
We propose a novel latent-variable formulation for constructing intrinsic probes.
We find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
arXiv Detail & Related papers (2022-01-20T15:01:12Z)
- How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN [63.79300884115027]
Current language models can generate high-quality text.
Are they simply copying text they have seen before, or have they learned generalizable linguistic abstractions?
We introduce RAVEN, a suite of analyses for assessing the novelty of generated text.
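In the same spirit (this is not the RAVEN suite itself), novelty can be approximated as the fraction of generated n-grams never seen in the training corpus; the texts and n = 3 below are illustrative.

```python
# Toy n-gram novelty check in the spirit of RAVEN (not the actual suite):
# a generated n-gram counts as novel if it never occurs in the training text.
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novelty_rate(generated: str, training: str, n: int = 3) -> float:
    gen = ngrams(generated.split(), n)
    return len(gen - ngrams(training.split(), n)) / max(len(gen), 1)

train_text = "the cat sat on the mat and the dog slept by the door"
gen_text = "the cat sat on the sofa and dreamed of the door"
print(f"novel trigram rate: {novelty_rate(gen_text, train_text):.2f}")
```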
arXiv Detail & Related papers (2021-11-18T04:07:09Z)
- Text analysis and deep learning: A network approach [0.0]
We propose a novel method that combines transformer models with network analysis to form a self-referential representation of language use within a corpus of interest.
Our approach produces linguistic relations strongly consistent with the underlying model as well as mathematically well-defined operations on them.
It represents, to the best of our knowledge, the first unsupervised method to extract semantic networks directly from deep language models.
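A toy version of the idea (not the paper's method, which derives relations from a transformer) is to connect words whose embedding similarity exceeds a threshold; here the vectors are random placeholders standing in for model-derived embeddings.

```python
# Toy semantic network from pairwise embedding similarity. Random vectors are
# placeholders for transformer-derived embeddings; the threshold is arbitrary.
import itertools
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
words = ["poem", "verse", "rhyme", "stanza", "protein"]
emb = {w: rng.normal(size=16) for w in words}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

G = nx.Graph()
G.add_nodes_from(words)
for a, b in itertools.combinations(words, 2):
    s = cos(emb[a], emb[b])
    if s > 0.2:  # arbitrary cutoff for drawing an edge
        G.add_edge(a, b, weight=s)
print(sorted(G.edges(data="weight")))
```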
arXiv Detail & Related papers (2021-10-08T14:18:36Z)
- Positioning yourself in the maze of Neural Text Generation: A Task-Agnostic Survey [54.34370423151014]
This paper surveys the components of modeling approaches, relating task impacts across various generation tasks such as storytelling, summarization, and translation.
We present an abstraction of the imperative techniques with respect to learning paradigms, pretraining, modeling approaches, decoding, and the key challenges outstanding in the field for each of them.
arXiv Detail & Related papers (2020-10-14T17:54:42Z)
- Analysing Lexical Semantic Change with Contextualised Word Representations [7.071298726856781]
We propose a novel method that exploits the BERT neural language model to obtain representations of word usages.
We create a new evaluation dataset and show that the model representations and the detected semantic shifts are positively correlated with human judgements.
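The core mechanic can be sketched as follows (this is not the paper's full pipeline, and bert-base-uncased plus the example sentences are assumptions): embed the same target word in two usage contexts and compare the contextual vectors; low similarity between time periods would suggest a semantic shift.

```python
# Sketch only: compare BERT contextual vectors of one word in two usages.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def usage_vector(sentence: str, word: str) -> torch.Tensor:
    """Mean of the last hidden states over the target word's subword span."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    ids = tok(word, add_special_tokens=False)["input_ids"]
    seq = enc["input_ids"][0].tolist()
    for i in range(len(seq) - len(ids) + 1):  # first matching subword span
        if seq[i:i + len(ids)] == ids:
            return hidden[i:i + len(ids)].mean(dim=0)
    raise ValueError("word not found in sentence")

v_old = usage_vector("the gay crowd danced merrily", "gay")
v_new = usage_vector("he came out as gay last year", "gay")
print(torch.cosine_similarity(v_old, v_new, dim=0).item())
```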
arXiv Detail & Related papers (2020-04-29T12:18:14Z)
- Generación automática de frases literarias en español (Automatic Generation of Literary Sentences in Spanish) [1.2998637003026272]
We address the automatic generation of literary sentences in Spanish.
We propose three models of text generation based mainly on statistical algorithms and shallow parsing analysis.
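One simple way to picture generation guided by shallow structure (a toy, not the paper's three models) is to keep a part-of-speech template from a parsed sentence and refill its slots from a lexicon; the template and lexicon below are made up, and gender agreement is ignored.

```python
# Toy template-based generation guided by shallow (POS-level) structure.
# Template and lexicon are made-up assumptions, not the paper's models.
import random

template = ["DET", "NOUN", "VERB", "ADP", "DET", "NOUN"]
lexicon = {
    "DET": ["la", "una"],
    "NOUN": ["luna", "noche", "rosa"],
    "VERB": ["canta", "duerme"],
    "ADP": ["sobre", "bajo"],
}
random.seed(3)
print(" ".join(random.choice(lexicon[tag]) for tag in template))
```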
arXiv Detail & Related papers (2020-01-17T15:42:14Z)