THEaiTRE: Artificial Intelligence to Write a Theatre Play
- URL: http://arxiv.org/abs/2006.14668v1
- Date: Thu, 25 Jun 2020 19:24:57 GMT
- Title: THEaiTRE: Artificial Intelligence to Write a Theatre Play
- Authors: Rudolf Rosa, Ondřej Dušek, Tom Kocmi, David Mareček, Tomáš Musil,
Patrícia Schmidtová, Dominik Jurko, Ondřej Bojar, Daniel Hrbek, David Košťák,
Martina Kinská, Josef Doležal and Klára Vosecká
- Abstract summary: THEaiTRE is a project aimed at automatic generation of theatre play scripts.
We plan to adopt generative neural language models and hierarchical generation approaches, supported by summarization and machine translation methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present THEaiTRE, a starting project aimed at automatic generation of
theatre play scripts. This paper reviews related work and drafts an approach we
intend to follow. We plan to adopt generative neural language models and
hierarchical generation approaches, supported by summarization and machine
translation methods, and complemented with a human-in-the-loop approach.
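The hierarchical generation approach mentioned above can be sketched as a pipeline in which each level of the play (synopsis, then scene outlines, then dialogue) is generated conditioned on the level above it. The sketch below is illustrative only: `generate(prompt)` is a hypothetical stand-in for a neural language model, not the project's actual implementation.

```python
def generate_play(generate, theme, n_scenes=3):
    """Hierarchical generation sketch: synopsis -> scene outlines -> dialogue.

    `generate` is any callable mapping a text prompt to generated text
    (e.g. a wrapper around a neural language model).
    """
    synopsis = generate(f"Write a one-paragraph synopsis for a play about {theme}.")
    outlines = [
        generate(f"Outline scene {i + 1} of a play with this synopsis: {synopsis}")
        for i in range(n_scenes)
    ]
    # Each scene's dialogue is conditioned only on its own outline, keeping
    # every prompt short enough for a model's limited context window.
    dialogue = [generate(f"Write the dialogue for this scene outline: {o}") for o in outlines]
    return {"synopsis": synopsis, "outlines": outlines, "dialogue": dialogue}


# Stand-in generator so the sketch runs without a real model.
def toy_generate(prompt):
    return f"<generated text for: {prompt[:40]}...>"


play = generate_play(toy_generate, "robots discovering theatre")
```

In practice each `generate` call would be a sampled continuation from a language model, with the human-in-the-loop step inserted between levels to curate outputs before they condition the next stage.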
Related papers
- Design Techniques for LLM-Powered Interactive Storytelling: A Case Study of the Dramamancer System [16.029360256664685]
Dramamancer is a system that transforms author-created story schemas into player-driven playthroughs.
This extended abstract outlines some design techniques and evaluation considerations associated with this system.
arXiv Detail & Related papers (2026-01-26T18:51:20Z)
- The Script is All You Need: An Agentic Framework for Long-Horizon Dialogue-to-Cinematic Video Generation [95.18045807704284]
We introduce an end-to-end agentic framework for dialogue-to-cinematic-video generation.
ScripterAgent is trained to translate coarse dialogue into a fine-grained, executable cinematic script.
Our framework significantly improves script faithfulness and temporal fidelity across all tested video models.
arXiv Detail & Related papers (2026-01-25T08:10:28Z)
- The Art of Storytelling: Multi-Agent Generative AI for Dynamic Multimodal Narratives [3.5001789247699535]
This paper introduces the concept of an education tool that utilizes Generative Artificial Intelligence (GenAI) to enhance storytelling for children.
The system combines GenAI-driven narrative co-creation, text-to-speech conversion, and text-to-video generation to produce an engaging experience for learners.
arXiv Detail & Related papers (2024-09-17T15:10:23Z)
- Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner [51.77263363285369]
We present an approach called Dialogue Action Tokens that adapts language model agents to plan goal-directed dialogues.
The core idea is to treat each utterance as an action, thereby converting dialogues into games where existing approaches such as reinforcement learning can be applied.
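Treating each utterance as an action can be illustrated with a toy dialogue "game": the state is the conversation history, the agent chooses one utterance per turn from a candidate set, and a reward is issued when the dialogue reaches its goal. The environment, goal word, and candidate utterances below are hypothetical, chosen only to make the framing concrete; a real system would learn the policy with reinforcement learning rather than use the trivial rule shown here.

```python
import random

class DialogueGame:
    """Toy goal-directed dialogue cast as an episodic environment."""

    def __init__(self, goal_word, max_turns=5):
        self.goal_word = goal_word
        self.max_turns = max_turns
        self.history = []  # the dialogue state is the utterance history

    def step(self, utterance):
        """Apply one utterance (action); return (reward, done)."""
        self.history.append(utterance)
        reached_goal = self.goal_word in utterance
        done = reached_goal or len(self.history) >= self.max_turns
        return (1.0 if reached_goal else 0.0), done


def greedy_agent(candidates, goal_word):
    """Trivial stand-in policy: prefer an utterance mentioning the goal."""
    for utterance in candidates:
        if goal_word in utterance:
            return utterance
    return random.choice(candidates)


candidates = ["Hello!", "How are you?", "Let's talk about the budget."]
game = DialogueGame(goal_word="budget")
done, total_reward = False, 0.0
while not done:
    action = greedy_agent(candidates, game.goal_word)
    reward, done = game.step(action)
    total_reward += reward
```

Once dialogue is framed this way, standard episodic RL machinery (policy gradients, Q-learning over utterance actions) applies directly, which is the point the abstract is making.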
arXiv Detail & Related papers (2024-06-17T18:01:32Z)
- Generative Artificial Intelligence: A Systematic Review and Applications [7.729155237285151]
This paper documents the systematic review and analysis of recent advancements and techniques in Generative AI.
The major impact that generative AI has made to date has been in language generation, with the development of large language models.
The paper ends with a discussion of Responsible AI principles, and the necessary ethical considerations for the sustainability and growth of these generative models.
arXiv Detail & Related papers (2024-05-17T18:03:59Z)
- Learning Universal Policies via Text-Guided Video Generation [179.6347119101618]
A goal of artificial intelligence is to construct an agent that can solve a wide variety of tasks.
Recent progress in text-guided image synthesis has yielded models with an impressive ability to generate complex novel images.
We investigate whether such tools can be used to construct more general-purpose agents.
arXiv Detail & Related papers (2023-01-31T21:28:13Z)
- Channel-aware Decoupling Network for Multi-turn Dialogue Comprehension [81.47133615169203]
We propose compositional learning for holistic interaction across utterances beyond the sequential contextualization from PrLMs.
We employ domain-adaptive training strategies to help the model adapt to the dialogue domains.
Experimental results show that our method substantially boosts the strong PrLM baselines in four public benchmark datasets.
arXiv Detail & Related papers (2023-01-10T13:18:25Z)
- Structured Like a Language Model: Analysing AI as an Automated Subject [0.0]
We argue that the intentional, fictional projection of subjectivity onto large language models can yield an alternate frame through which AI behaviour can be analysed.
We trace a brief history of language models, culminating in the releases of systems that realise state-of-the-art natural language processing performance.
We conclude that critical media methods and psychoanalytic theory together offer a productive frame for grasping the powerful new capacities of AI-driven language systems.
arXiv Detail & Related papers (2022-12-08T21:58:43Z)
- Automated Audio Captioning: an Overview of Recent Progress and New Challenges [56.98522404673527]
Automated audio captioning is a cross-modal translation task that aims to generate natural language descriptions for given audio clips.
We present a comprehensive review of the published contributions in automated audio captioning, from a variety of existing approaches to evaluation metrics and datasets.
arXiv Detail & Related papers (2022-05-12T08:36:35Z)
- A Preliminary Study for Literary Rhyme Generation based on Neuronal Representation, Semantics and Shallow Parsing [1.7188280334580195]
We introduce a model for the generation of literary rhymes in Spanish, combining structures of language and neural network models.
Results obtained with a manual evaluation of the texts generated by our algorithm are encouraging.
arXiv Detail & Related papers (2021-12-25T14:40:09Z)
- Improving Adversarial Text Generation by Modeling the Distant Future [155.83051741029732]
We consider a text planning scheme and present a model-based imitation-learning approach to alleviate issues in adversarial text generation.
We propose a novel guider network to focus on the generative process over a longer horizon, which can assist next-word prediction and provide intermediate rewards for generator optimization.
arXiv Detail & Related papers (2020-05-04T05:45:13Z)
- QURIOUS: Question Generation Pretraining for Text Generation [13.595014409069584]
We propose question generation as a pretraining method, which better aligns with the text generation objectives.
Our text generation models pretrained with this method are better at understanding the essence of the input and are better language models for the target task.
arXiv Detail & Related papers (2020-04-23T08:41:52Z)
- PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation [92.7366819044397]
Self-supervised pre-training has emerged as a powerful technique for natural language understanding and generation.
This work presents PALM with a novel scheme that jointly pre-trains an autoencoding and autoregressive language model on a large unlabeled corpus.
An extensive set of experiments show that PALM achieves new state-of-the-art results on a variety of language generation benchmarks.
arXiv Detail & Related papers (2020-04-14T06:25:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.