A Combined Encoder and Transformer Approach for Coherent and High-Quality Text Generation
- URL: http://arxiv.org/abs/2411.12157v1
- Date: Tue, 19 Nov 2024 01:41:56 GMT
- Title: A Combined Encoder and Transformer Approach for Coherent and High-Quality Text Generation
- Authors: Jiajing Chen, Shuo Wang, Zhen Qi, Zhenhong Zhang, Chihang Wang, Hongye Zheng
- Abstract summary: This research introduces a novel text generation model that combines BERT's semantic interpretation strengths with GPT-4's generative capabilities.
The model enhances semantic depth and maintains smooth, human-like text flow, overcoming limitations seen in prior models.
- Score: 5.930799903736776
- License:
- Abstract: This research introduces a novel text generation model that combines BERT's semantic interpretation strengths with GPT-4's generative capabilities, establishing a high standard in generating coherent, contextually accurate language. Through the combined architecture, the model enhances semantic depth and maintains smooth, human-like text flow, overcoming limitations seen in prior models. Experimental benchmarks reveal that BERT-GPT-4 surpasses traditional models, including GPT-3, T5, BART, Transformer-XL, and CTRL, in key metrics like Perplexity and BLEU, showcasing its superior natural language generation performance. By fully utilizing contextual information, this hybrid model generates text that is not only logically coherent but also aligns closely with human language patterns, providing an advanced solution for text generation tasks. This research highlights the potential of integrating semantic understanding with advanced generative models, contributing new insights for NLP, and setting a foundation for broader applications of large-scale generative architectures in areas such as automated writing, question-answer systems, and adaptive conversational agents.
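The paper does not publish its implementation here; a minimal sketch of one way to pair a BERT encoder with a GPT-style decoder, and to score outputs with the Perplexity and BLEU metrics the abstract cites, is shown below. GPT-2 stands in for GPT-4 (whose weights are not publicly released), and the model names, prompt, and reference strings are illustrative assumptions, not the authors' setup.

```python
# Hypothetical sketch: BERT encoder tied to a GPT-2 decoder via cross-attention,
# scored with Perplexity and BLEU. Not the authors' implementation.
import math
import torch
from transformers import BertTokenizerFast, GPT2TokenizerFast, EncoderDecoderModel
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")
enc_tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
dec_tok = GPT2TokenizerFast.from_pretrained("gpt2")
dec_tok.pad_token = dec_tok.eos_token
model.config.decoder_start_token_id = dec_tok.bos_token_id
model.config.pad_token_id = dec_tok.pad_token_id

prompt = "The committee approved the proposal because"          # assumed example input
reference = "The committee approved the proposal because it addressed every budget concern."

# Perplexity of the reference continuation under the model (exp of the LM loss).
enc = enc_tok(prompt, return_tensors="pt")
labels = dec_tok(reference, return_tensors="pt").input_ids
with torch.no_grad():
    loss = model(input_ids=enc.input_ids,
                 attention_mask=enc.attention_mask,
                 labels=labels).loss
perplexity = math.exp(loss.item())

# BLEU between a generated continuation and the reference.
generated_ids = model.generate(enc.input_ids, max_new_tokens=30)
hypothesis = dec_tok.decode(generated_ids[0], skip_special_tokens=True)
bleu = sentence_bleu([reference.split()], hypothesis.split(),
                     smoothing_function=SmoothingFunction().method1)
print(f"perplexity={perplexity:.2f} bleu={bleu:.3f}")
```

Note that the cross-attention weights linking the two pre-trained models are newly initialized here, so the sketch only illustrates the wiring and the metrics; meaningful scores would require fine-tuning on a text generation corpus.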
Related papers
- Automatic and Human-AI Interactive Text Generation [27.05024520190722]
This tutorial aims to provide an overview of the state-of-the-art natural language generation research.
Text-to-text generation tasks are more constrained in terms of semantic consistency and targeted language styles.
arXiv Detail & Related papers (2023-10-05T20:26:15Z) - RenAIssance: A Survey into AI Text-to-Image Generation in the Era of Large Model [93.8067369210696]
Text-to-image generation (TTI) refers to the usage of models that could process text input and generate high fidelity images based on text descriptions.
Diffusion models are one prominent type of generative model used for the generation of images through the systematic introduction of noises with repeating steps.
In the era of large models, scaling up model size and the integration with large language models have further improved the performance of TTI models.
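As a rough illustration of the "systematic introduction of noise with repeating steps" mentioned above, the forward (noising) process of a DDPM-style diffusion model can be written in a few lines. The schedule values below are illustrative assumptions, not taken from any particular text-to-image system.

```python
# Illustrative DDPM-style forward noising: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * noise.
# Schedule and tensor shapes are assumptions for demonstration only.
import torch

T = 1000                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, 0)  # cumulative product of (1 - beta_t)

def noisy_sample(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Add t steps of Gaussian noise to a clean image tensor x0 in a single shot."""
    noise = torch.randn_like(x0)
    return alphas_bar[t].sqrt() * x0 + (1.0 - alphas_bar[t]).sqrt() * noise

x0 = torch.rand(3, 64, 64)        # a toy "image"
x_mid = noisy_sample(x0, 500)     # heavily noised
x_end = noisy_sample(x0, T - 1)   # nearly pure Gaussian noise
```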
arXiv Detail & Related papers (2023-09-02T03:27:20Z) - PatternGPT: A Pattern-Driven Framework for Large Language Model Text Generation [1.7259824817932292]
This paper proposes PatternGPT, a pattern-driven text generation framework for Large Language Models.
The framework utilizes the extraction capability of Large Language Models to generate rich and diversified structured and formalized patterns.
External knowledge such as judgment criteria and optimization algorithms is used to search for high-quality patterns.
arXiv Detail & Related papers (2023-07-02T04:32:41Z) - An Overview on Controllable Text Generation via Variational Auto-Encoders [15.97186478109836]
Recent advances in neural-based generative modeling have reignited the hopes of having computer systems capable of conversing with humans.
Latent variable models (LVM) such as variational auto-encoders (VAEs) are designed to characterize the distributional pattern of textual data.
This overview gives an introduction to existing generation schemes, problems associated with text variational auto-encoders, and a review of several applications about the controllable generation.
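The overview above concerns VAEs for text in general; a minimal, generic text-VAE encoder step (mean and log-variance heads plus the reparameterization trick) is sketched below, purely to illustrate the latent-variable machinery it discusses. Dimensions and module layout are assumptions, not any surveyed model.

```python
# Generic text-VAE latent step: encode, reparameterize, and compute the KL term.
# Dimensions and module layout are illustrative assumptions.
import torch
import torch.nn as nn

class TextVAEEncoder(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=128, hidden=256, latent=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)       # posterior mean
        self.to_logvar = nn.Linear(hidden, latent)   # posterior log-variance

    def forward(self, token_ids):
        _, h = self.rnn(self.embed(token_ids))       # final hidden state summarizes the text
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return z, kl

encoder = TextVAEEncoder()
z, kl = encoder(torch.randint(0, 10_000, (4, 20)))  # a batch of 4 toy "sentences"
```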
arXiv Detail & Related papers (2022-11-15T07:36:11Z) - Informative Text Generation from Knowledge Triples [56.939571343797304]
We propose a novel memory augmented generator that employs a memory network to memorize the useful knowledge learned during the training.
We derive a dataset from WebNLG for our new setting and conduct extensive experiments to investigate the effectiveness of our model.
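The summary does not spell out the memory mechanism; a common formulation is attention over a learned memory matrix whose read-out is fused with the decoder state. The sketch below uses assumed slot counts and dimensions and is not the authors' architecture.

```python
# Illustrative memory read: attend over a learned memory matrix and fuse the
# read-out with the decoder state. Shapes and layout are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryReader(nn.Module):
    def __init__(self, slots=64, dim=256):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(slots, dim))  # knowledge accumulated during training
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, decoder_state):                         # (batch, dim)
        scores = decoder_state @ self.memory.t()              # (batch, slots)
        read = F.softmax(scores, dim=-1) @ self.memory        # weighted sum over memory slots
        return self.fuse(torch.cat([decoder_state, read], dim=-1))

reader = MemoryReader()
fused = reader(torch.randn(8, 256))   # feed `fused` to the generator's output layer
```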
arXiv Detail & Related papers (2022-09-26T14:35:57Z) - How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN [63.79300884115027]
Current language models can generate high-quality text.
Are they simply copying text they have seen before, or have they learned generalizable linguistic abstractions?
We introduce RAVEN, a suite of analyses for assessing the novelty of generated text.
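RAVEN's analyses include comparing generated n-grams against the training data; a toy version of that novelty check (not the released RAVEN tooling) is sketched below, with made-up corpus strings.

```python
# Toy n-gram novelty check in the spirit of RAVEN: what fraction of generated
# n-grams never appear in the training corpus? Not the released RAVEN code.
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novelty(generated: str, training_corpus: str, n: int = 4) -> float:
    gen = ngrams(generated.split(), n)
    seen = ngrams(training_corpus.split(), n)
    return len(gen - seen) / max(len(gen), 1)   # 1.0 = fully novel, 0.0 = fully copied

corpus = "the cat sat on the mat and the dog slept on the rug"   # assumed toy corpus
print(novelty("the cat slept on the rug quietly", corpus, n=3))
```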
arXiv Detail & Related papers (2021-11-18T04:07:09Z) - Pre-training Language Model Incorporating Domain-specific Heterogeneous Knowledge into A Unified Representation [49.89831914386982]
We propose a unified pre-trained language model (PLM) for all forms of text, including unstructured text, semi-structured text, and well-structured text.
Our approach outperforms plain-text pre-training while using only 1/4 of the data.
arXiv Detail & Related papers (2021-09-02T16:05:24Z) - OptAGAN: Entropy-based finetuning on text VAE-GAN [1.941730292017383]
Recently, Optimus, a variational autoencoder (VAE) that combines two pre-trained models, BERT and GPT-2, was released.
It has been shown to produce novel yet very human-looking text.
arXiv Detail & Related papers (2021-09-01T08:23:19Z) - Knowledge-based Review Generation by Coherence Enhanced Text Planning [45.473253542837995]
We propose a novel Coherence Enhanced Text Planning model (CETP) based on knowledge graphs (KGs) to improve both global and local coherence for review generation.
For global coherence, we design a hierarchical self-attentive architecture with both subgraph- and node-level attention to enhance the correlations between subgraphs.
Experiments on three datasets confirm the effectiveness of our model on improving the content coherence of generated texts.
arXiv Detail & Related papers (2021-05-09T02:12:05Z) - Controllable Text Generation with Focused Variation [71.07811310799664]
Focused-Variation Network (FVN) is a novel model to control language generation.
FVN learns disjoint discrete latent spaces for each attribute inside codebooks, which allows for both controllability and diversity.
We evaluate FVN on two text generation datasets with annotated content and style, and show state-of-the-art performance as assessed by automatic and human evaluations.
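The codebook idea in FVN resembles per-attribute vector quantization: each attribute value indexes its own discrete latent table, keeping the spaces disjoint. A schematic version, with made-up attribute names and sizes rather than the paper's configuration, is sketched below.

```python
# Schematic per-attribute codebooks in the spirit of FVN: each attribute
# (e.g. content, style) has its own discrete latent table. Names/sizes are assumptions.
import torch
import torch.nn as nn

class AttributeCodebooks(nn.Module):
    def __init__(self, dim=128, n_content=100, n_style=10):
        super().__init__()
        self.codebooks = nn.ModuleDict({
            "content": nn.Embedding(n_content, dim),  # disjoint latent space per attribute
            "style": nn.Embedding(n_style, dim),
        })

    def forward(self, content_id, style_id):
        # Concatenate one code per attribute; a decoder would condition on the result.
        return torch.cat([self.codebooks["content"](content_id),
                          self.codebooks["style"](style_id)], dim=-1)

codes = AttributeCodebooks()(torch.tensor([3]), torch.tensor([1]))  # shape (1, 256)
```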
arXiv Detail & Related papers (2020-09-25T06:31:06Z) - Robust Conversational AI with Grounded Text Generation [77.56950706340767]
GTG is a hybrid model which uses a large-scale Transformer neural network as its backbone.
It generates responses grounded in dialog belief state and real-world knowledge for task completion.
arXiv Detail & Related papers (2020-09-07T23:49:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.