Neural Abstractive Text Summarizer for Telugu Language
- URL: http://arxiv.org/abs/2101.07120v1
- Date: Mon, 18 Jan 2021 15:22:50 GMT
- Title: Neural Abstractive Text Summarizer for Telugu Language
- Authors: Mohan Bharath B, Aravindh Gowtham B, Akhil M
- Abstract summary: The proposed architecture is based on encoder-decoder sequential models with an attention mechanism.
We applied this model to a manually created dataset to generate a one-sentence summary of the source text.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Abstractive Text Summarization is the process of constructing semantically
relevant shorter sentences that capture the essence of the overall meaning of
the source text. It is difficult and very time-consuming for humans to
manually summarize large documents of text. Most work in abstractive text
summarization has been done for English, and almost no significant work has
been reported for Telugu. In this paper we propose a deep learning model for
abstractive text summarization of Telugu text. The proposed architecture is
based on encoder-decoder sequential models with an attention mechanism. We
applied this model to a manually created dataset to generate a one-sentence
summary of the source text and obtained good results, measured qualitatively.
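The paper does not include an implementation, but the architecture it describes is a standard one. Below is a minimal sketch of an encoder-decoder with Bahdanau-style additive attention in PyTorch; the GRU cells, layer sizes, and attention variant are assumptions, since the paper does not publish its exact configuration.

```python
# Hedged sketch of an encoder-decoder with additive attention (assumed
# configuration; the paper's exact hyperparameters are not published).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, src):                       # src: (batch, src_len)
        outputs, hidden = self.rnn(self.embed(src))
        return outputs, hidden                    # outputs: (batch, src_len, hidden)

class BahdanauAttention(nn.Module):
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.W = nn.Linear(hidden_dim * 2, hidden_dim)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, dec_hidden, enc_outputs):   # dec_hidden: (batch, hidden)
        src_len = enc_outputs.size(1)
        query = dec_hidden.unsqueeze(1).repeat(1, src_len, 1)
        energy = self.v(torch.tanh(self.W(torch.cat([query, enc_outputs], dim=2))))
        weights = torch.softmax(energy.squeeze(2), dim=1)        # (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), enc_outputs)   # (batch, 1, hidden)
        return context.squeeze(1), weights

class Decoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attention = BahdanauAttention(hidden_dim)
        self.rnn = nn.GRU(embed_dim + hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token, hidden, enc_outputs):  # token: (batch, 1)
        context, _ = self.attention(hidden[-1], enc_outputs)
        rnn_in = torch.cat([self.embed(token), context.unsqueeze(1)], dim=2)
        output, hidden = self.rnn(rnn_in, hidden)
        return self.out(output.squeeze(1)), hidden  # logits over target vocab
```

At inference time the decoder is run one step at a time, feeding back its own argmax token until an end-of-sequence symbol is produced.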
Related papers
- Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback [57.816210168909286]
We leverage recent progress on textual entailment models to address the problem of factual inconsistency in abstractive summarization systems.
We use reinforcement learning with reference-free, textual entailment rewards to optimize for factual consistency.
Our results, according to both automatic metrics and human evaluation, show that our method considerably improves the faithfulness, salience, and conciseness of the generated summaries.
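As a rough illustration of a reference-free entailment reward, the sketch below scores a summary by the probability that the source entails it. It uses the public roberta-large-mnli checkpoint via Hugging Face transformers as an assumed stand-in for the paper's entailment model; the paper's actual reward design and RL setup are not reproduced here.

```python
# Hedged sketch: reference-free textual-entailment reward for RL fine-tuning.
# `roberta-large-mnli` is an assumed stand-in for the paper's NLI model.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def entailment_reward(source: str, summary: str) -> float:
    """Return P(source entails summary), usable as a scalar RL reward."""
    scores = nli({"text": source, "text_pair": summary}, top_k=None)
    by_label = {s["label"]: s["score"] for s in scores}
    return by_label.get("ENTAILMENT", 0.0)
```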
arXiv Detail & Related papers (2023-05-31T21:04:04Z)
- Abstractive Summary Generation for the Urdu Language [1.9594639581421422]
We employ a transformer-based model that utilizes self-attention mechanisms to encode the input text and generate a summary.
Our experiments show that our model can produce summaries that are grammatically correct and semantically meaningful.
arXiv Detail & Related papers (2023-05-25T15:55:42Z)
- Uzbek text summarization based on TF-IDF [0.0]
This article presents an experiment on summarization task for Uzbek language.
The methodology was based on text abstracting using the TF-IDF algorithm.
We summarize the given text by applying the n-gram method to important parts of the whole text.
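For concreteness, here is a rough sketch of TF-IDF-based extractive summarization in this spirit, using scikit-learn; the unigram-plus-bigram range and the mean-weight scoring rule are assumptions, not the paper's exact method.

```python
# Hedged sketch: rank sentences by average TF-IDF weight and keep the top n.
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_summary(sentences, n=2):
    """Return the top-n sentences ranked by mean TF-IDF weight."""
    vec = TfidfVectorizer(ngram_range=(1, 2))      # unigrams and bigrams
    tfidf = vec.fit_transform(sentences)           # (n_sentences, n_terms)
    scores = tfidf.mean(axis=1).A1                 # average weight per sentence
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:n]
    return [sentences[i] for i in sorted(top)]     # keep original order
```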
arXiv Detail & Related papers (2023-03-01T12:39:46Z)
- Generating Multiple-Length Summaries via Reinforcement Learning for Unsupervised Sentence Summarization [44.835811239393244]
Sentence summarization shortens given texts while maintaining core contents of the texts.
Unsupervised approaches have been studied to summarize texts without human-written summaries.
We devise an abstractive model based on reinforcement learning without ground-truth summaries.
arXiv Detail & Related papers (2022-12-21T08:34:28Z)
- A Survey on Neural Abstractive Summarization Methods and Factual Consistency of Summarization [18.763290930749235]
Summarization is the process of computationally shortening a set of textual data to create a subset (a summary) that represents the most important information in the original content.
Existing summarization methods can be roughly divided into two types: extractive and abstractive.
An extractive summarizer explicitly selects text snippets from the source document, while an abstractive summarizer generates novel text snippets to convey the most salient concepts prevalent in the source.
arXiv Detail & Related papers (2022-04-20T14:56:36Z)
- Topic Modeling Based Extractive Text Summarization [0.0]
We propose a novel method to summarize a text document by clustering its contents based on latent topics.
We utilize the lesser-used and challenging WikiHow dataset in our approach to text summarization, illustrated by the sketch below.
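A hedged sketch of the general idea: cluster sentences by latent topics with LDA and keep the most representative sentence per topic. The topic count, vectorizer, and selection rule here are assumptions rather than the paper's exact pipeline.

```python
# Hedged sketch of topic-model-based extractive summarization with LDA.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def topic_summary(sentences, n_topics=3):
    """Pick the sentence with the highest weight for each latent topic."""
    counts = CountVectorizer(stop_words="english").fit_transform(sentences)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topics = lda.fit_transform(counts)         # (n_sentences, n_topics)
    picks = {doc_topics[:, t].argmax() for t in range(n_topics)}
    return [sentences[i] for i in sorted(picks)]   # keep original order
```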
arXiv Detail & Related papers (2021-06-29T12:28:19Z)
- Abstractive Summarization of Spoken and Written Instructions with BERT [66.14755043607776]
We present the first application of the BERTSum model to conversational language.
We generate abstractive summaries of narrated instructional videos across a wide variety of topics.
We envision this being integrated as a feature in intelligent virtual assistants, enabling them to summarize both written and spoken instructional content upon request.
arXiv Detail & Related papers (2020-08-21T20:59:34Z)
- From Standard Summarization to New Tasks and Beyond: Summarization with Manifold Information [77.89755281215079]
Text summarization is the research area aiming at creating a short and condensed version of the original document.
In real-world applications, most of the data is not in a plain text format.
This paper surveys these new summarization tasks and approaches as they arise in real-world applications.
arXiv Detail & Related papers (2020-05-10T14:59:36Z)
- Few-Shot Learning for Opinion Summarization [117.70510762845338]
Opinion summarization is the automatic creation of text reflecting subjective information expressed in multiple documents.
In this work, we show that even a handful of summaries is sufficient to bootstrap generation of the summary text.
Our approach substantially outperforms previous extractive and abstractive methods in automatic and human evaluation.
arXiv Detail & Related papers (2020-04-30T15:37:38Z)
- Extractive Summarization as Text Matching [123.09816729675838]
This paper creates a paradigm shift with regard to the way we build neural extractive summarization systems.
We formulate the extractive summarization task as a semantic text matching problem.
We have driven the state-of-the-art extractive result on CNN/DailyMail to a new level (44.41 in ROUGE-1).
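A much-simplified sketch of the matching formulation: enumerate candidate extracts and keep the one whose embedding is closest to the document's. The sentence-transformers model used here is an assumed stand-in for the paper's Siamese-BERT matcher, and exhaustive enumeration is only workable for short inputs.

```python
# Hedged sketch: extractive summarization as semantic text matching.
from itertools import combinations
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in encoder

def best_extract(document, sentences, size=3):
    """Return the size-sentence extract most similar to the full document."""
    doc_emb = model.encode(document, convert_to_tensor=True)
    candidates = [" ".join(c) for c in combinations(sentences, size)]
    cand_emb = model.encode(candidates, convert_to_tensor=True)
    scores = util.cos_sim(doc_emb, cand_emb)[0]   # similarity to the document
    return candidates[int(scores.argmax())]
```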
arXiv Detail & Related papers (2020-04-19T08:27:57Z)
- Amharic Abstractive Text Summarization [0.6703429330486277]
Text Summarization is the task of condensing long text into just a handful of sentences.
In this work we discuss one of these novel approaches, which combines curriculum learning with deep learning; this technique is called Scheduled Sampling.
We apply this work to one of the most widely spoken African languages which is the Amharic Language, as we try to enrich the African NLP community with top-notch Deep Learning architectures.
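Scheduled sampling mixes teacher forcing with the model's own predictions during training. Below is a minimal sketch of one decoding pass, reusing the decoder interface from the attention sketch earlier on this page; the annealing schedule for the teacher-forcing probability p is an assumption.

```python
# Hedged sketch of scheduled sampling: with probability p feed the gold
# token (teacher forcing), otherwise feed the model's previous prediction.
import random
import torch

def scheduled_sampling_pass(decoder, hidden, enc_outputs, targets, p):
    """targets: (batch, tgt_len) gold token ids; p: teacher-forcing prob."""
    token = targets[:, :1]                        # start-of-sequence tokens
    all_logits = []
    for t in range(1, targets.size(1)):
        logits, hidden = decoder(token, hidden, enc_outputs)
        all_logits.append(logits)
        if random.random() < p:                   # curriculum: use gold token
            token = targets[:, t:t + 1]
        else:                                     # use the model's own output
            token = logits.argmax(dim=1, keepdim=True)
    return torch.stack(all_logits, dim=1)         # (batch, tgt_len-1, vocab)
```

In training, p typically starts near 1 (pure teacher forcing) and is annealed downward as optimization proceeds.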
arXiv Detail & Related papers (2020-03-30T18:15:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.