A Neural-Symbolic Approach Towards Identifying Grammatically Correct
Sentences
- URL: http://arxiv.org/abs/2307.08036v1
- Date: Sun, 16 Jul 2023 13:21:44 GMT
- Title: A Neural-Symbolic Approach Towards Identifying Grammatically Correct
Sentences
- Authors: Nicos Isaak
- Abstract summary: It is commonly accepted that it is crucial to have access to well-written text from valid sources to tackle challenges like text summarization, question-answering, machine translation, or even pronoun resolution.
We present a simplified way to validate English sentences through a novel neural-symbolic approach.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Textual content around us is growing on a daily basis. Numerous articles are
being written as we speak in online newspapers, blogs, and social media.
Similarly, recent advances in the AI field, from language models to classic
symbolic AI approaches, utilize all of the above to improve their learned
representations and tackle NLP challenges with human-like accuracy. It is
commonly accepted that access to well-written text from valid sources is
crucial for tackling challenges like text summarization, question answering,
machine translation, or even pronoun resolution. For instance, to summarize
well, one needs to select the most important sentences and concatenate them to
form the summary. However, what happens if we do not have access to
well-formed English sentences, or only to invalid ones? Despite the importance
of having access to well-written sentences, finding ways to validate them is
still an open area of research. To address this problem, we present a
simplified way to validate English sentences through a novel neural-symbolic
approach. Lately, neural-symbolic approaches have attracted increasing
interest for tackling various NLP challenges, as they have demonstrated their
effectiveness as a central component in various AI systems. By combining
Classic with Modern AI, blending grammatical and syntactical rules with
language models, we effectively tackle the Corpus of Linguistic Acceptability
(CoLA), the task of determining whether a sequence of words is a grammatical
English sentence. Among other findings, our experiments show that blending
symbolic and non-symbolic systems helps the former provide insights into the
latter's accuracy.
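The abstract stays at a high level, so the sketch below only illustrates the general idea of blending symbolic grammar rules with a neural language model for CoLA-style acceptability judgments; it is not the authors' implementation. The spaCy rule set, the CoLA-fine-tuned checkpoint name, the label convention, and the 0.5 threshold are all assumptions made for this example.

```python
# A minimal neural-symbolic sketch (illustration only, not the paper's actual system).
# Symbolic side: two hand-written grammar checks over spaCy part-of-speech tags.
# Neural side: a sentence-acceptability classifier fine-tuned on CoLA; the model
# name below is an assumed, publicly available stand-in, not the authors' model.
import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")
acceptability = pipeline(
    "text-classification",
    model="textattack/bert-base-uncased-CoLA",  # assumption: any CoLA-tuned checkpoint works here
)


def symbolic_checks(sentence: str) -> bool:
    """Tiny rule set: the sentence must contain a verb and end with punctuation."""
    doc = nlp(sentence)
    has_verb = any(tok.pos_ in ("VERB", "AUX") for tok in doc)
    ends_ok = sentence.strip().endswith((".", "!", "?"))
    return has_verb and ends_ok


def neural_score(sentence: str) -> float:
    """Probability the classifier assigns to the 'acceptable' class.

    Assumes the GLUE/CoLA convention that label 1 (often shown as LABEL_1)
    means 'acceptable'; flip the check if your checkpoint differs.
    """
    result = acceptability(sentence)[0]
    prob = result["score"]
    return prob if result["label"].endswith("1") else 1.0 - prob


def is_grammatical(sentence: str, threshold: float = 0.5) -> bool:
    """Blend: symbolic rules act as a hard filter, the neural score as a soft vote."""
    return symbolic_checks(sentence) and neural_score(sentence) >= threshold


if __name__ == "__main__":
    for s in ["The cat sat on the mat.", "Cat the mat on sat the."]:
        print(f"{s!r} -> grammatical: {is_grammatical(s)}")
```

In this sketch the rules act as a hard filter and the classifier as a soft scorer, which is only one of many possible ways to combine the two components.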
Related papers
- Pixel Sentence Representation Learning [67.4775296225521]
In this work, we conceptualize the learning of sentence-level textual semantics as a visual representation learning process.
We employ visually-grounded text perturbation methods like typos and word order shuffling, resonating with human cognitive patterns, and enabling perturbation to be perceived as continuous.
Our approach is further bolstered by large-scale unsupervised topical alignment training and natural language inference supervision.
arXiv Detail & Related papers (2024-02-13T02:46:45Z) - A Comprehensive Survey of Sentence Representations: From the BERT Epoch
to the ChatGPT Era and Beyond [45.455178613559006]
Sentence representations are a critical component in NLP applications such as retrieval, question answering, and text classification.
They capture the meaning of a sentence, enabling machines to understand and reason over human language.
To date, however, there has been no literature review of sentence representations.
arXiv Detail & Related papers (2023-05-22T02:31:15Z) - Lexical Complexity Prediction: An Overview [13.224233182417636]
The occurrence of unknown words in texts significantly hinders reading comprehension.
Computational modelling has been applied to identify complex words in texts and substitute them with simpler alternatives.
We present an overview of computational approaches to lexical complexity prediction focusing on the work carried out on English data.
arXiv Detail & Related papers (2023-03-08T19:35:08Z) - Revisiting the Roles of "Text" in Text Games [102.22750109468652]
This paper investigates the roles of text in the face of different reinforcement learning challenges.
We propose a simple scheme to extract relevant contextual information into an approximate state hash.
Such a lightweight plug-in achieves competitive performance with state-of-the-art text agents.
arXiv Detail & Related papers (2022-10-15T21:52:39Z) - A New Sentence Ordering Method Using BERT Pretrained Model [2.1793134762413433]
We propose a method for sentence ordering that requires neither a training phase nor, consequently, a large training corpus.
Our proposed method outperformed other baselines on ROCStories, a corpus of 5-sentence human-made stories.
Other advantages of the method include its interpretability and its lack of reliance on linguistic knowledge.
arXiv Detail & Related papers (2021-08-26T18:47:15Z) - Narrative Incoherence Detection [76.43894977558811]
We propose the task of narrative incoherence detection as a new arena for inter-sentential semantic understanding.
Given a multi-sentence narrative, the task is to decide whether there exist any semantic discrepancies in the narrative flow.
arXiv Detail & Related papers (2020-12-21T07:18:08Z) - Learning Adaptive Language Interfaces through Decomposition [89.21937539950966]
We introduce a neural semantic parsing system that learns new high-level abstractions through decomposition.
Users interactively teach the system by breaking down high-level utterances describing novel behavior into low-level steps.
arXiv Detail & Related papers (2020-10-11T08:27:07Z) - Interactive Fiction Game Playing as Multi-Paragraph Reading
Comprehension with Reinforcement Learning [94.50608198582636]
Interactive Fiction (IF) games with real human-written natural language texts provide a new natural evaluation for language understanding techniques.
We take a novel perspective on IF game solving and re-formulate it as Multi-Passage Reading Comprehension (MPRC) tasks.
arXiv Detail & Related papers (2020-10-05T23:09:20Z) - Abstractive Summarization of Spoken and Written Instructions with BERT [66.14755043607776]
We present the first application of the BERTSum model to conversational language.
We generate abstractive summaries of narrated instructional videos across a wide variety of topics.
We envision this integrated as a feature in intelligent virtual assistants, enabling them to summarize both written and spoken instructional content upon request.
arXiv Detail & Related papers (2020-08-21T20:59:34Z)