SLFNet: Generating Semantic Logic Forms from Natural Language Using Semantic Probability Graphs
- URL: http://arxiv.org/abs/2403.19936v1
- Date: Fri, 29 Mar 2024 02:42:39 GMT
- Title: SLFNet: Generating Semantic Logic Forms from Natural Language Using Semantic Probability Graphs
- Authors: Hao Wu, Fan Xu,
- Abstract summary: Building natural language interfaces typically uses a semantic parser to parse the user's natural language and convert it into structured Semantic Logic Forms (SLFs).
We propose a novel neural network, SLFNet, which incorporates dependency syntactic information as prior knowledge and can capture the long-range interactions between contextual information and words.
Experiments show that SLFNet achieves state-of-the-art performance on the ChineseQCI-TS and Okapi datasets, and competitive performance on the ATIS dataset.
- Score: 6.689539418123863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Building natural language interfaces typically uses a semantic parser to parse the user's natural language and convert it into structured \textbf{S}emantic \textbf{L}ogic \textbf{F}orms (SLFs). The mainstream approach is to adopt a sequence-to-sequence framework, which requires that natural language commands and SLFs be represented serially. Since a single natural language command may have multiple SLFs, or multiple natural language commands may share the same SLF, training a sequence-to-sequence model is sensitive to the choice among them, a phenomenon known as "order matters". To solve this problem, we propose a novel neural network, SLFNet, which first incorporates dependency syntactic information as prior knowledge and can capture the long-range interactions between contextual information and words. Second, we construct semantic probability graphs to obtain local dependencies between predictor variables. Finally, we propose the Multi-Head SLF Attention mechanism to synthesize SLFs from natural language commands based on Sequence-to-Slots. Experiments show that SLFNet achieves state-of-the-art performance on the ChineseQCI-TS and Okapi datasets, and competitive performance on the ATIS dataset.
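The abstract describes the architecture only at a high level. As a rough, speculative sketch (not the authors' code), a sequence-to-slots decoder with multi-head slot attention might look like the following: learned slot queries attend over the encoded natural language command, and each slot is classified independently, so no serialization order is imposed on the output. All module names, dimensions, and the slot/vocabulary sizes below are illustrative assumptions.

```python
# Minimal sequence-to-slots sketch (assumed design, not SLFNet's released code):
# learned slot queries attend over encoder states; each slot is filled
# independently, sidestepping the "order matters" problem of seq2seq decoding.
import torch
import torch.nn as nn

class SlotAttentionDecoder(nn.Module):
    def __init__(self, hidden_dim=256, num_slots=8, num_heads=4, vocab_size=500):
        super().__init__()
        self.slot_queries = nn.Parameter(torch.randn(num_slots, hidden_dim))
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, vocab_size)  # one label per slot

    def forward(self, encoder_states):                # (batch, seq_len, hidden_dim)
        batch = encoder_states.size(0)
        queries = self.slot_queries.unsqueeze(0).expand(batch, -1, -1)
        slots, _ = self.attn(queries, encoder_states, encoder_states)
        return self.classifier(slots)                 # (batch, num_slots, vocab_size)

# Usage: logits over slot values for a batch of 2 encoded commands.
logits = SlotAttentionDecoder()(torch.randn(2, 12, 256))
print(logits.shape)  # torch.Size([2, 8, 500])
```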
Related papers
- Compositional Program Generation for Few-Shot Systematic Generalization [59.57656559816271]
This study presents a neuro-symbolic architecture called the Compositional Program Generator (CPG).
CPG has three key features: modularity, composition, and abstraction, in the form of grammar rules.
It achieves perfect generalization on both the SCAN and COGS benchmarks using just 14 examples for SCAN and 22 examples for COGS.
arXiv Detail & Related papers (2023-09-28T14:33:20Z) - Grammar Prompting for Domain-Specific Language Generation with Large Language Models [40.831045850285776]
Large language models (LLMs) can learn to perform a wide range of natural language tasks from just a handful of in-context examples.
We propose grammar prompting, a simple approach to enable LLMs to use external knowledge and domain-specific constraints.
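As a simplified illustration of the idea (not the paper's implementation), a grammar prompt can carry a BNF grammar for the target DSL plus a few in-context examples; the toy grammar, example pair, and prompt template below are illustrative placeholders.

```python
# Sketch of grammar prompting (assumed, simplified): the prompt embeds a BNF
# grammar of the target DSL and a few input/output demonstrations, steering
# the LLM toward grammatical programs.
BNF_GRAMMAR = """
query     ::= "SELECT" field "WHERE" condition
field     ::= "city" | "date" | "airline"
condition ::= field "=" value
value     ::= STRING
"""

EXAMPLES = [
    ("flights on delta", 'SELECT city WHERE airline = "delta"'),
]

def build_prompt(utterance: str) -> str:
    shots = "\n".join(f"Input: {nl}\nOutput: {dsl}" for nl, dsl in EXAMPLES)
    return (
        f"Generate a program that conforms to this grammar:\n{BNF_GRAMMAR}\n"
        f"{shots}\nInput: {utterance}\nOutput:"
    )

print(build_prompt("flights from boston to denver"))
```

The resulting prompt would then be sent to an LLM; decoding can additionally be constrained by the grammar so that only valid derivations are produced.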
arXiv Detail & Related papers (2023-05-30T17:26:01Z) - Synchromesh: Reliable code generation from pre-trained language models [38.15391794443022]
We propose Synchromesh: a framework for substantially improving the reliability of pre-trained models for code generation.
First, it retrieves few-shot examples from a training bank using Target Similarity Tuning (TST), a novel method for semantic example selection.
Then, Synchromesh feeds the examples to a pre-trained language model and samples programs using Constrained Semantic Decoding (CSD), a general framework for constraining the output to a set of valid programs in the target language.
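A rough sketch of the general idea behind constrained decoding (not Synchromesh's CSD implementation): at each step, candidate next tokens are filtered by a prefix-validity check before the best-scoring one is kept, so the emitted program never leaves the target language. The scoring function, vocabulary, and whitelist "grammar" below are toy placeholders.

```python
# Greedy decoding restricted to valid program prefixes (illustrative sketch).
def constrained_greedy_decode(score_fn, is_valid_prefix, vocab, max_len=20):
    """score_fn(prefix, token) -> float; is_valid_prefix(tokens) -> bool."""
    prefix = []
    for _ in range(max_len):
        candidates = [t for t in vocab if is_valid_prefix(prefix + [t])]
        if not candidates:
            break
        best = max(candidates, key=lambda t: score_fn(prefix, t))
        prefix.append(best)
        if best == "<eos>":
            break
    return prefix

# Toy usage: valid programs come from a tiny whitelist, so decoding never leaves it.
PROGRAMS = [["SELECT", "city", "<eos>"], ["SELECT", "date", "<eos>"]]
VOCAB = ["SELECT", "city", "date", "<eos>"]
SCORES = {"SELECT": 0.2, "city": 0.9, "date": 0.4, "<eos>": 0.1}

def is_prefix_of_some_program(tokens):
    return any(p[:len(tokens)] == tokens for p in PROGRAMS)

print(constrained_greedy_decode(lambda prefix, tok: SCORES[tok],
                                is_prefix_of_some_program, VOCAB))
# -> ['SELECT', 'city', '<eos>']
```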
arXiv Detail & Related papers (2022-01-26T22:57:44Z) - Sequence-to-Sequence Learning with Latent Neural Grammars [12.624691611049341]
Sequence-to-sequence learning with neural networks has become the de facto standard for sequence prediction tasks.
While flexible and performant, these models often require large datasets for training and can fail spectacularly on benchmarks designed to test for compositional generalization.
This work explores an alternative, hierarchical approach to sequence-to-sequence learning with quasi-synchronous grammars.
arXiv Detail & Related papers (2021-09-02T17:58:08Z) - Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
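As a minimal sketch of this kind of encoder (a plain graph-convolution layer, not the paper's exact model), contextual token states can be mixed along semantic-dependency edges given by an adjacency matrix before being passed to the task head during finetuning; the dimensions and adjacency used below are illustrative assumptions.

```python
# Simple GCN layer over a semantic-dependency graph (assumed, illustrative).
import torch
import torch.nn as nn

class SemanticGCNLayer(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, token_states, adj):
        # token_states: (batch, seq_len, dim); adj: (batch, seq_len, seq_len)
        deg = adj.sum(-1, keepdim=True).clamp(min=1)        # degree normalization
        neighbors = torch.bmm(adj, token_states) / deg      # average over parse neighbors
        return torch.relu(self.linear(neighbors) + token_states)  # residual mix

states = torch.randn(2, 6, 768)                   # e.g. pretrained-encoder outputs
adj = torch.eye(6).unsqueeze(0).repeat(2, 1, 1)   # semantic-parse edges (identity here)
print(SemanticGCNLayer()(states, adj).shape)      # torch.Size([2, 6, 768])
```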
arXiv Detail & Related papers (2020-12-10T01:27:24Z) - GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and
Event Extraction [107.8262586956778]
We introduce graph convolutional networks (GCNs) with universal dependency parses to learn language-agnostic sentence representations.
GCNs struggle to model words with long-range dependencies or words that are not directly connected in the dependency tree.
We propose to utilize the self-attention mechanism to learn the dependencies between words with different syntactic distances.
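A rough illustration of that idea (not GATE's exact formulation): a learned embedding of the dependency-tree distance between each word pair can be added as a bias to the self-attention logits, so attention is informed by syntactic distance. Names and shapes below are assumptions.

```python
# Self-attention with a learned syntactic-distance bias (illustrative sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyntaxBiasedSelfAttention(nn.Module):
    def __init__(self, dim=256, max_dist=8):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.dist_bias = nn.Embedding(max_dist + 1, 1)  # one scalar bias per distance
        self.scale = dim ** -0.5

    def forward(self, x, tree_dist):
        # x: (batch, seq_len, dim); tree_dist: (batch, seq_len, seq_len), int64
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = torch.bmm(q, k.transpose(1, 2)) * self.scale
        logits = logits + self.dist_bias(tree_dist).squeeze(-1)  # add distance bias
        return torch.bmm(F.softmax(logits, dim=-1), v)

x = torch.randn(2, 5, 256)
tree_dist = torch.randint(0, 9, (2, 5, 5))              # pairwise parse distances
print(SyntaxBiasedSelfAttention()(x, tree_dist).shape)  # torch.Size([2, 5, 256])
```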
arXiv Detail & Related papers (2020-10-06T20:30:35Z) - Exploring Software Naturalness through Neural Language Models [56.1315223210742]
The Software Naturalness hypothesis argues that programming languages can be understood through the same techniques used in natural language processing.
We explore this hypothesis through the use of a pre-trained transformer-based language model to perform code analysis tasks.
arXiv Detail & Related papers (2020-06-22T21:56:14Z) - Investigation of learning abilities on linguistic features in sequence-to-sequence text-to-speech synthesis [48.151894340550385]
Neural sequence-to-sequence text-to-speech synthesis (TTS) can produce high-quality speech directly from text or simple linguistic features such as phonemes.
We investigate under what conditions the neural sequence-to-sequence TTS can work well in Japanese and English.
arXiv Detail & Related papers (2020-05-20T23:26:14Z) - Logical Natural Language Generation from Open-Domain Tables [107.04385677577862]
We propose a new task in which a model generates natural language statements that can be logically entailed by the facts.
To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset (Chen et al., 2019), which features a wide range of logical/symbolic inferences.
The new task poses challenges to the existing monotonic generation frameworks due to the mismatch between sequence order and logical order.
arXiv Detail & Related papers (2020-04-22T06:03:10Z) - Exploring Neural Models for Parsing Natural Language into First-Order Logic [10.62143644603835]
We study the capability of neural models in parsing English sentences to First-Order Logic (FOL).
We model FOL parsing as a sequence-to-sequence mapping task: a natural language sentence is encoded into an intermediate representation by an LSTM, and a decoder then sequentially generates the predicates of the corresponding FOL formula.
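A bare-bones sketch of such an encoder-decoder setup (illustrative only, not the paper's model): an LSTM encodes the sentence, and its final state seeds an LSTM decoder that emits the tokens of the FOL formula one step at a time; vocabulary sizes and dimensions are placeholders.

```python
# LSTM encoder-decoder sketch for NL-to-FOL parsing (assumed shapes).
import torch
import torch.nn as nn

class Seq2SeqFOL(nn.Module):
    def __init__(self, src_vocab=1000, tgt_vocab=200, dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, state = self.encoder(self.src_emb(src_ids))    # encode the sentence
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)                          # logits per FOL token

logits = Seq2SeqFOL()(torch.randint(0, 1000, (2, 7)), torch.randint(0, 200, (2, 9)))
print(logits.shape)  # torch.Size([2, 9, 200])
```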
arXiv Detail & Related papers (2020-02-16T09:22:32Z)