SKATE: A Natural Language Interface for Encoding Structured Knowledge
- URL: http://arxiv.org/abs/2010.10597v2
- Date: Fri, 11 Dec 2020 01:01:45 GMT
- Title: SKATE: A Natural Language Interface for Encoding Structured Knowledge
- Authors: Clifton McFate, Aditya Kalyanpur, Dave Ferrucci, Andrea Bradshaw,
Ariel Diertani, David Melville, Lori Moon
- Score: 3.7296147370114183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Natural Language (NL) applications, there is often a mismatch between what
the NL interface is capable of interpreting and what a lay user knows how to
express. This work describes a novel natural language interface that reduces
this mismatch by refining natural language input through successive,
automatically generated semi-structured templates. In this paper we describe
how our approach, called SKATE, uses a neural semantic parser to parse NL input
and suggest semi-structured templates, which are recursively filled to produce
fully structured interpretations. We also show how SKATE integrates with a
neural rule-generation model to interactively suggest and acquire commonsense
knowledge. We provide a preliminary coverage analysis of SKATE for the task of
story understanding, and then describe a current business use-case of the tool
in a specific domain: COVID-19 policy design.
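To make the refinement loop concrete, here is a minimal runnable sketch of the suggest-and-fill cycle the abstract describes. The keyword-triggered suggest_templates and the automatic choice of the top candidate are toy assumptions standing in for SKATE's neural semantic parser and the user's interactive selection; none of the names below come from the paper.

```python
# Toy sketch of recursive template refinement, under the assumptions above.
from dataclasses import dataclass, field
from typing import Union

@dataclass
class Template:
    predicate: str
    slots: dict = field(default_factory=dict)  # slot name -> NL text or nested Template

def suggest_templates(text: str) -> list:
    """Toy stand-in for the neural semantic parser: propose semi-structured
    templates whose slots are still unparsed NL fragments."""
    suggestions = []
    if " because " in text:
        effect, cause = text.split(" because ", 1)
        suggestions.append(Template("causes", {"cause": cause.strip(), "effect": effect.strip()}))
    if " after " in text:
        later, earlier = text.split(" after ", 1)
        suggestions.append(Template("after", {"earlier": earlier.strip(), "later": later.strip()}))
    return suggestions

def refine(text: str) -> Union[Template, str]:
    """Recursively refine NL text into a fully structured interpretation:
    pick a suggested template, then refine each slot the same way,
    bottoming out in literal strings when no template applies."""
    candidates = suggest_templates(text)
    if not candidates:
        return text
    template = candidates[0]  # in SKATE, the user chooses among ranked suggestions
    template.slots = {name: refine(value) for name, value in template.slots.items()}
    return template

print(refine("schools closed because cases rose after mask mandates ended"))
```

Running the example refines the input into a causes template whose cause slot is itself an after template, mirroring how templates are recursively filled into a fully structured interpretation.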
Related papers
- Prompt2DeModel: Declarative Neuro-Symbolic Modeling with Natural Language [18.00674366843745]
This paper presents a pipeline for crafting domain knowledge for complex neuro-symbolic models through natural language prompts.
Our proposed pipeline utilizes techniques like dynamic in-context demonstration retrieval, model refinement based on feedback from a symbolic visualization, and user interaction.
This approach empowers domain experts, even those not well-versed in ML/AI, to formally declare their knowledge to be incorporated in customized neural models.
arXiv Detail & Related papers (2024-07-30T03:10:30Z)
- SLFNet: Generating Semantic Logic Forms from Natural Language Using Semantic Probability Graphs [6.689539418123863]
Building natural language interfaces typically involves using a semantic parser to parse the user's natural language and convert it into structured Semantic Logic Forms (SLFs).
We propose a novel neural network, SLFNet, which incorporates dependency syntactic information as prior knowledge and can capture the long-range interactions between contextual information and words.
Experiments show that SLFNet achieves state-of-the-art performance on the ChineseQCI-TS and Okapi datasets, and competitive performance on the ATIS dataset.
arXiv Detail & Related papers (2024-03-29T02:42:39Z)
- Physics of Language Models: Part 1, Learning Hierarchical Language Structures [51.68385617116854]
Transformer-based language models are effective but complex, and understanding their inner workings is a significant challenge.
We introduce a family of synthetic context-free grammars (CFGs) with hierarchical rules, capable of generating lengthy sentences.
We demonstrate that generative models like GPT can accurately learn this CFG language and generate sentences based on it.
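As a rough illustration of sampling from such a grammar, the toy CFG below expands hierarchical rules recursively to produce sentences; the grammar itself is invented for this sketch and is far simpler than the paper's synthetic CFGs.

```python
# Recursive sampling from a toy CFG (invented grammar, for illustration only).
import random

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],
    "VP":  [["V", "NP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["model"], ["rule"], ["sentence"]],
    "V":   [["generates"], ["parses"]],
    "P":   [["with"], ["near"]],
}

def expand(symbol: str) -> list:
    """Recursively expand a nonterminal by sampling one of its rules;
    symbols with no rule are terminals and are emitted as-is."""
    if symbol not in GRAMMAR:
        return [symbol]
    rule = random.choice(GRAMMAR[symbol])
    return [word for part in rule for word in expand(part)]

print(" ".join(expand("S")))  # e.g. "the model parses a rule near the sentence"
```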
arXiv Detail & Related papers (2023-05-23T04:28:16Z)
- nl2spec: Interactively Translating Unstructured Natural Language to Temporal Logics with Large Language Models [3.1143846686797314]
We present nl2spec, a framework for applying Large Language Models (LLMs) to derive formal specifications from unstructured natural language.
We introduce a new methodology to detect and resolve the inherent ambiguity of system requirements in natural language: candidate formalizations are decomposed into sub-translations, mappings between subformulas and fragments of the input.
Users iteratively add, delete, and edit these sub-translations to amend erroneous formalizations, which is easier than manually redrafting the entire formalization.
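A schematic sketch of this sub-translation idea follows; the phrase-to-LTL mapping and the compose step are invented for illustration and are not nl2spec's actual interface, which uses an LLM to propose the sub-translations.

```python
# Schematic illustration only: the mapping below is hand-written, not LLM output.
requirement = "a request must always eventually be followed by a grant"

# Candidate sub-translations, one of which is faulty.
subs = {
    "always": "G",                      # 'globally'
    "a request": "req",
    "a grant": "grant",
    "eventually be followed": "X",      # faulty: 'next' instead of 'eventually'
}

def compose(subs: dict) -> str:
    """Compose the sub-translations into a candidate LTL formula."""
    return (f"{subs['always']} ({subs['a request']} -> "
            f"{subs['eventually be followed']} {subs['a grant']})")

print("NL: ", requirement)
print("LTL:", compose(subs))            # G (req -> X grant)  -- wrong operator
subs["eventually be followed"] = "F"    # repair one sub-translation only
print("LTL:", compose(subs))            # G (req -> F grant)
```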
arXiv Detail & Related papers (2023-03-08T20:08:53Z)
- Prompting Language Models for Linguistic Structure [73.11488464916668]
We present a structured prompting approach for linguistic structured prediction tasks.
We evaluate this approach on part-of-speech tagging, named entity recognition, and sentence chunking.
We find that while PLMs contain significant prior knowledge of task labels due to task leakage into the pretraining corpus, structured prompting can also retrieve linguistic structure with arbitrary labels.
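As a rough sketch of what a structured prompt for POS tagging can look like, the snippet below interleaves each word with its label in few-shot demonstrations and parses the model's continuation back into pairs; the word/TAG format and tiny tagset are illustrative assumptions, not the paper's exact prompts.

```python
# Illustrative structured-prompting sketch under the assumptions above.
def tag_demo(words, tags):
    """Render one demonstration with each word's label interleaved after it."""
    return " ".join(f"{w}/{t}" for w, t in zip(words, tags))

demos = [
    tag_demo(["the", "dog", "barks"], ["DET", "NOUN", "VERB"]),
    tag_demo(["a", "cat", "sleeps"], ["DET", "NOUN", "VERB"]),
]

def build_prompt(words) -> str:
    """Few-shot prompt: labeled demonstrations, then the unlabeled sentence;
    the LM is expected to continue in the same interleaved format."""
    return "\n\n".join(demos) + "\n\n" + " ".join(words) + "\n"

def parse_tags(continuation: str):
    """Recover (word, tag) pairs from the LM's interleaved continuation."""
    return [tuple(tok.rsplit("/", 1)) for tok in continuation.split()]

prompt = build_prompt(["the", "bird", "sings"])
# A language-model call would go here; with a hand-written stand-in continuation:
print(parse_tags("the/DET bird/NOUN sings/VERB"))
```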
arXiv Detail & Related papers (2022-11-15T01:13:39Z)
- Autoregressive Structured Prediction with Language Models [73.11519625765301]
We describe an approach to model structures as sequences of actions in an autoregressive manner with PLMs.
Our approach achieves new state-of-the-art results on all of the structured prediction tasks we evaluate.
arXiv Detail & Related papers (2022-10-26T13:27:26Z)
- Convex Polytope Modelling for Unsupervised Derivation of Semantic Structure for Data-efficient Natural Language Understanding [31.888489552069146]
A Convex-Polytopic-Model-based framework shows great potential for automatically extracting semantic patterns from a raw dialog corpus.
We show that this framework can exploit semantic-frame-related features in the corpus, reveal the underlying semantic structure of the utterances, and boost the performance of the state-of-the-art NLU model with minimal supervision.
arXiv Detail & Related papers (2022-01-25T19:12:44Z)
- Pre-training Language Model Incorporating Domain-specific Heterogeneous Knowledge into A Unified Representation [49.89831914386982]
We propose a unified pre-trained language model (PLM) for all forms of text, including unstructured text, semi-structured text, and well-structured text.
Our approach outperforms plain-text pre-training while using only 1/4 of the data.
arXiv Detail & Related papers (2021-09-02T16:05:24Z)
- Contextual Biasing of Language Models for Speech Recognition in Goal-Oriented Conversational Agents [11.193867567895353]
Goal-oriented conversational interfaces are designed to accomplish specific tasks.
We propose a new architecture that utilizes context embeddings derived from BERT on sample utterances provided at inference time.
Our experiments show a word error rate (WER) relative reduction of 7% over non-contextual utterance-level NLM rescorers on goal-oriented audio datasets.
arXiv Detail & Related papers (2021-03-18T15:38:08Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- Exploring Software Naturalness through Neural Language Models [56.1315223210742]
The Software Naturalness hypothesis argues that programming languages can be understood through the same techniques used in natural language processing.
We explore this hypothesis through the use of a pre-trained transformer-based language model to perform code analysis tasks.
arXiv Detail & Related papers (2020-06-22T21:56:14Z)