Don't Parse, Generate! A Sequence to Sequence Architecture for
Task-Oriented Semantic Parsing
- URL: http://arxiv.org/abs/2001.11458v1
- Date: Thu, 30 Jan 2020 17:11:00 GMT
- Title: Don't Parse, Generate! A Sequence to Sequence Architecture for
Task-Oriented Semantic Parsing
- Authors: Subendhu Rongali (University of Massachusetts Amherst), Luca Soldaini
(Amazon Alexa Search), Emilio Monti (Amazon Alexa), Wael Hamza (Amazon Alexa
AI)
- Abstract summary: Virtual assistants such as Amazon Alexa, Apple Siri, and Google Assistant often rely on a semantic parsing component to understand which action(s) to execute for an utterance spoken by their users.
We propose a unified architecture based on Sequence to Sequence models and a Pointer Generator Network to handle both simple and complex queries.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Virtual assistants such as Amazon Alexa, Apple Siri, and Google Assistant
often rely on a semantic parsing component to understand which action(s) to
execute for an utterance spoken by their users. Traditionally, rule-based or
statistical slot-filling systems have been used to parse "simple" queries; that
is, queries that contain a single action and can be decomposed into a set of
non-overlapping entities. More recently, shift-reduce parsers have been
proposed to process more complex utterances. These methods, while powerful,
impose specific limitations on the types of queries that can be parsed; namely,
they require a query to be representable as a parse tree.
In this work, we propose a unified architecture based on Sequence to Sequence
models and a Pointer Generator Network to handle both simple and complex
queries. Unlike other works, our approach does not impose any restriction on
the semantic parse schema. Furthermore, experiments show that it achieves
state-of-the-art performance on three publicly available datasets (ATIS, SNIPS,
Facebook TOP), improving exact match accuracy by 3.3% to 7.7% relative over
previous systems. Finally, we show the effectiveness of our approach on two
internal datasets.
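
To make the copy-vs-generate mechanism concrete, below is a minimal sketch of a single pointer-generator decoding step, assuming a seq2seq parser whose target vocabulary holds parse symbols (intents, slots, closing brackets) and whose copy actions point at positions in the source utterance. The function, symbol names, and toy numbers are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of one pointer-generator decoding step (illustrative,
# not the paper's actual code). The decoder either GENERATES a parse
# symbol from a fixed vocabulary or COPIES a source-utterance token.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_generator_step(vocab_logits, attn_scores, p_gen):
    """Mix a generation distribution over parse symbols with a copy
    distribution over source positions.

    vocab_logits : (V,) scores over the parse-symbol vocabulary
    attn_scores  : (S,) attention scores over the S source tokens
    p_gen        : scalar in (0, 1), probability of generating vs copying

    Returns a distribution of size V + S: the first V entries select a
    parse symbol, the last S entries mean "copy source token i".
    """
    gen_dist = p_gen * softmax(vocab_logits)          # generate a symbol
    copy_dist = (1.0 - p_gen) * softmax(attn_scores)  # copy a source token
    return np.concatenate([gen_dist, copy_dist])

# Toy example: 4 parse symbols, utterance of 3 tokens.
parse_symbols = ["[IN:GET_WEATHER", "[SL:LOCATION", "]", "EOS"]
utterance = ["weather", "in", "boston"]

dist = pointer_generator_step(
    vocab_logits=np.array([2.0, 0.5, 0.1, -1.0]),
    attn_scores=np.array([0.2, 0.1, 3.0]),
    p_gen=0.7,
)
best = int(dist.argmax())
if best < len(parse_symbols):
    print("generate:", parse_symbols[best])
else:
    print("copy:", utterance[best - len(parse_symbols)])
```

Because the output is just a flat token sequence that interleaves parse symbols with copy actions, any target schema that can be serialized this way is expressible, which is why this formulation is not limited to queries representable as parse trees.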