Memory-Based Semantic Parsing
- URL: http://arxiv.org/abs/2110.07358v1
- Date: Tue, 7 Sep 2021 16:15:13 GMT
- Title: Memory-Based Semantic Parsing
- Authors: Parag Jain, Mirella Lapata
- Abstract summary: We present a memory-based model for context-dependent semantic parsing.
We learn a context memory controller that manages the memory by maintaining the cumulative meaning of sequential user utterances.
- Score: 79.48882899104997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a memory-based model for context-dependent semantic parsing.
Previous approaches focus on enabling the decoder to copy or modify the parse
from the previous utterance, assuming there is a dependency between the current
and previous parses. In this work, we propose to represent contextual
information using an external memory. We learn a context memory controller that
manages the memory by maintaining the cumulative meaning of sequential user
utterances. We evaluate our approach on three semantic parsing benchmarks.
Experimental results show that our model can better process context-dependent
information and demonstrates improved performance without using task-specific
decoders.
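The abstract describes an external memory managed by a learned controller that accumulates the meaning of sequential user utterances. The paper itself is not reproduced here, so the snippet below is only a minimal sketch of that idea under assumed design choices: a fixed number of memory slots, gated slot-wise writes per utterance, and a dot-product attention read for the decoder. All names (`ContextMemoryController`, `mem_slots`, etc.) are illustrative, not taken from the paper.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextMemoryController(nn.Module):
    """Sketch of an external memory that accumulates the meaning of
    sequential utterances (illustrative; not the authors' implementation)."""

    def __init__(self, mem_slots: int = 8, dim: int = 256):
        super().__init__()
        self.mem_slots = mem_slots
        self.dim = dim
        # Gates deciding how much of each slot to overwrite for a new utterance.
        self.erase_gate = nn.Linear(2 * dim, dim)
        self.write_gate = nn.Linear(2 * dim, dim)
        self.slot_attn = nn.Linear(dim, dim)

    def init_memory(self, batch_size: int) -> torch.Tensor:
        return torch.zeros(batch_size, self.mem_slots, self.dim)

    def write(self, memory: torch.Tensor, utterance: torch.Tensor) -> torch.Tensor:
        """Update memory with an encoded utterance of shape (batch, dim)."""
        # Address slots by similarity between the utterance and each slot.
        scores = torch.einsum("bsd,bd->bs", self.slot_attn(memory), utterance)
        address = F.softmax(scores, dim=-1).unsqueeze(-1)           # (batch, slots, 1)
        u = utterance.unsqueeze(1).expand(-1, self.mem_slots, -1)   # (batch, slots, dim)
        both = torch.cat([memory, u], dim=-1)
        erase = torch.sigmoid(self.erase_gate(both))                # what to forget
        write = torch.tanh(self.write_gate(both))                   # what to add
        # Gated update preserves the cumulative meaning of earlier utterances.
        return memory * (1 - address * erase) + address * write

    def read(self, memory: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        """Attention read used by the decoder at each step (query: batch, dim)."""
        scores = torch.einsum("bsd,bd->bs", memory, query) / self.dim ** 0.5
        weights = F.softmax(scores, dim=-1)
        return torch.einsum("bs,bsd->bd", weights, memory)

# Toy usage: three dialogue turns, each encoded to a single vector.
controller = ContextMemoryController()
memory = controller.init_memory(batch_size=2)
for _ in range(3):                        # one write per user utterance
    utterance_vec = torch.randn(2, 256)
    memory = controller.write(memory, utterance_vec)
context = controller.read(memory, query=torch.randn(2, 256))
print(context.shape)  # torch.Size([2, 256])
```
The gated erase/write update is one plausible way to keep earlier context from being overwritten wholesale; the paper's actual controller and memory layout may differ.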
Related papers
- Improving Image Recognition by Retrieving from Web-Scale Image-Text Data [68.63453336523318]
We introduce an attention-based memory module, which learns the importance of each retrieved example from the memory.
Compared to existing approaches, our method removes the influence of irrelevant retrieved examples and retains those that are beneficial to the input query.
We show that it achieves state-of-the-art accuracy on the ImageNet-LT, Places-LT, and WebVision datasets.
arXiv Detail & Related papers (2023-04-11T12:12:05Z)
- LaMemo: Language Modeling with Look-Ahead Memory [50.6248714811912]
We propose Look-Ahead Memory (LaMemo) that enhances the recurrence memory by incrementally attending to the right-side tokens.
LaMemo embraces bi-directional attention and segment recurrence with an additional overhead only linearly proportional to the memory length.
Experiments on widely used language modeling benchmarks demonstrate its superiority over the baselines equipped with different types of memory.
arXiv Detail & Related papers (2022-04-15T06:11:25Z)
- Pin the Memory: Learning to Generalize Semantic Segmentation [68.367763672095]
We present a novel memory-guided domain generalization method for semantic segmentation based on a meta-learning framework.
Our method abstracts the conceptual knowledge of semantic classes into a categorical memory that remains constant across domains.
arXiv Detail & Related papers (2022-04-07T17:34:01Z)
- Memory Wrap: a Data-Efficient and Interpretable Extension to Image Classification Models [9.848884631714451]
Memory Wrap is a plug-and-play extension to any image classification model.
It improves both data efficiency and model interpretability by adopting a content-attention mechanism.
We show that Memory Wrap outperforms standard classifiers when it learns from a limited set of data.
arXiv Detail & Related papers (2021-06-01T07:24:19Z)
- Kanerva++: extending The Kanerva Machine with differentiable, locally block allocated latent memory [75.65949969000596]
Episodic and semantic memory are critical components of the human memory model.
We develop a new principled Bayesian memory allocation scheme that bridges the gap between episodic and semantic memory.
We demonstrate that this allocation scheme improves performance in memory conditional image generation.
arXiv Detail & Related papers (2021-02-20T18:40:40Z)
- Learning to Learn Variational Semantic Memory [132.39737669936125]
We introduce variational semantic memory into meta-learning to acquire long-term knowledge for few-shot learning.
The semantic memory is grown from scratch and gradually consolidated by absorbing information from tasks it experiences.
We formulate memory recall as the variational inference of a latent memory variable from addressed contents.
arXiv Detail & Related papers (2020-10-20T15:05:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.