TAPIR: Learning Adaptive Revision for Incremental Natural Language
Understanding with a Two-Pass Model
- URL: http://arxiv.org/abs/2305.10845v1
- Date: Thu, 18 May 2023 09:58:19 GMT
- Authors: Patrick Kahardipraja, Brielen Madureira, David Schlangen
- Abstract summary: Recent neural network-based approaches for incremental processing mainly use RNNs or Transformers.
A restart-incremental interface that repeatedly passes longer input prefixes can be used to obtain partial outputs, while providing the ability to revise.
We propose the Two-pass model for AdaPtIve Revision (TAPIR) and introduce a method to obtain an incremental supervision signal for learning an adaptive revision policy.
- Score: 14.846377138993645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Language is by its very nature incremental in how it is produced and
processed. This property can be exploited by NLP systems to produce fast
responses, which has been shown to be beneficial for real-time interactive
applications. Recent neural network-based approaches for incremental processing
mainly use RNNs or Transformers. RNNs are fast but monotonic (cannot correct
earlier output, which can be necessary in incremental processing).
Transformers, on the other hand, consume whole sequences, and hence are by
nature non-incremental. A restart-incremental interface that repeatedly passes
longer input prefixes can be used to obtain partial outputs, while providing
the ability to revise. However, this method becomes costly as the sentence
grows longer. In this work, we propose the Two-pass model for AdaPtIve Revision
(TAPIR) and introduce a method to obtain an incremental supervision signal for
learning an adaptive revision policy. Experimental results on sequence
labelling show that our model has better incremental performance and faster
inference speed compared to restart-incremental Transformers, while showing
little degradation on full sequences.
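To make the cost contrast described in the abstract concrete, here is a minimal Python sketch (not the paper's implementation): `label_one`, `label_all`, and `should_revise` are hypothetical stand-ins for a per-token labeller, a full-sequence labeller, and a revision policy. It only illustrates how restart-incremental decoding recomputes every prefix from scratch, whereas an adaptive-revision loop extends the output monotonically and pays for a full second pass only when the policy fires.

```python
# Sketch contrasting restart-incremental decoding with an adaptive-revision
# loop. The components are placeholders, not TAPIR's actual model or policy.
from typing import Callable, List

Label = str
Labeller = Callable[[List[str]], List[Label]]            # prefix -> labels
RevisePolicy = Callable[[List[str], List[Label]], bool]  # True -> recompute


def restart_incremental(tokens: List[str], label_all: Labeller) -> List[List[Label]]:
    """Re-run the labeller on every prefix: n full passes, so total cost
    grows quadratically with sentence length."""
    outputs = []
    for t in range(1, len(tokens) + 1):
        outputs.append(label_all(tokens[:t]))  # full recomputation each step
    return outputs


def adaptive_revision(tokens: List[str],
                      label_one: Callable[[List[str]], Label],
                      label_all: Labeller,
                      should_revise: RevisePolicy) -> List[List[Label]]:
    """Extend the output monotonically (cheap, RNN-like first pass) and only
    trigger a full recomputation when the revision policy fires."""
    outputs: List[List[Label]] = []
    current: List[Label] = []
    for t in range(1, len(tokens) + 1):
        prefix = tokens[:t]
        current = current + [label_one(prefix)]  # monotonic extension
        if should_revise(prefix, current):
            current = label_all(prefix)           # full revision (second pass)
        outputs.append(list(current))
    return outputs


if __name__ == "__main__":
    # Toy components just to make the sketch runnable.
    toy_label_all = lambda prefix: ["X"] * len(prefix)
    toy_label_one = lambda prefix: "X"
    toy_policy = lambda prefix, labels: prefix[-1] == "."  # revise at sentence end
    sent = "the old man the boats .".split()
    print(restart_incremental(sent, toy_label_all)[-1])
    print(adaptive_revision(sent, toy_label_one, toy_label_all, toy_policy)[-1])
```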