Abstract: Non-autoregressive models have boosted the efficiency of neural machine
translation through parallelized decoding, at the cost of effectiveness compared
with their autoregressive counterparts. In this paper, we claim that
the syntactic and semantic structures of natural language are critical for
non-autoregressive machine translation and can further improve its performance.
However, such structures are rarely considered by existing
non-autoregressive models. Motivated by this observation, we propose to
incorporate explicit syntactic and semantic structures of language into a
non-autoregressive Transformer for neural machine translation.
Moreover, we consider the intermediate latent alignment within target
sentences to better capture long-term token dependencies. Experimental
results on two real-world datasets (i.e., WMT14 En-De and WMT16 En-Ro) show
that our model achieves significantly faster decoding while maintaining
translation quality when compared with several state-of-the-art
non-autoregressive models.