Abstract: Conventional tokenization methods for Chinese pretrained language models
(PLMs) treat each character as an indivisible token (Devlin et al., 2019),
which ignores the characteristics of the Chinese writing system. In this work,
we comprehensively study the influence of three main factors on Chinese
tokenization for PLMs: pronunciation, glyph (i.e., shape), and word boundary.
Correspondingly, we propose three kinds of tokenizers: 1) SHUOWEN (meaning Talk
Word), pronunciation-based tokenizers; 2) JIEZI (meaning Solve Character),
glyph-based tokenizers; and 3) word-segmented tokenizers, which incorporate
Chinese word segmentation. To empirically compare the effectiveness of the studied
tokenizers, we pretrain BERT-style language models with them and evaluate the
models on various downstream NLU tasks. We find that SHUOWEN and JIEZI
tokenizers can generally outperform conventional single-character tokenizers,
while Chinese word segmentation shows no benefit as a preprocessing step.
Moreover, the proposed SHUOWEN and JIEZI tokenizers exhibit significantly
better robustness in handling noisy text. The code and pretrained models will
be publicly released to facilitate linguistically informed Chinese NLP.