Abstract: Aspect-based Sentiment Analysis (ABSA), which aims to predict the sentiment polarity of each aspect in a sentence, is a fine-grained task in the field of sentiment analysis. Previous work has shown that syntactic information, e.g., dependency trees, can effectively improve ABSA performance. Recently, pre-trained models (PTMs) have also proven effective for ABSA. A natural question therefore arises: do PTMs contain sufficient syntactic information for ABSA, such that a good ABSA model can be obtained from PTMs alone? In this paper, we first compare trees induced from PTMs with parser-provided dependency trees on several popular ABSA models, showing that the tree induced from fine-tuned RoBERTa (FT-RoBERTa) outperforms the parser-provided tree. Further analysis reveals that the FT-RoBERTa induced tree is more sentiment-word-oriented and can benefit the ABSA task. The experiments also show that a pure RoBERTa-based model can outperform or approximate previous state-of-the-art (SOTA) results on six datasets across four languages, since it implicitly incorporates task-oriented syntactic information.
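For concreteness, below is a minimal sketch of one common recipe for inducing a tree from a PTM, assuming pairwise token-to-token scores (e.g., self-attention weights averaged over heads) as the signal and extracting their maximum spanning tree. The scoring signal, the function `induce_tree`, and the toy input are illustrative assumptions, not necessarily the induction procedure used in the paper.

```python
import numpy as np

def induce_tree(scores: np.ndarray) -> list[tuple[int, int]]:
    """Extract an undirected dependency-like tree from a token-to-token
    score matrix by taking its maximum spanning tree (Prim's algorithm).

    scores[i, j] is assumed to measure how strongly the PTM relates
    token i to token j (e.g., attention averaged over heads and layers).
    """
    n = scores.shape[0]
    sym = (scores + scores.T) / 2.0   # symmetrize head/dependent directions
    in_tree = {0}                     # grow the tree from token 0
    edges: list[tuple[int, int]] = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:             # pick the highest-scoring edge that
            for j in range(n):        # connects the tree to a new token
                if j not in in_tree and (
                    best is None or sym[i, j] > sym[best[0], best[1]]
                ):
                    best = (i, j)
        edges.append(best)
        in_tree.add(best[1])
    return edges

# Toy usage: a random 5x5 matrix standing in for real PTM scores.
rng = np.random.default_rng(0)
print(induce_tree(rng.random((5, 5))))
```

The spanning tree keeps exactly n-1 of the strongest pairwise links, which offers one intuition for how fine-tuning could reshape the induced tree toward sentiment words, as the abstract reports for FT-RoBERTa.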