Abstract: Manual evaluation is essential to judge progress on automatic text
summarization. However, we conduct a survey of recent summarization system
papers that reveals little agreement on how to perform such evaluation studies.
We conduct two evaluation experiments on two aspects of summaries' linguistic
quality (coherence and repetitiveness) to compare Likert-type and ranking
annotations and show that the best choice of evaluation method can vary from one
aspect to another. In our survey, we also find that study parameters such as
the overall number of annotators and the assignment of annotators to annotation
items are often not fully reported and that subsequent statistical analysis
ignores grouping factors arising from one annotator judging multiple summaries.
Using our evaluation experiments, we show that the total number of annotators
can have a strong impact on study power and that current statistical analysis
methods can inflate type I error rates up to eight-fold. In addition, we
highlight that, for the purpose of system comparison, the current practice of
eliciting multiple judgements per summary leads to less powerful and less
reliable annotations given a fixed study budget.
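
The grouping issue can be made concrete with a small sketch: instead of pooling all judgements and testing across summaries as if they were independent, a mixed-effects model can include the annotator as a random effect, so repeated judgements from the same annotator are modelled as correlated. The sketch below is not the paper's analysis code; it uses statsmodels, and the column names and toy scores are illustrative assumptions.

```python
# Minimal sketch of a system comparison that respects the grouping structure:
# each annotator contributes multiple judgements, so "annotator" enters as a
# random intercept rather than being ignored. Column names ("annotator",
# "system", "score") and the toy data are assumptions for illustration only.
import pandas as pd
import statsmodels.formula.api as smf


def compare_systems(df: pd.DataFrame):
    """Fit score ~ system with a random intercept per annotator and return the fit."""
    model = smf.mixedlm("score ~ system", data=df, groups=df["annotator"])
    return model.fit()


if __name__ == "__main__":
    # Toy data: three annotators, each judging two summaries per system.
    df = pd.DataFrame({
        "annotator": ["a1"] * 4 + ["a2"] * 4 + ["a3"] * 4,
        "system":    ["A", "A", "B", "B"] * 3,
        "score":     [4, 5, 3, 3, 5, 4, 4, 3, 3, 4, 2, 3],
    })
    result = compare_systems(df)
    # The fixed effect for "system" estimates the between-system difference
    # while the annotator random effect absorbs per-annotator offsets.
    print(result.summary())
```
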