Abstract: Vision-and-Language Navigation wayfinding agents can be enhanced by
exploiting automatically generated navigation instructions. However, existing
instruction generators have not been comprehensively evaluated, and the
automatic evaluation metrics used to develop them have not been validated.
Using human wayfinders, we show that these generators perform on par with or
only slightly better than a template-based generator and far worse than human
instructors. Furthermore, we discover that BLEU, ROUGE, METEOR and CIDEr are
ineffective for evaluating grounded navigation instructions. To improve
instruction evaluation, we propose an instruction-trajectory compatibility
model that operates without reference instructions. Our model shows the highest
correlation with human wayfinding outcomes when scoring individual
instructions. For ranking instruction generation systems, if reference
instructions are available, we recommend using SPICE.