Abstract: Current pre-trained language models hold a great deal of knowledge, but a more
limited ability to use that knowledge. Bloom's Taxonomy helps educators teach
children how to use knowledge by categorizing comprehension skills, so we use
it to analyze and improve the comprehension skills of large pre-trained
language models. Our experiments focus on zero-shot question answering, using
the taxonomy to provide proximal context that is relevant to the question and
thereby helps the model answer it. We show that targeting context in this manner
improves performance across four popular common sense question answering datasets.
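As a concrete illustration of this zero-shot setup, the sketch below scores multiple-choice answers by language-model likelihood, optionally prepending a context sentence to the question. It is a minimal sketch only: the model ("gpt2"), the scoring scheme, and the example taxonomy-guided context are assumptions for illustration, not the paper's exact prompts or evaluation protocol.

    # Minimal sketch of context-augmented zero-shot multiple-choice QA,
    # assuming a generic HuggingFace causal LM (not the paper's exact setup).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "gpt2"  # stand-in; the paper evaluates larger pre-trained LMs
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    model.eval()

    def answer_log_likelihood(prompt: str, answer: str) -> float:
        """Sum of token log-probabilities of `answer` given `prompt`."""
        prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
        full_ids = tokenizer(prompt + " " + answer, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(full_ids).logits
        # Next-token log-probs; position i predicts the token at i + 1.
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        answer_positions = range(prompt_ids.shape[1] - 1, full_ids.shape[1] - 1)
        return sum(log_probs[i, full_ids[0, i + 1]].item() for i in answer_positions)

    def zero_shot_answer(question: str, choices: list, context: str = "") -> str:
        """Pick the choice the LM finds most likely, optionally with added context."""
        prompt = (context + " " if context else "") + question
        return max(choices, key=lambda c: answer_log_likelihood(prompt, c))

    # Hypothetical taxonomy-guided context: an "understand"-level restatement
    # relevant to the question, prepended before scoring the answer choices.
    question = "Where would you put uncooked crab meat?"
    choices = ["wharf", "red lobster", "tidepools", "boat", "stew pot"]
    context = "Uncooked crab meat is an ingredient that is kept fresh until it is cooked."
    print(zero_shot_answer(question, choices, context))

In this sketch, "targeting context" simply means prepending a question-relevant sentence to the prompt; comparing accuracy with and without the context argument gives the kind of with/without comparison the abstract describes.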