Validity of Reading Comprehension Exams

The content of a reading comprehension exam is not necessarily tied to any particular curriculum. Passages are provided, followed by questions intended to assess a student's understanding of the material. The exam is essentially testing a student's reading skills. Unfortunately, performance on such an exam is not independent of the student's experiences, background knowledge, and interests.

For exams written by teachers themselves, it is only proper that the tests reflect the topics covered in the classroom. There is nothing wrong with this; one should in fact teach to the test, since the test covers the content students are studying. "Teaching to the test" sounds awful to many, and it is — if the test is a reading comprehension exam. Topics in reading comprehension tests can span science, history, classical and contemporary literature, social studies, music, and the arts — the very school subjects that build reading skills. Teaching for the purpose of a reading comprehension exam therefore steals time from what students should be learning. Placing great emphasis on standardized reading comprehension tests takes over the classroom, sacrificing science, literature, history, arts, music, social studies, and even physical education. This is one reason why standardized tests that include reading comprehension must not be high stakes: they must not carry serious consequences for either student or teacher. Otherwise, learning in the classroom is compromised.

Maureen Downey wrote an insightful article, "Testing season revs up: March madness leads to April angst" on her blog Get Schooled. She showed the following photograph:

[Photo from "Testing season revs up: March madness leads to April angst"]

Her thoughts were:
This particular Georgia Performance Standard aligns with research in English Language Arts. Students demonstrate more sophisticated comprehension of text, more motivation to read, and a broader and deeper knowledge of content when they use prior knowledge (memories) and personal experiences to make sense of the text and relate it to new information. 
But here comes the contradiction. During March Madness, signs like this are posted on school walls. 
In other words, students as young as 8 years old are taught to be perceptive, connection-making readers the first part of the year. Then those same students are told not to use the very skills and practices their teachers have taught them “good readers use” so they can pass a test.
E.D. Hirsch, Jr. likewise points out that, according to research, reading comprehension exams are not valid measures of teaching. In his blog article "The Test of the Common Core" in the Huffington Post, Hirsch writes:
The scholarly proponents of the value-added approach have sent me a set of technical studies. My analysis of them showed what anyone immersed in reading research would have predicted: The value-added data are modestly stable for math, but are fuzzy and unreliable for reading. It cannot be otherwise, because of the underlying realities. Math tests are based on the school curriculum. What a teacher does in the math classroom affects student test scores. But reading-comprehension tests are not based on the school curriculum. (How could they be if there's no set curriculum?) Rather, they are based on the general knowledge that students have gained over their life span from all sources -- most of them outside the school. That's why reading tests in the early grades are so reliably and unfairly correlated with parental education and income.

Since the results on reading-comprehension tests are not chiefly based on what a teacher has done in a single school year, why would any sensible person try to judge teacher effectiveness by changes in reading comprehension scores in a single year? The whole project is unfair to teachers, ill-conceived, and educationally disastrous. The teacher-rating scheme has usurped huge amounts of teaching time in anxious test-prep. Paradoxically, the evidence shows that test-prep ceases to be effective after about six lessons. So most of that test-prep time is wasted even as test prep. It's time in which teachers could be calmly pursuing real education -- teaching students fascinating subjects in literature, history, civics, science and the arts, the general knowledge that is the true foundation of improved reading comprehension.
The validity of reading comprehension exams is likewise questioned in China. An article published in the journal Asian Social Science (Vol. 6, No. 12, December 2010, pp. 192-194), "On Reading Tests and Its Validity," by Chao Chen of the Foreign Languages School, Qingdao University of Science and Technology, offers seven suggestions on how to write better reading comprehension tests:

  1. Keep specifications constantly in mind and try to select as representative a sample as possible. Do not repeatedly select texts of a particular kind simply because they are readily available.
  2. In order to get acceptable reliability, include as many passages as possible in a test, thereby giving candidates a good number of fresh starts. Considerations of practicality will inevitably impose constraints on this, especially where scanning or skimming is to be tested.
  3. In order to test scanning, look for passages which contain plenty of discrete pieces of information.
  4. Choose texts which will interest candidates but which will not overexcite or disturb them.
  5. Avoid texts made up of information that may be part of candidates’ general knowledge. It may be difficult not to write items to which correct responses are available to some candidates without reading the passage.
  6. Assuming that it is only reading ability that is being tested, do not choose texts that are too culturally laden.
  7. Do not use texts that students have already read (or even close approximations to them).

Reading each of the above suggestions carefully leads me to one conclusion: writing a valid reading comprehension exam is perhaps an exercise in futility. We therefore should not use such exams as a basis for decisions on school closings, teachers' bonuses, and student retention.