What Do Exams Really Tell Us
I took the Graduate Record Examination (GRE) in Chemistry in December of 1986. I did exceptionally well, scoring in the 99th percentile. Did that mean I was among the top students entering graduate school in chemistry in the US that year? I probably was, in terms of knowing what was on the specific GRE chemistry test that I took. About two decades later, I served for six years on the Educational Testing Service's GRE Chemistry Committee. During this time, I was part of a committee of eight people tasked with writing, reviewing, and approving GRE chemistry exams. Being on the other side of the fence allowed me to see what these exams can and cannot tell us. A standardized test can point to deficiencies, but it certainly cannot be used to rank good, better, and best.
A report of my GRE score in Chemistry
Standardized tests are useful because they allow for assessing students who come from different schools and different countries. These tests, like the GRE chemistry exam that I took, can still provide some type of ranking. The score I got, 940, placed me in the top 1% of test takers. Standardized tests are not easy to compose. The chemistry test, for example, contains equating items (special questions that allow a direct comparison between different versions of the test). Thus, the 99th percentile ranking that I got took into account not only the students who took the specific version I did, but those who took all versions of the GRE chemistry test. That meant I was among the top 1% not only of those who took the test in 1986, but of all who had taken the GRE chemistry exam. I have to make clear, though, that this ranking strictly applies only to the GRE chemistry exam and not to how much chemistry I really know or can do.
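To give a feel for how equating items tie different test forms together, here is a minimal sketch of linear (mean-sigma) equating, one standard textbook technique. The function name and the anchor-score data below are invented for illustration; this is not ETS's actual procedure.

```python
import statistics

def mean_sigma_equate(anchor_old, anchor_new, raw_score_new):
    """Map a raw score on a new test form onto the old form's scale,
    using examinee scores on the shared anchor (equating) items.
    All data here are hypothetical."""
    mu_old, sd_old = statistics.mean(anchor_old), statistics.stdev(anchor_old)
    mu_new, sd_new = statistics.mean(anchor_new), statistics.stdev(anchor_new)
    # Linear (mean-sigma) link: choose the line that matches the
    # anchor-item mean and standard deviation across the two groups.
    slope = sd_old / sd_new
    intercept = mu_old - slope * mu_new
    return slope * raw_score_new + intercept

# Hypothetical anchor-item scores from two administrations.
old_group = [12, 15, 14, 10, 18, 16, 13, 11]
new_group = [10, 13, 12, 9, 16, 14, 11, 10]
print(round(mean_sigma_equate(old_group, new_group, raw_score_new=14), 1))
```

The procedures actually used in operational testing are more sophisticated, but the idea is the same: the anchor items tie every form to a common scale, which is what makes percentile ranks comparable across versions and years.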
Exams are limited in the information they can provide. Even tests tailored to a specific class cannot possibly cover everything that was taught. There is usually a time limit, even for exams written by an instructor for a class. This time is exceedingly short compared to the time spent on instruction, so the exam obviously cannot be truly comprehensive. Standardized exams are even less comprehensive. Thus, although deficiency can be easily gauged, there is great uncertainty in measuring proficiency.
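One way to see this uncertainty: if we treat the exam items as a sample from everything taught (an idealization), a student's score only estimates their true proficiency, with a sizable margin of error. A rough sketch using the usual normal approximation; the function and the numbers are my own illustration, not part of any testing program:

```python
import math

def proficiency_interval(correct, total, z=1.96):
    """Approximate 95% confidence interval for a student's 'true'
    proportion correct, given performance on a finite exam.
    Treats items as a random sample from the domain taught."""
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)   # standard error of a proportion
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Scoring 32 out of 40 leaves wide uncertainty: roughly (0.68, 0.92).
print(proficiency_interval(32, 40))
```

A forty-item exam, by itself, cannot confidently distinguish a student whose true proficiency is 70% from one at 80%; it can, however, flag a student who answers almost nothing correctly.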
In a previous post of a similar title, "What Does an Exam Tell Us", specific questions given in a standardized exam (TIMSS math) were examined. Such a study goes deeper into what an exam tells us, since it actually looks at the questions. It is very specific. It shows exactly which questions students are or are not able to answer correctly. Specific information is important, but when one is looking for general trends, it is easy to drown in the details. Thus, general trends are useful as long as we keep in mind what the limitations are.
In "Factors Affecting Learning Outcomes", a correlation between scores in standardized exams (NAEP) and the educational level of parents is highlighted. Such valuable information is made possible only by using a standardized exam. And the information obtained is valid. Students whose parents have higher educational attainment are scoring higher in the NAEP exam. No one can really argue against that. It is simply a restatement of the test results.
The National Center for Education Statistics (NCES) has taken the analysis of standardized exams a step further. In order to compare the US against other countries, a link is made between the US NAEP exams and the international TIMSS. Relating one test to another is not an easy task, so the NCES had to do the following:
Above copied from US States in a Global Context
Only the top-performing Asian nations fare better than Massachusetts, for example. The above, in my opinion, provides useful information. The link between NAEP and TIMSS is also not far-fetched since the two exams are actually similar.
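For readers curious about what such a link involves, here is a highly simplified sketch: fit a linear relationship between the two score scales using jurisdictions assessed on both, then project. This is a toy illustration with invented numbers, not the NCES methodology, which was considerably more careful:

```python
import statistics

def fit_link(naep, timss):
    """Fit a linear mapping from the NAEP scale to the TIMSS scale,
    using average scores of jurisdictions that took both assessments.
    Ordinary least-squares fit; data below are invented."""
    mx, my = statistics.mean(naep), statistics.mean(timss)
    sxy = sum((x - mx) * (y - my) for x, y in zip(naep, timss))
    sxx = sum((x - mx) ** 2 for x in naep)
    slope = sxy / sxx
    intercept = my - slope * mx
    return lambda naep_score: slope * naep_score + intercept

# Hypothetical state-level averages for states that took both tests.
naep_avgs  = [278, 284, 290, 295, 300]
timss_avgs = [505, 515, 528, 537, 546]
project = fit_link(naep_avgs, timss_avgs)
print(round(project(292)))  # projected TIMSS score for a NAEP average of 292
```

The actual NAEP-TIMSS linking study had to contend with sampling and measurement error, but a projection of this general kind is what makes the state-by-state international comparison above possible.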
Recently, to prove the point that US schools are really worse than those in other countries, researchers from Harvard published a report entitled "U.S. Students from Educated Families Lag in International Tests". In their conclusion, the authors wrote:
"Lacking good information, it has been easy even for sophisticated Americans to be seduced by apologists who would have the public believe the problems are simply those of poor kids in central city schools. Our results point in quite the opposite direction. We find that the international rankings of the United States and the individual states are not much different for students from advantaged backgrounds than for those from disadvantaged ones. Although a higher proportion of U.S. students from better-educated families are proficient, that is equally true for similarly situated students in other countries. Compared to their counterparts abroad, however, U.S. students from advantaged homes lag severely behind.
As long as the focus remains on distinctions within the United States, then the comfortable can remain comforted by the distance between suburbia and the inner city. But once the focus shifts to countries abroad and fair, apples-to-apples comparisons are made, it becomes manifest that nearly all of our young people—from privileged and not-so-privileged backgrounds—are not faring well.
Some say that we must cure poverty before we can address the achievement problems in our schools. Others say that our schools are generally doing fine, except for the schools serving the poor. Bringing an international perspective correctly to bear on the issue dispels both thoughts.
The United States has two achievement gaps to be bridged—the one between the advantaged and the disadvantaged and the one between itself and its peers abroad. Neither goal need be sacrificed to attain the other."
Whether the above methodology is valid or not is unfortunately irrelevant. The comparison is invalid for one important reason: PISA is really different from both TIMSS and NAEP. One simply has to look at the following data from Finland, a top performer in PISA math 2003 (second only to Hong Kong), to see that PISA math tests something different:
Above copied from The Teaching of Mathematics, 2009, Vol. XII, 2, pp. 51–56
PISA is not aligned with the mathematics curriculum in the United States. One should therefore not use PISA as a measure of proficiency in math among high school students in the US.
The gap between rich and poor in education in the US and in other countries is real. It is the most important observation. It is unfortunate that some research continues to cloud the issue by offering a misguided analysis of standardized test scores.
By the way, the Khan Academy does teach why the interior angles of a triangle sum to 180 degrees. You can view the lesson here.
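For reference, the standard Euclidean argument (the one such lessons typically present) fits in a few lines: draw a line through one vertex parallel to the opposite side, then use alternate interior angles.

```latex
% Triangle ABC. Draw line DE through vertex A, parallel to side BC.
% By alternate interior angles (parallel postulate):
%   \angle DAB = \angle B  and  \angle EAC = \angle C.
% The three angles at A along line DE form a straight angle, so
\[
\angle DAB + \angle BAC + \angle EAC = 180^{\circ}
\;\Longrightarrow\;
\angle A + \angle B + \angle C = 180^{\circ}.
\]
```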