Adaptive Tests
Writing an exam for a class requires careful thought. First, we would like the test to measure learning outcomes precisely. A student's score, after all, can easily be influenced by how the student feels about the exam. Both motivation and engagement in an assessment matter if we are to properly gauge learning outcomes. Of course, in my class at Georgetown, most students take exams very seriously. In one exam I recently gave, a student even realized that the questions were building on one another. It was a series of questions about how to arrange atoms in an octahedral arrangement.
First, I asked in how many ways we can arrange two "three of a kind" sets in an octahedral fashion. There are two:
[Figure: Two ways of arranging two "three of a kind" (three shaded and three unshaded) in an octahedral geometry]
[Figure: Two ways of arranging three symmetric, paired and linked objects in an octahedral geometry]
[Figure: Four ways of arranging three paired, linked, but asymmetric objects in an octahedral geometry]
Even if you cannot follow the chemistry, it may be inferred that the first question probes whether a student knows how to draw geometric isomers, the second focuses on enantiomers, and the last combines both. How a student responds to these questions can tell a teacher what the student currently understands. The set of questions is complete in testing this concept: it is not repetitive, and it is presented in the right order.
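The answer to the first question can even be checked mechanically. The following sketch (with my own vertex labeling and rotation generators, not anything from the exam itself) brute-forces the count: it builds the 24 proper rotations of an octahedron and counts how many ways three of the six positions can be shaded, up to rotation.

```python
from itertools import combinations

# Label the six octahedral positions 0..5 (0:+x, 1:-x, 2:+y, 3:-y,
# 4:+z, 5:-z -- an arbitrary labeling chosen for this sketch) and
# build the rotation group from two 90-degree generators.
rot_z = (2, 3, 1, 0, 4, 5)  # 90 deg about z: +x->+y, +y->-x, -x->-y, -y->+x
rot_x = (0, 1, 4, 5, 3, 2)  # 90 deg about x: +y->+z, +z->-y, -y->-z, -z->+y

def compose(p, q):
    """Apply permutation q first, then p."""
    return tuple(p[q[i]] for i in range(6))

group = {tuple(range(6))}
frontier = [tuple(range(6))]
while frontier:
    g = frontier.pop()
    for gen in (rot_z, rot_x):
        h = compose(gen, g)
        if h not in group:
            group.add(h)
            frontier.append(h)
assert len(group) == 24  # the proper rotations of the octahedron

# Count distinct ways to shade three of the six positions, treating
# colorings related by a rotation as the same arrangement.
distinct = set()
for shaded in combinations(range(6), 3):
    canonical = min(tuple(sorted(p[v] for v in shaded)) for p in group)
    distinct.add(canonical)
print(len(distinct))  # 2: the facial and meridional arrangements
```

The two arrangements it finds correspond to the facial and meridional isomers in the first figure.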
One thing this set of questions does not do, however, is adjust to the student's responses. For example, if a student cannot correctly answer either of the first two questions, there is really no point in asking the third. This cannot easily be done with pen-and-paper exams, but it is straightforward in a computer-based exam, in which a student's progress is monitored and the flow of questions is tailored to how the student is doing. This is called computer-adaptive testing.
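The branching logic described above can be sketched in a few lines. The question identifiers and the skip rule here are my own illustration, not any particular testing system's API: the combined question is posed only if both prerequisite questions were answered correctly.

```python
def next_question(results):
    """Pick the next question given results so far.

    results maps a question id to True (correct) or False (incorrect).
    Returns the id of the next question to pose, or None when done.
    """
    if "geometric_isomers" not in results:
        return "geometric_isomers"
    if "enantiomers" not in results:
        return "enantiomers"
    # Only ask the combined question when both prerequisites were met;
    # otherwise there is no point in posing it.
    if results["geometric_isomers"] and results["enantiomers"]:
        if "combined" not in results:
            return "combined"
    return None  # end of the adaptive sequence

print(next_question({}))  # -> geometric_isomers
print(next_question({"geometric_isomers": True, "enantiomers": False}))  # -> None
```

A real computer-adaptive test generalizes this idea: rather than a fixed skip rule, it re-estimates the student's ability after each response and selects the next question accordingly.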
Recent research has shown that computer-adaptive testing comes with significant benefits. In a paper scheduled to be published in the Journal of Educational Psychology, Martin and Lazendic report:
These findings (a) confirm that computer-adaptive testing yields greater achievement measurement precision, (b) suggest some positive test-relevant motivation and engagement effects from computer-adaptive testing, (c) counter claims that computer-adaptive testing reduces students’ test-relevant motivation, engagement, and subjective experience, and (d) suggest positive computer-adaptive testing effects for older students at a developmental stage when they are typically less motivated and engaged.

Students in public schools in the state of Virginia are currently preparing for their Standards of Learning (SOL) exams. It is a good thing that some of the SOL exams are now computer-adaptive. When students are motivated and engaged, these assessments can become more meaningful.