Matt Townsley posed a question to me and blogger Chris Liebig on Twitter last weekend: What are your thoughts about state assessments that are norm referenced versus criterion referenced?
My first thought: wow, that’s too much question to respond to 140 characters at a time.
My next thought: there’s a really good question in there, the answer to which ought to be central to every conversation we have around state (accountability) assessments but won’t be. Because the answer to that question depends upon the purpose of the state assessment (how’s that for a foreseeable lawyerly answer?). Why are we administering them? What question are we trying to answer?
When we know what the question is, we will know whether norm referenced or criterion referenced is the better choice.
It seems to me, though, that we have a very real problem with a while-we’re-at-it mentality when it comes to state assessments. If we are going to be testing for student proficiency, we might as well get a growth measure while we are at it. And results we can use to evaluate teachers. And results we can use to inform instruction. And results we can use to identify gifted students. And results we can use to determine if students are ready for college, without remediation. And results that tell us how our students are doing compared to students in other schools, states, and countries. And the tests shouldn’t just measure, but should help students learn. And so on. And the longer the assessments, the more it seems to make sense to do all of these things while we are at it.
However, it is my understanding that assessments should be designed and used for a single purpose. And if that single purpose is to determine proficiency, I think a criterion referenced assessment would make sense.
This is all assuming, of course, that the standards make sense in terms of reasonable grade level expectations, which is, really, an enormous conversation in and of itself: what is grade level, and how much of it do you have to be able to do to be proficient?
I have no idea what the specific answers to those questions are, by the way (and I like to think that I have been paying attention), which may be why it seems that, more than knowing that our students are proficient as measured against the standards, we (a general we, not necessarily me or Matt) want to know that our students are performing at a higher level than students in other schools, other states, and/or other countries. If we aren’t sure what grade level proficient means (or that the bar is set high enough), we can at least take comfort in our students ranking higher than others–at least as long as our students aren’t the ones ranked at the bottom, of course. Hence all the anxiety about global competitiveness and the desire to take the same assessments as other states so that we can directly compare scores. In which case, a proficiency cut score on a norm referenced assessment might make more sense.
That being said, someone needs to do a better job limiting the use of state assessment scores to the purpose for which the state assessments have been designed.