Being accountable to any test result is being accountable to the wrong thing. Right now, the most important test in the world is the one for the coronavirus. The information it provides is immensely useful, and yet to treat that information as more than information about the presence or absence of the virus is a mistake.
Neither outcome tells us anything about a person’s overall health. Neither outcome signals anything about what has happened or what will happen. And both outcomes come with a caveat: there is a small possibility of the result being wrong, of suggesting you have the virus when you don’t, or that you don’t when you do. To treat either outcome as more than it is, absent context, details, and a whole lot of additional information, renders any next step unreliable, likely unhelpful, or even harmful.
All tests suffer from this limitation. It is a consequence of trying to squeeze as much precision as possible out of a single result, and the necessary price we pay for needing to do so. More accurate results give us confidence that the contexts, details, and any applicable information can be more expertly applied. But really, all any result does is move us a step or two away from chaos. It does not, as is so commonly and wrongly presumed, put us a step or two away from certainty. And while that is still far better than having no information at all, it is no more than one piece of a much larger puzzle.
What would be terrible for all of us is a lockstep approach that failed to consider context, that applied a generic solution to a result, or that refused to consider the unique conditions of an individual. Medicine would be reduced to a simple decision tree, and we would be infinitely worse off than we are now. It would be like thinking we’re through with a puzzle after the first two pieces come together.
Educational testing based on a specific methodology—the variety used in state testing programs, or the norm-referenced tests sold commercially, such as the Iowa Tests of Basic Skills or NWEA’s MAP—is now guilty of encouraging that exact sort of behavior. These too are tests that produce a narrow result, one that moves us a step or two from chaos but no further. The results are nothing more than points on a continuum (some of which will be wrong), based on a moment in time, lacking context, cause, or professional interpretation. Yet to sell more product or to prop up bad educational policy, the declaration gets made that the results are more than they are: that they can directly inform teaching and learning, indicate quality or effectiveness, and replace professionalism.
This is as false and misleading and harmful as thinking that a diagnosis equates to a solution. All test results require interpretation through the broader technical lens of a professional equipped with the full context of the individual’s situation and current best practices. And they require the ability to question that lens, to recognize it as always incomplete and open to improvement. Only then is the professional capable of determining an optimal path forward for that student or patient while at the same time being responsible for making that path better the next time.
I used to be kinder to the test publishing world—especially when I was in it, when it was paying my bills, and when I still believed we were capable of staying within the limitations of what a test is. But the field has strayed far too far from its original usefulness of putting tools in the hands of researchers and has become something else altogether.
We would never tolerate straying so far from what a thing is in the tools that will help us through the pandemic because the consequences would be unthinkable. We shouldn’t tolerate it in the education of our nation’s children for the exact same reason.