DESE Confirms SBAC Test Not Ready
A recent memo from DESE confirmed that the computer-adaptive SBAC test is not ready for implementation this year. Instead, local districts will be giving a “fixed form” computer test, meaning that every child will be answering the same multiple-choice questions. It does not mean that this test will be standardized to other states. This is not even the test that was used to set SBAC cut scores for proficiency, further muddying the waters as to what the scores will mean. The memo is an acknowledgement by DESE that districts do not have the technology and bandwidth to give the full SBAC test that failed so miserably in piloting last year.
There has been some confusion about what test the state is giving this spring. In the progressive tradition of “change the name to confuse those who don’t like it,” some districts have been telling parents that they are giving the MAP test this year, not SBAC.
We should be clear: SBAC is not a type of test. It does not refer to computer delivery or even to the computer-adaptive aspect of the test. It refers to the test items and the standards they are aligned to. The DESE memo is clear on this point: “The fixed form assessments will align with Missouri Learning Standards [which currently are Common Core]. Test items will be Smarter Balanced items and will be designed according to the Smarter Balanced test blueprint.” So we will be giving the SBAC test this spring, just not the version with all the bells and whistles.
The memo goes on to reassure districts that Missouri remains a part of the Smarter Balanced Assessment Consortium. That’s too bad, because that is a shrinking pool. According to EdWeek, more than half the country is now administering tests other than the two created by the consortia (PARCC and SBAC). Only 18 states remain in the SBAC pool. Governor Scott Walker of Wisconsin is the latest to ask his legislature to withdraw from SBAC.
One commenter on the article, who says he is a former testing director for Kentucky and Idaho, claimed that parents he talked to want to know how their child’s school is doing compared to children in other states. That is the excuse given by governors like Huckabee to justify this whole system in the first place. If that is in any way Missouri’s excuse for staying in SBAC, those parents will have a limited number of states to compare to. And of course we have no idea what question this test developer asked parents or what exactly their answers were. We have only his anecdotal assurance that this is what parents want.
If we are going to consider anecdotal evidence, here is what I find. Parents don’t care how some district in another state did on a standardized test. Not really. They are not given that information anyway. They only see what percentage of kids did better or worse than their kid nationally, within the state, and maybe compared to other students in their school district. What they are looking for is some assurance that their kid is normal. What they would like is some outside concurrence that their kid is exceptional, which all parents think their kids are. When parents brag about how young their child was when he first walked, or read, or recited a poem, most older parents smile politely, letting the younger parents have their fantasy that these are signs of great things to come. The older parents know that early markers like those mean little in the long run because so much can change in those first 16 years. How much more meaningful is a single snapshot of a child’s ability to answer a standardized test at 8 years old? Not much. Young parents want to believe that these tests tell them their children are exceptional, and that is the problem with the use of these tests.
What does your school say the purpose of these tests is? What do the NGA and CCSSO say the purpose is? What does the USDoED say the purpose is? It is to give the district information about its curricular choices and the quality of its teachers. Student scores are used to judge districts’ AYP. The tests aren’t really designed to tell you how brilliant your child is. And if the tests were simply a measure of the district’s choices, then children from the same family, taught by the same teachers, should score almost the same on them. This is not the case, so how can the test be only a measure of teacher quality or curricular choices?
W. James Popham, a respected testing expert, said in an interview with Wendy Lecker that “scale scores have essentially no diagnostic value whatever to teachers, parents or students.” In an NPR interview he warned about the danger of focusing so heavily on standardized tests:
To me, one of the most frightening things about the preoccupation of raising test scores is the message it sends to children about what’s important in school. Rather than trying to make the classroom a learning environment where exciting new things are required, the classroom becomes a drill factory, where relentless pressure, practice on test items, may raise test scores — but may end up having children hate school.
The tests sample only a small subset of the standards, use questions designed to trick 50% of students into answering incorrectly, and seek to differentiate students, not indicate how much they have learned. According to Popham, “Some studies suggest that fully 75 percent of what is on a test is not even supposed to be covered in a particular school.” For the tests to be used as designed, parents should only ever see comparisons between their district and other similar districts. We know the tests are a far better measure of poverty than of teaching quality, so parents should see how students from similar socio-economic districts are doing. That would be a meaningful comparison.
That is, unfortunately, not how the tests or scores are used. They compare students and districts from all societal strata. By design they differentiate students, and then we are supposed to get outraged when everyone doesn’t achieve the same results. We are told that our district is failing, that our teachers are no good. The obscene expectation, which Congress seems determined to hold onto in the new ESEA, is that some day, if we spend enough money and tighten the screws down on our teachers enough, a test designed to differentiate student or district performance will show uniform performance. It is insanity. Yet a number of companies will have a never-ending pool of customers willing to try anything to achieve this result.
I can’t help but comment on the uselessness of this information given the current structure of education design and delivery. Even if you were to discover that your district is not doing well, show me where parents have any meaningful say in teacher selection and retention, curriculum, or standards. The three-minute school board meeting rule has pretty much destroyed that. The MSBA training of school board members has pretty much destroyed that. Having a number that grades a system you have no chance of changing is not going to make parents happy. It will make them angry and frustrated, and they will take that out on their only available target: the school administration. To the extent that their local school board aligns with the administration, it will be another target. The only lever left to them is to vote down funding for a system that appears inadequate and that they have no ability to change.
The National Governors Association started this mess by complaining that governors had no way to compare their state’s education system to another’s. If every state uses a single set of standards with a standardized test, all they have achieved is comparability. If the standards or the test are crappy, they will only be able to judge who is doing best with a crappy set of tools. We are going to spend millions pursuing this rather useless goal.