Here is a recent piece that Gerald Bracey did for NASSP:
OH, THOSE NAEP ACHIEVEMENT LEVELS
Gerald W. Bracey
The 2005 NAEP results will arrive shortly, and more tongues will cluck about them this time than in the past. That’s because some reformers have made the NAEP achievement levels (basic, proficient, and advanced) more prominent by calling for them to be used to validate state achievement results reported for NCLB. Such use would be a disaster. The NAEP achievement levels are “fundamentally flawed,” to use the words of the National Academy of Sciences (NAS).
In fact, everyone who has studied the NAEP achievement levels has said, essentially, “These things are no damn good.” Those who have studied them include the NAS, the General Accounting Office, the Center for Research on Evaluation, Standards, and Student Testing, and the National Academy of Education. Even the NAEP reports themselves contain a disclaimer quoting from the NAS study: “NAEP’s current achievement level setting procedures remain fundamentally flawed. The judgment tasks are difficult and confusing; raters’ judgments of different item types are internally inconsistent; appropriate validity evidence for the cut scores is lacking; and the process has produced unreasonable results.”
Fundamentally flawed? Judgments inconsistent? Validity evidence lacking? Can you imagine the howls of outrage that would greet ETS or CTB/McGraw-Hill if they dared bring to market an instrument with such basic failures?
So why are we still using the achievement levels? The official story from the U.S. Department of Education is that “a proven alternative to the current process has not yet been identified.” That was written in 1998. One would think that a Department as obsessed with applying “scientifically based research” as the current one would have screamed in horror at the flawed achievement levels and rushed to fix them.
The truth is, though, neither the Department nor anyone else is trying to develop a “proven alternative.” Indeed, many observers believe that the NAEP achievement levels, created by the National Assessment Governing Board under its then-president Chester Finn, were deliberately set too high in order to sustain the sense of crisis created by 1983’s “A Nation at Risk.” There is no rush to develop new achievement level setting procedures because much political hay can be made by alleging that American students are performing poorly.
Here’s what the NAS meant by “unreasonable results”: the NAEP achievement level results do not accord with any other performance evaluations, especially results from international comparisons. For example, in the 1996 NAEP science assessment, only 30% of American 4th graders scored proficient or better. In that same year, though, the Third International Mathematics and Science Study found American 4th graders third in science among the 26 participating nations. Discrepancies of this kind between NAEP achievement levels and international comparisons appear consistently.
The 2005 results will get more attention than usual because of No Child Left Behind (NCLB). Currently, under NCLB, each state defines “proficient” in its own way. To some, this creates a Babel of incomparable results. We need a common yardstick, they say, that will let us compare states. NCLB made state-level NAEP mandatory, and NAEP ever so conveniently has an achievement level named “proficient.”
Rod Paige said he would use the NAEP achievement levels to “shame” states into doing better. Others, such as the American Enterprise Institute’s Frederick Hess and Finn, have proposed that NAEP be the NCLB test used to evaluate schools in reading and math. And Hess and Paul Peterson of Harvard have developed a procedure to grade each state on the discrepancy between the percentage proficient on the state test and the percentage proficient on NAEP.
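To make the Peterson-Hess arithmetic concrete, here is a minimal sketch in Python. The letter-grade cutoffs below are hypothetical placeholders, and the use of an absolute gap is my assumption; the article reports only that grades track the gap between the state test and NAEP, not the actual thresholds Peterson and Hess use.

```python
# A sketch of a Peterson-Hess-style grading scheme, as described in
# the article: a state's grade tracks the gap between the share of
# students its own test calls proficient and the share NAEP calls
# proficient. The cutoffs and the absolute-value gap are illustrative
# assumptions, not the published method.

def discrepancy_grade(state_pct: float, naep_pct: float) -> str:
    """Return a letter grade from the state-vs-NAEP proficiency gap,
    measured in percentage points."""
    gap = abs(state_pct - naep_pct)
    if gap <= 10:
        return "A"
    if gap <= 20:
        return "B"
    if gap <= 30:
        return "C"
    if gap <= 45:
        return "D"
    return "F"

# Texas, per the article: 87% proficient on the state reading test
# versus 33% on NAEP, a 54-point gap.
print(discrepancy_grade(87, 33))  # -> F
```

Note what a scheme like this rewards: a state that is equally low on both measures gets a small gap and a high grade, which is exactly the weakness described next.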
Few states do well by the Peterson-Hess measure. Only five get A’s and only two get B’s. The scale doesn’t strike me as particularly sophisticated or accurate. For instance, South Carolina gets an A because there’s little difference between the state test and NAEP: the state is low on both. Connecticut, on the other hand, gets a C-, yet Connecticut has the nation’s highest proportion of students proficient on NAEP reading. The “Texas Miracle,” meanwhile, disappears: Texas gets an F because it claims that 87 percent of its 4th graders are proficient in reading while NAEP says only 33 percent are.
But you can rest assured that when the NAEP results appear, school critics and reporters alike will point to the NAEP-state discrepancies and imply that the states are lying about how well their kids are doing. In some quarters, it will be argued that the discrepancies mean we need vouchers and more charter schools.